Compare commits


100 commits
v1.2.0 ... main

Author SHA1 Message Date
Tan N. Le
db10285368
Update CHANGELOG.md 2022-10-12 21:47:07 -07:00
shivrsrivastava
8454a6ebf3
Adding priority to the task (#140) 2022-10-12 21:46:07 -07:00
Tan N. Le
c318042e96
release 1.24.0 (#139) 2021-11-09 09:00:35 -08:00
Tan N. Le
db9bebb802
enable default sla for slaDrain (#138) 2021-11-01 18:17:49 -07:00
Renán I. Del Valle
fff2c16751
Enabling code analysis 2021-09-01 10:09:11 -07:00
Renán I. Del Valle
c59d01ab51
Changes Travis CI badge to Github Actions badge (#137) 2021-08-06 18:16:06 -07:00
Renán I. Del Valle
62df98a3c8
Bug fix for auto paused update monitor (#136)
Returns success if the update has finished updating successfully.
2021-08-06 16:02:52 -07:00
Renán I. Del Valle
5c39a23eb2
Enable Github Actions for PRs
Run CI on pull requests and when the branch is pushed.
2021-08-06 15:52:46 -07:00
Renán I. Del Valle
dbc396b0db
Disables Travis CI (#135)
Travis CI is no longer needed as we have migrated to Github Actions
2021-08-06 10:58:34 -07:00
Renán I. Del Valle
86eb045808
Adds go.sum and removes dep files (#134) 2021-08-06 10:57:45 -07:00
Renán I. Del Valle
c7e309f421
Actions fix (#133)
* Moving main.yml to the right place.
2021-08-06 10:06:12 -07:00
Renán I. Del Valle
49877b7d41
Adds support for running CI on github actions. (#132) 2021-08-06 10:00:57 -07:00
Renán I. Del Valle
82b40a53f0
Add verification to retry mechanism (#131)
CreateJob, CreateService, and StartJobUpdate now include a rudimentary verification function to check if the call made it to the Aurora Scheduler when the client experiences a timeout.
2021-05-11 13:37:23 -07:00
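A minimal sketch of the verify-on-timeout idea from #131. The wrapper below is an illustration, not the library's built-in verification; it assumes the v1 `Realis` interface and the post-1.21 `GetInstanceIds` signature quoted in the CHANGELOG further down.

```go
package example

import (
	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// createJobVerified is a hypothetical wrapper: if CreateJob times out, it
// checks whether the job nevertheless reached the scheduler before failing.
func createJobVerified(client realis.Realis, job realis.Job) error {
	_, err := client.CreateJob(job)
	if err == nil || !realis.IsTimeout(err) {
		return err
	}
	// The request may have landed despite the timeout; look for active
	// instances of the job before giving up.
	ids, qerr := client.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
	if qerr == nil && len(ids) > 0 {
		return nil
	}
	return err
}
```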
Renán I. Del Valle
a9d99067ee
Documentation fix (#130)
Fixes documentation so that it is more compliant with godoc format.
2021-04-29 10:48:43 -07:00
Renán Del Valle
e7f9c0cba9 Bumping up version to 1.23.1 2021-02-25 17:59:02 -08:00
Renán Del Valle
fbaf218dfb Preparing release notes for 1.23.0 2021-02-25 17:59:02 -08:00
Renán I. Del Valle
6a1cf25c87
Upgrading Mesos to 1.7.2 and Aurora Scheduler to 0.23.0 (#128) 2021-02-25 17:55:42 -08:00
Renán Del Valle
4aaec87d32 Bumping up version to 1.23.0 2021-02-25 17:34:48 -08:00
Renán Del Valle
895d810b6c Releasing Version 1.22.5 2021-02-25 17:34:48 -08:00
Renán I. Del Valle
511e76a0ac
Upgrading to Thrift 0.14.0 (#126)
Upgrading thrift to 0.14.0 in order to pick up bug fixes, including the fix for trying to write to closed connections.
2021-02-25 16:37:46 -08:00
Renan DelValle
8be66da88a
Bumping up go version to 1.15 and removing v2 tests from Travis CI config file. 2020-09-28 11:13:29 -07:00
Renan DelValle
6d20f347f7
Update latest version gorealis has been tested against. 2020-09-28 10:32:36 -07:00
Renan DelValle
74b12c36b1
Adding version to README and changing badge to point to the right place. 2020-09-28 10:32:13 -07:00
Renan DelValle
269e0208c1
Change travis CI configuration to use main branch. 2020-09-28 10:30:48 -07:00
Suchith Arodi
4acb0d54a9
return response object in case of noop update (#125)
Return the response object so that the end user can look into what went wrong instead of receiving a nil response.

Cases like this occur when there is a no-op update.
2020-07-29 13:30:56 -07:00
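A hedged sketch of what this change enables, using the v1 call shape; the logging is illustrative.

```go
package example

import (
	"log"

	realis "github.com/paypal/gorealis"
)

// inspectNoopUpdate shows the new behavior: the response is returned even
// when the update fails to start, e.g. because it is a no-op.
func inspectNoopUpdate(client realis.Realis, update *realis.UpdateJob) {
	resp, err := client.StartJobUpdate(update, "")
	if err != nil && resp != nil {
		log.Printf("update not started: %v (details: %+v)", err, resp.GetDetails())
	}
}
```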
Renán I. Del Valle
5f667555dc
Bumping up CI build to go 1.14 (#124)
Bumping up Travis CI build to go 1.14 as well as increasing the timeout for go tests.
2020-05-27 16:04:07 -07:00
Renán I. Del Valle
2b6025e67d
Checking previously ignored error which caused issues. (#123)
Error in monitor was going unchecked which caused some issues when the monitor timed out.
2020-05-27 12:45:53 -07:00
Renán I. Del Valle
5ec22fab98
Restoring location of r.Close() in retry mechanism since the move created a deadlock. (#122)
Moving the r.Close() call in the retry mechanism created a deadlock since r.Close() also uses the client lock to avoid multiple routines closing at the same time.

This commit reverts that change.
2020-05-27 12:36:52 -07:00
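Not gorealis code — just a minimal illustration of the deadlock class described above: a non-reentrant mutex taken twice on the same goroutine.

```go
package example

import "sync"

type client struct {
	mu sync.Mutex
}

// Close uses the client lock to guard against concurrent closes.
func (c *client) Close() {
	c.mu.Lock()
	defer c.mu.Unlock()
	// ... tear down the transport ...
}

// retry shows the bug shape: it already holds mu when it calls Close, and
// sync.Mutex is not reentrant, so the second Lock blocks forever.
func (c *client) retry() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.Close() // deadlock
}
```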
Renan DelValle
f196aa9ed7 Fixing some cosmetic issues and a potential race condition. 2020-05-26 20:40:09 -07:00
Renan DelValle
bb5408f5e2 Bumping up Thrift Version to v0.13.2 forked as v0.13.1 contains a bug. 2020-05-26 20:40:09 -07:00
Renán I. Del Valle
ea8e48f3b8
Allow users to define what extensions CA certs will have (#120)
* Allow users to define what extensions CA certs will have. Skip any files that don't have the right extension.
2020-02-26 08:24:41 -08:00
Renán I. Del Valle
3dc3b09a8e
Point to temporary Thrift fork while we wait for 0.14.0 to be released (#118)
* Updating readme to reflect changes made to the Aurora Scheduler project.

* Changing the go.mod dependency to point to a forked version of the Thrift library until 0.14.0 is released.
2020-02-18 14:18:13 -08:00
Renan I. Del Valle
3fa2a20fe4
Thrift Upgrade to v0.13.0 (#117)
* Removing go.sum file as it's no longer required as of go1.13.

* Removing unnecessary client command.

* Bumping up thrift version to v0.13.0
2020-02-12 12:31:56 -08:00
Renan I. Del Valle
c6a2a23ddb
Changing how constraints are handled internally (#115)
* Updating Changelog to reflect what's changing in 1.22.1

* Bug fix: Setting the same constraint multiple times is no longer allowed.

* Constraints map has been added to handle constraints being added to Aurora Jobs.

* Lowering timeout to avoid flaky test for bad payload timeout.

* Adding attributes to Mesos agents in order to test limits by constraint.



* Make two instances schedulable per zone in order to experience flaky behavior.
2020-01-15 08:21:12 -08:00
Renan I. Del Valle
9da3b96b1f Moving future to final 0.22.0 release and Mesos 1.6.2 (#114)
Changes in compose testing setup:
* Upgrading Aurora to 0.22.0
* Upgrading Mesos to 1.6.2
2020-01-14 15:50:10 -08:00
Renan I. Del Valle
976dc26dcc Adding autopause APIs to future (#110)
* Updating thrift definitions to add autopause for batch based update strategies.

* Adding batch calculator utility and test cases for it.

* Adding PauseUpdateMonitor which allows users to poll Aurora for information on an active Update being carried out until it enters the ROLL_FORWARD_PAUSED state.

* Tests for PauseUpdateMonitor and VariableBatchStep added to the end to end tests.

* Adding TerminalUpdateStates function which returns a slice containing all terminal states for an update. Changed signature of JobUpdateStatus from using a map for desired states to a slice. A map is no longer necessary with the new version of thrift and only adds complexity.
2020-01-14 15:50:10 -08:00
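A sketch of waiting for the auto-pause, using the `JobUpdateStatus` monitor signature quoted in the CHANGELOG further down; the `Monitor` wrapper and the interval/timeout values are assumptions.

```go
package example

import (
	"time"

	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// waitForAutoPause blocks until the update reaches ROLL_FORWARD_PAUSED
// or the timeout expires.
func waitForAutoPause(client realis.Realis, key aurora.JobUpdateKey) (aurora.JobUpdateStatus, error) {
	monitor := &realis.Monitor{Client: client}
	return monitor.JobUpdateStatus(
		key,
		[]aurora.JobUpdateStatus{aurora.JobUpdateStatus_ROLL_FORWARD_PAUSED},
		5*time.Second,  // polling interval (illustrative)
		10*time.Minute, // overall timeout (illustrative)
	)
}
```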
Renan DelValle
fe692040aa Variable Batch Update Support (#100)
* Changing generateBinding.sh check to check for thrift 0.12.0 and adding support for Variable Batch updates.

* Adding update strategies change to changelog, changed docker-compose to point to aurora 0.22.0 snapshot. Added test coverage for update strategies.
2020-01-14 15:50:10 -08:00
Renan DelValle
0b2dd44d94 Increasing aurora version for future branch. 2020-01-14 15:50:10 -08:00
Renan DelValle
df8fc2fba1
Documentation and linting improvements (#108)
* Simplifying documentation for getting started: Removed outdated information about installing Golang on different platforms and instead included a link to the official Golang website, which has more up-to-date information. Instructions for installing docker-compose have also been added.

* Added documentation to all exported functions and structs.

* Unexported some structures and functions that were needlessly exported.

* Adding golang CI default configuration which can be useful while developing and may be turned on later in the CI.

* Moving build process in CI to xenial.

* Reducing line size in some files and fixing shadowing in some test cases.
2019-06-12 11:22:59 -07:00
Renan DelValle
6dc4bf93b9
Retry temporary errors by default (#107)
* Adding Aurora URL validator in order to handle scenarios where incomplete information is passed to the client. The client will do its best to guess the missing information such as protocol and port.

* Upgraded to testify 1.3.0.

* Added configuration to fail on a non-temporary error. This reverts to the original behavior of the retry mechanism, while allowing the user to opt to fail on a non-temporary error.
2019-06-11 11:47:14 -07:00
Renan DelValle
4ffb509939
Adding go mod files to v1 (#106)
* Declaring dependencies using go mod.
2019-05-06 11:33:14 -07:00
Renan DelValle
1a15c4a5aa
V1 CreateService and StartJobUpdate Timeout signal and cleanup (#105)
* Bumped up version to 1.21.1

* Moving admin functions to a new file. They are still part of the same pointer receiver type.

* Removing dead code and fixing some comments to add a space between the slashes and the comment.

* Adding set up and tear down to run tests script. It sets up a pod, runs all tests, and then tears down the pod.

* Added `--rm` to run tests Mac script.

* Removing cookie jar from transport layer as it's not needed.

* Changing all error messages to start with a lower case letter. Changing some messages around to be more descriptive.

* Adding an argument to allow the retry mechanism to stop if a timeout has been encountered. This is useful for mutating API calls. Only StartJobUpdate and CreateService have this enabled by default.

* Added 2 tests for when a call goes through despite the client timing out. One is with a good payload, one is with a bad payload.

* Updating changelog with information about the error type returned.

* Adding test for duplicate metadata.

* Refactored JobUpdateStatus monitor to use a new monitor called JobUpdateQuery. The update monitor will now still continue if it does not find an update to monitor. Furthermore, it has been optimized to reduce payloads returned from the scheduler as much as possible, by using the GetJobUpdateSummaries API instead of JobUpdateDetails and by including the statuses we're searching for as part of the query.


* Added documentation as to how to handle a timeout on an API request.

* Optimized GetInstanceIds to create a copy of the JobKey being passed down in order to avoid unexpected behavior. Instead of setting every variable separately, a JobKey array is now created.
2019-05-05 11:46:22 -07:00
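The timeout semantics described above are the ones later documented in the CHANGELOG (`realis.IsTimeout`); a hedged sketch of how a caller might react:

```go
package example

import (
	"fmt"

	realis "github.com/paypal/gorealis"
)

// startUpdateOnce avoids blind retries: a timed-out StartJobUpdate may
// still have been accepted by the scheduler, and retrying could collide
// with the first attempt.
func startUpdateOnce(client realis.Realis, update *realis.UpdateJob) error {
	_, err := client.StartJobUpdate(update, "")
	if realis.IsTimeout(err) {
		return fmt.Errorf("timed out; verify update state before retrying: %w", err)
	}
	return err
}
```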
Renan DelValle
e16e390afe
1.21.0 (formerly 1.4.0) release 2019-03-15 15:15:37 -07:00
Renan DelValle
f7bd7cc20f
Bug fix for metadata duplicates as well as un-initialized GPU re… (#103)
* Fix for metadata duplicates as well.
* Fix for un-initialized GPU resource when creating a new job update.
2019-03-15 15:10:31 -07:00
Renan DelValle
c997b90720
Adding future branch to testing. 2019-03-15 12:17:43 -07:00
Renan DelValle
773d842b03
Adding missing GPU to Job interface. 2019-03-05 11:43:50 -08:00
Renan DelValle
1f459dd56a
Adds support for Tier and SlaPolicy to the Job interface (#99)
* Adding a parameter for Aurora so that we're able to run SLA-aware updates with fewer than 20 instances. Lowered the time it takes to run tests by reducing the watch time per instance as well.

* Reducing the number of instances and time for SLA aware instances in docker-compose set up.

* Adding another Mesos agent to the docker-compose setup.

* Huge thanks to @zircote for this contribution.
2019-02-20 16:36:50 -08:00
Renan DelValle
79fa7ba16d
Upgrading gorealis v1 to Thrift 0.12.0 code generation. End to end tests cleanup (#96)
* Ported all code from Thrift 0.9.3 to Thrift 0.12.0 while backporting some fixes from gorealis v2

* Removing git.apache.org dependency from Vendor folder as this dependency has migrated to github.

* Adding github.com thrift dependency back but now it points to github.com

* Removing unnecessary files from Thrift Vendor folder and adding them to .gitignore.

* Updating dep dependencies to include Thrift 0.12.0 from github.com

* Adding changelog.

* End to end tests: Adding coverage for KillInstances.

* End to end tests: Deleting instances after partition policy recovers them.

* End to end tests: Adding more coverage to the realis API.

* End to end tests: Allowing arguments to be passed to runTestMac so that '-run <test name>' can be passed in.

* End to end tests: Reducing the resources used by the CreateJob test.

* End to end tests: Adding coverage for Pause and Resume update.

* End to end tests: Removed checks for Aurora_OK response as that should always be handled by the error returned by the API. Changed names to be less verbose and repetitive.

* End to end tests: Reducing watch time for instance running when creating a service, to reduce the time it takes to run the end to end tests.
2019-02-20 11:11:46 -08:00
Renan DelValle
2b7eb3a852
Making abort job synchronous (#95)
* Making abort job synchronous to avoid scenarios where kill is received before job update lock is released.
* Adding missing cases for terminal update statuses to the JobUpdate monitor.
* Monitors now return errors which provide context through behavior.
* Adding notes to the doc explaining what happens when AbortJob times out.
2019-01-15 14:55:59 -08:00
Renan DelValle
10c620de7b
Fixing logger not unrolling variadic argument when appending to the front of it. 2019-01-11 12:20:01 -08:00
Renan DelValle
1d3854aa5f
Trace level for logger (#94)
* Add trace level to print out response thrift objects. Allows user to control whether these are printed or not to avoid pollution.

* Using named parameters to be more explicit about what is being set for LevelLogger.

* Adding TracePrint and TracePrintln. Inlined library level prefixes.
2019-01-10 16:58:59 -08:00
Renan DelValle
73e7ab2671
Releasing version 1.3.1 2019-01-08 15:57:19 -08:00
Renan DelValle
22b1d82d88
Bug fix for logger interface. Variadic arguments need to be unrolled when passed to print functions. 2019-01-08 15:37:25 -08:00
Renan DelValle
2f7015571c
Adding support for setting GPU as a resource. (#93)
* Adding support for setting GPU as a resource.
* Refactoring pulse update test.
2019-01-08 15:11:52 -08:00
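A sketch of the new resource setter on the fluent Job builder; the surrounding setters follow the v1 builder style, and the values are illustrative.

```go
package example

import realis "github.com/paypal/gorealis"

// gpuJob builds a job that requests one GPU alongside the usual resources.
func gpuJob() realis.Job {
	return realis.NewJob().
		Environment("prod").
		Role("vagrant").
		Name("gpu_job").
		CPU(1.0).
		RAM(128).
		Disk(256).
		GPU(1). // the setter added by #93
		IsService(false).
		InstanceCount(1)
}
```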
Robert Allen
296af622d1 This adds the PartitionPolicy configuration to the Job interface (#91)
* Adding Partition Policy API
2018-12-20 14:38:06 -08:00
Renan DelValle
9a835631b2
Running goimports on the whole repository to conform to the newest goimports. 2018-12-19 15:33:35 -08:00
Renan DelValle
b100158080
Updating Travis CI config file to include running CI on master-v2.0 branch 2018-12-19 15:30:22 -08:00
Renan DelValle
45a4416830
Adding .gitattributes to ignore generated files. 2018-12-03 16:09:46 -08:00
Renan DelValle
2eaa60f681
Support Drain SLA API (#88)
* Bringing thrift API up to date with Aurora 0.21.0.

* Adding support for SLA Drain Host API.
2018-11-16 11:41:09 -08:00
Renan DelValle
a09a18ea3b
Stop retrying if we find a permanent url error. (#85)
* Detecting if the transport error was not temporary, in which case we stop retrying. Fixed a bug where results were fetched before we checked for an error.

* Adding exception for EOF error. All EOF errors will be retried.

* Addressing race conditions that may happen when client is closed or connection is re-established.

* Adding documentation about how this particular implementation of the realis client uses retries in scenarios where a temporary error is found.
2018-11-01 17:00:03 -07:00
Renan DelValle
6762c1784b
Bug fix: get quota and set quota would not retry if an error was hit. (#84) 2018-10-29 14:56:24 -07:00
Renan DelValle
fa5133c13d
Test coverage improvement (#83)
* Adding tests for getPendingReasons and startMaintenance.

* Added tests for ThriftBinary and ThriftJSON.

* Adding test for NOOP Logger.
2018-10-28 19:16:44 -07:00
JC Martin
5de913493c Add Start Maintenance and Get Pending Reason (#82)
* Add startMaintenance

* Add getPendingReason
2018-10-26 11:38:03 -07:00
Renan DelValle
2306d6180f
Adding force Implicit and force Explicit recon to gorealis. (#81) 2018-10-22 16:43:35 -07:00
Renan DelValle
231793df71
Adding a separate function to add dedicated attributes. (#80)
Dedicated wrapper for "dedicated" constraints
2018-10-11 09:43:35 -07:00
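An assumed call shape for the dedicated-constraint helper from #80 — the method name and arguments are inferred from the commit, not confirmed API.

```go
package example

import realis "github.com/paypal/gorealis"

// dedicatedJob pins a job to a dedicated machine group without building
// the "dedicated" constraint by hand.
func dedicatedJob() realis.Job {
	return realis.NewJob().
		Role("vagrant").
		Name("dedicated_job").
		AddDedicatedConstraint("vagrant", "dedicated_group") // assumed signature
}
```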
Renan DelValle
e0f33ab60e
Bumping up the version number advertised by gorealis to the scheduler. 2018-10-05 08:09:30 -07:00
Renan DelValle
9dcb7a8969
Moving the Codecov badge to right beside the Travis CI badge. 2018-10-05 08:09:05 -07:00
Renan DelValle
4395c2ae1a
Code coverage (#79)
* Turning on code coverage from Codecov.
2018-10-05 07:57:19 -07:00
Renan DelValle
70252ffacf Updating Aurora compatibility in anticipation of next release. 2018-10-04 18:46:27 -07:00
Renan DelValle
4963bbb922 Sharing layers in docker compose between agent and master. 2018-10-04 18:46:27 -07:00
Renan DelValle
149d03988c
Sample Client cleanup, misc cleanup (#74)
* Changing print + os.exit to log.Fatal. Leaving a TODO to move documentation to interface.
2018-10-04 11:28:32 -07:00
Renan DelValle
037c636d6d
Retry switch fallthrough fix and create multiple tests (#77)
* Bug fix: switch statements were missing a fallthrough statement, making them retry non-retriable errors. Using a list to catch cases now.

* Adding tests for CreateService, createService when the executor doesn't exist, and createJob when the executor doesn't exist. Renamed Pulse test to reflect that it's using CreateService instead of CreateJob.

* Responses now propagate back up to the caller for context in CreateJob, CreateService, and StartJobUpdate.

* Deleting PR template as Travis CI takes care of running tests and formatting tests now.
2018-10-04 10:47:08 -07:00
Renan DelValle
9ebf118e71
Create job behaviour no longer overrides the default batch size. (#75) 2018-09-25 16:37:17 -07:00
Renan DelValle
e85781e6d4
Upgrade Aurora to 0.21.0 and Mesos to 1.5.1 for compose setup. 2018-09-14 16:38:05 -07:00
Renan DelValle
5099d7e6ec
Adding force snapshot and force backup APIs (#73)
* Adding force snapshot and force backup APIs.
2018-09-14 15:04:16 -07:00
Renan DelValle
0f2ece10ac
Ignoring vendor folder when checking for goimports failure. 2018-09-13 17:22:04 -07:00
Renan DelValle
ad0da8c867
Adding goimports check. From here on in, any PR that doesn't pass goimports will fail the CI build. 2018-09-13 17:14:38 -07:00
Renan DelValle
48318e026c
Fixing issues caught by goimports before adding goimports check to CI. 2018-09-13 17:02:15 -07:00
Renan DelValle
98d2fa2dd7
Forking Thrift Go library to use 0.10.0 with THRIFT-4215 and THRIFT-4219 on top of it in hopes of fixing a stray nil buffer error. (#72)
This should fix #65
2018-08-21 08:20:41 -07:00
Renan DelValle
1c2b1c5079
Continuous integration through Travis CI (#71)
* Adding Travis CI badge

* Modifying end to end tests to reflect testing against docker-compose setup in Travis CI.

* Adding bash script to run simple container with tests within bridge network for Mac.

* Adding documentation for setting up a developer environment.

* Decreasing amount of CPU needed for CreateJobWithPulse because a higher value causes Travis CI to hang.
2018-08-13 20:09:25 -07:00
PRADYUMNA KAUSHIK
0e4a0d726b Fix JSON client example and Update documentation. (#67)
* Updated the JSON client to be consistent with the library.
The JSON client requires two JSON files:
1. Job JSON -- contains the job description.
2. Config JSON -- contains configuration information such as username,
	password, schedulerUrl, zookeeper cluster configuration, etc.

* Job json using docker-compose executor.

Used https://github.com/paypal/dce-go/blob/develop/examples/client.go#L50
to create a json file for a job that uses the docker-compose executor.
The current job json file (examples/job.json) uses an outdated version
of docker-compose executor. Once examples/client.go has been modified
to use examples/job_dce.json, it should be okay to get rid of
examples/job.json.

* Run thermos jobs using json client.

Added an extra field to JobJson, ExecutorDataFile, that holds
the path to the json file representing the executor configuration
data.
Added a new example job json file (examples/job_thermos.json) that
is to be passed to the json client along with the config file to
run a thermos job.

* Using scheduler URL instead of leader from zk.

The endpoints returned by ZKEndpoints(...) are not reachable
from outside the vagrant box. Hence, we use the scheduler URL
directly.

* Added docs for using dce-go and json client.

* Place json client docs in separate subsection.

* Config now embeds realis.Cluster to be backwards compatible with
the python client cluster.json file.
Changed the type of Transport to string to stay flexible if new
transport types come up. JSON is used as the default transport option
if no transport is provided with the config.
2018-07-13 11:14:11 +02:00
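A hedged sketch of the config shape described above; the embedded realis.Cluster is stated in the commit, while the remaining field names are inferred for illustration.

```go
package example

import realis "github.com/paypal/gorealis"

// Config mirrors the JSON config file the JSON client consumes.
type Config struct {
	realis.Cluster // embedded for python-client cluster.json compatibility

	Username         string `json:"username"`
	Password         string `json:"password"`
	SchedulerUrl     string `json:"schedulerUrl"`
	Transport        string `json:"transport"`        // defaults to JSON when empty
	ExecutorDataFile string `json:"executorDataFile"` // executor configuration data
}
```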
Ezequiel Torres Feyuk
fe567ee966 Task query optional parameters (#69)
* Change TaskQuery struct parameters to optional

* Thrift API is modified to make all the parameters in the
  TaskQuery struct optional

* Autogenerated code is regenerated

* Changes in TaskQuery structs used in the project

* Now that TaskQuery receives optional values, pointers
  instead of values must be passed to the struct
2018-06-28 11:48:28 -07:00
Renan DelValle
6c8ab10b64 Merge develop branch into master (#68)
* Fixing possible race condition when passing backoff around as a pointer.

* Adding a debug logger that is turned off by default.
Info logger is enabled by default but prints out less information.

* Removing OK Aurora acknowledgment.

* Making Mutex a pointer so that there's no chance it can accidentally be copied.

* Changing %v to %+v for composite structs. Removing a repetitive statement for the Aurora return code.

* Removing another superfluous debug statement.

* Removing a leftover helper function from before we changed how we configured the client.

* Changing the logging paradigm to only require a single logger. All logging will be disabled by default. If debug is enabled and a logger has not been set, the library will default to printing all logging (INFO and DEBUG) to stdout.

* Minor changes to demonstrate how a logger can be used in conjunction with debug mode.

* Removing port override as it is not needed

* Changing code comments to reflect getting rid of port override.

* Adding port override back in.

* Bug fix: Logger was being set to NOOP despite no logger being provided when debug mode is turned on.

* Turn on logging by default.

* Removing option to override schema and ports for information found on Zookeeper.

* Turning off debug mode for tests because it's too verbose. Making sure LevelLogger is initialized correctly under all scenarios.

* Removing override fields for zk config.

* Remove space.

* Removing info that is now incorrect about zk options.
2018-06-22 12:57:21 -07:00
Renan DelValle
8ca953f925
Bug fix: using AND in place of OR for SSL flags. (#64)
* Bug fix: using AND in place of OR for SSL flags.

* Separating CA certificate path and client key and cert addition to options.
2018-05-29 12:46:16 -07:00
kkrishna
800efccb31
Merge pull request #63 from paypal/addSSLToExample
Add ssl to example client, misc doc fixes
2018-05-23 11:43:58 -07:00
Renan DelValle
5d12029227
Update PR template to hide away instructions on submission. 2018-05-22 17:00:30 -07:00
Renan DelValle
4f6a5e9741
Adding SSL flags to sample client. 2018-05-22 16:56:42 -07:00
Renan DelValle
e6b204b9da
Removing unnecessary space. 2018-05-13 18:34:34 -07:00
Renan DelValle
d03a7b61e4
Removing napping from the TODO list as go's native http libraries are good enough. 2018-05-13 18:32:38 -07:00
Renan DelValle
4f5766b443
Misc. bug fixes and addition of debug logging (#61)
* Fixing possible race condition when passing backoff around as a pointer.

* Adding a debug logger that is turned off by default. If debug is turned on, but a logger has not been assigned, a default logger that will print to STDOUT will be created.

* Making Mutex a pointer so that there's no chance it can accidentally be copied.

* Removing a leftover helper function from before we changed how we configured the client.

* Minor changes to demonstrate how a logger can be used in conjunction with debug mode in the sample client.
2018-04-13 11:03:29 -07:00
Robert Allen
c0d2969976 Adding Admin Client calls GetQuota & SetQuota (#59)
* Adding Admin Client calls `GetQuota` & `SetQuota`

This change set adds admin client calls to fetch and
mutate the OwnerRole quota[cpu,ram,disk].
2018-03-07 16:24:27 -08:00
kkrishna
66809c55f7
Merge pull request #58 from paypal/zkPolish
Zookeeper functions and retry functions cleanup
2018-03-05 12:09:18 -08:00
Renan DelValle
acc54c1015
Adding logging when there is a client error. 2018-03-05 11:20:39 -08:00
Renan DelValle
0bb23cec71
Adding unit tests for Zookeeper related functions to prevent regressions. 2018-03-03 14:13:47 -08:00
Renan DelValle
3d62df1684
* Errors have been refactored.
* ZK retries have been cleaned up. We will now retry after every error
EXCEPT when we have a badly formed path.
* ZK library has been reworked with optional arguments pattern to not be
so intertwined with the cluster.json file.
* Timeout error has been re-implemented as RetryError. RetryError
behaves like a Timeout error but is used exclusively to add more context
privately. This allows us to have unit tests that check our retry
mechanism is actually retrying.
* Additional logging has been added to retry mechanisms as well as to
the Zookeeper library we use.
2018-03-03 14:08:04 -08:00
Sivaram Mothiki
dc327bebad change config for certs path (#57)
Bug fix: changing the hardcoded certificate to the one provided by the configuration object.
2018-03-02 15:21:45 -08:00
Renan DelValle
a43dc81ea8
Simplifying retry mechanism for Thrift Calls (#56)
* Deleting permanent error as it doesn't make sense. Just return a plain old error and that will be considered permanent.

* Removing double closure as it's unmaintainable and can be error prone. Separated back-offs into a generic one and a thrift-call-specific one.

* The ZK leader finder now returns a temporary error instead of constantly reporting no leader found and quitting. It could be that the leader info is still being propagated, so it's worth trying again.

* Adding more logging to the retry.

* Wrapping lock and unlock in an anonymous function so that we can use defer on unlock such that it is called in the case of a panic.
2018-02-15 15:16:39 -08:00
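The lock-wrapping trick from the last bullet, illustrated (not the library's actual code):

```go
package example

import "sync"

// callWithLock wraps the critical section in an anonymous function so the
// deferred Unlock runs even if the thrift call panics mid-flight.
func callWithLock(mu *sync.Mutex, thriftCall func() error) (err error) {
	func() {
		mu.Lock()
		defer mu.Unlock() // runs on normal return and during panic unwind
		err = thriftCall()
	}()
	return err
}
```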
Renan DelValle
64948c3712
Backoff mechanism fix (#54)
* Fixing logic that can lead to nil error being returned and retry stopping early.

* Fixing possible code path that may lead to an incorrect nil error.
2018-02-06 12:44:27 -08:00
kkrishna
a6b077d1fd Aurora jobupdate functionality -- pause/resume/pulse api (#55)
* Adding GetJobs api

* Adding Aurora pause/resume/pulse api
2018-02-06 12:39:02 -08:00
kkrishna
8bd3957247 GetJobs api (#53)
* GetJobs API added
2018-01-27 10:33:55 -08:00
2208 changed files with 42924 additions and 413385 deletions


@ -1 +1 @@
0.19.0
0.23.0

.gitattributes vendored Normal file

@ -0,0 +1,3 @@
gen-go/ linguist-generated=true
vendor/ linguist-generated=true
Gopkg.lock linguist-generated=true


@ -1,13 +0,0 @@
-----------------------------------------
## Please read instructions below ##
Before submitting, please make sure you run a vagrant box running Aurora with the latest version shown in .auroraversion and run go test from the project root.
To run an Aurora Vagrant image, follow the instructions here:
http://aurora.apache.org/documentation/latest/getting-started/vagrant/
* Have you run goformat on the project before submitting?
* Have you run go test on the project before submitting? Do all tests pass?
* Does the Pull Request require a test to be added to the end to end tests? If so, has it been added?

.github/main.yml vendored Normal file

@ -0,0 +1,25 @@
name: CI
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Go for use with actions
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Install goimports
run: go get golang.org/x/tools/cmd/goimports
- name: Set env with list of directories in repo containing go code
run: echo GO_USR_DIRS=$(go list -f {{.Dir}} ./... | grep -E -v "/gen-go/|/vendor/") >> $GITHUB_ENV
- name: Run goimports check
run: test -z "`for d in $GO_USR_DIRS; do goimports -d $d/*.go | tee /dev/stderr; done`"
- name: Create aurora/mesos docker cluster
run: docker-compose up -d
- name: Run tests
run: go test -timeout 35m -race -coverprofile=coverage.txt -covermode=atomic -v github.com/paypal/gorealis

.github/workflows/codeql-analysis.yml vendored Normal file

@ -0,0 +1,57 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ main ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ main ]
schedule:
- cron: '34 4 * * 3'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'go' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
- run: go build examples/client.go
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

.github/workflows/main.yml vendored Normal file

@ -0,0 +1,30 @@
name: CI
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Go for use with actions
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Install goimports
run: go get golang.org/x/tools/cmd/goimports
- name: Set env with list of directories in repo containing go code
run: echo GO_USR_DIRS=$(go list -f {{.Dir}} ./... | grep -E -v "/gen-go/|/vendor/") >> $GITHUB_ENV
- name: Run goimports check
run: test -z "`for d in $GO_USR_DIRS; do goimports -d $d/*.go | tee /dev/stderr; done`"
- name: Create aurora/mesos docker cluster
run: docker-compose up -d
- name: Run tests
run: go test -timeout 35m -race -coverprofile=coverage.txt -covermode=atomic -v github.com/paypal/gorealis

.gitignore vendored

@ -8,6 +8,17 @@ _obj
_test
.idea
# Thrift library comes with a lot of other files we don't need.
# Ignore everything but the files we do need
vendor/github.com/apache/thrift/*
!vendor/github.com/apache/thrift/lib/
vendor/github.com/apache/thrift/lib/*
!vendor/github.com/apache/thrift/lib/go/
vendor/github.com/apache/thrift/lib/go/*
!vendor/github.com/apache/thrift/lib/go/thrift/
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

.golangci.yml Normal file

@ -0,0 +1,71 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 1m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
skip-dirs:
- gen-go/
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: true
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: true
govet:
# report about shadowed variables
check-shadowing: true
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 2
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 4
linters:
enable:
- govet
- goimports
- golint
- lll
- goconst
enable-all: false
fast: false

CHANGELOG.md Normal file

@ -0,0 +1,62 @@
1.25.1 (unreleased)
1.25.0
* Add priority api
1.24.0
* enable default sla for slaDrain
* Changes Travis CI badge to Github Actions badge
* Bug fix for auto paused update monitor
* Adds support for running CI on github actions
1.23.0
* First release tested against Aurora Scheduler 0.23.0
1.22.5
* Upgrading to thrift 0.14.0
1.22.4
* Updates which result in a no-op now return a response value so that the caller may analyze it to determine what happened
1.22.3
* Contains a monitor timeout fix. Previously an error was being left unchecked, which meant a specific monitor timing out was not handled properly.
1.22.2
* Bug fix: Change in retry mechanism created a deadlock. This release reverts that particular change.
1.22.1
* Adding safeguards against setting multiple constraints with the same name for a single task.
1.22.0
* CreateService and StartJobUpdate do not continue retrying if a timeout has been encountered
by the HTTP client. Instead they now return an error that conforms to the Timedout interface.
Users can check for a Timedout error by using `realis.IsTimeout(err)`.
* New API function VariableBatchStep has been added, which returns the current batch that
a Variable Batch Update configured update is in.
* Added new PauseUpdateMonitor which monitors an update until it is in a `ROLL_FORWARD_PAUSED` state.
* Added variableBatchStep command to sample client to be used for testing new VariableBatchStep api.
* JobUpdateStatus has changed function signature from:
`JobUpdateStatus(updateKey aurora.JobUpdateKey, desiredStatuses map[aurora.JobUpdateStatus]bool, interval, timeout time.Duration) (aurora.JobUpdateStatus, error)`
to
`JobUpdateStatus(updateKey aurora.JobUpdateKey, desiredStatuses []aurora.JobUpdateStatus, interval, timeout time.Duration) (aurora.JobUpdateStatus, error)`
* Added TerminalUpdateStates function which returns a slice containing all UpdateStates which are considered terminal states.
1.21.0
* Version numbering change. Future versions will be labeled X.Y.Z where X is the major version, Y is the Aurora version the library has been tested against (e.g. 21 -> 0.21.0), and Z is the minor revision.
* Moved to Thrift 0.12.0 code generator and go library.
* `aurora.ACTIVE_STATES`, `aurora.SLAVE_ASSIGNED_STATES`, `aurora.LIVE_STATES`, `aurora.TERMINAL_STATES`, `aurora.ACTIVE_JOB_UPDATE_STATES`, `aurora.AWAITNG_PULSE_JOB_UPDATE_STATES` are all now generated as slices.
* Please use `realis.ActiveStates`, `realis.SlaveAssignedStates`,`realis.LiveStates`, `realis.TerminalStates`, `realis.ActiveJobUpdateStates`, `realis.AwaitingPulseJobUpdateStates` in their places when map representations are needed.
* `GetInstanceIds(key *aurora.JobKey, states map[aurora.ScheduleStatus]bool) (map[int32]bool, error)` has changed signature to `GetInstanceIds(key *aurora.JobKey, states []aurora.ScheduleStatus) ([]int32, error)`
* Adding support for GPU as resource.
* Changing compose environment to Aurora snapshot in order to support staggered update.
* Adding staggered updates API.

Gopkg.lock generated

@ -1,43 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
name = "git.apache.org/thrift.git"
packages = ["lib/go/thrift"]
revision = "b2a4d4ae21c789b689dd162deb819665567f481c"
version = "0.10.0"
[[projects]]
name = "github.com/davecgh/go-spew"
packages = ["spew"]
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
name = "github.com/pkg/errors"
packages = ["."]
revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc"
[[projects]]
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
name = "github.com/samuel/go-zookeeper"
packages = ["zk"]
revision = "471cd4e61d7a78ece1791fa5faa0345dc8c7d5a5"
[[projects]]
name = "github.com/stretchr/testify"
packages = ["assert"]
revision = "b91bfb9ebec76498946beb6af7c0230c7cc7ba6c"
version = "v1.2.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "6b1d8788979382ad684db5a62850b9d206014ea36f020a0a481331adf3c234dd"
solver-name = "gps-cdcl"
solver-version = 1


@ -1,16 +0,0 @@
[[constraint]]
name = "git.apache.org/thrift.git"
version = "0.10.0"
[[constraint]]
name = "github.com/pkg/errors"
revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc"
[[constraint]]
name = "github.com/samuel/go-zookeeper"
revision = "471cd4e61d7a78ece1791fa5faa0345dc8c7d5a5"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.0"


@ -1,6 +1,8 @@
# gorealis [![GoDoc](https://godoc.org/github.com/paypal/gorealis?status.svg)](https://godoc.org/github.com/paypal/gorealis)
# gorealis [![GoDoc](https://godoc.org/github.com/paypal/gorealis?status.svg)](https://godoc.org/github.com/paypal/gorealis) ![CI Build Status](https://github.com/paypal/gorealis/actions/workflows/main.yml/badge.svg) [![codecov](https://codecov.io/gh/paypal/gorealis/branch/main/graph/badge.svg)](https://codecov.io/gh/paypal/gorealis)
Go library for interacting with [Apache Aurora](https://github.com/apache/aurora).
Version 1 of Go library for interacting with [Aurora Scheduler](https://github.com/aurora-scheduler/aurora).
Version 2 of this library can be found [here](https://github.com/aurora-scheduler/gorealis).
### Aurora version compatibility
Please see [.auroraversion](./.auroraversion) to see the latest Aurora version against which this
@ -12,8 +14,10 @@ library has been tested.
* [Using the sample client](docs/using-the-sample-client.md)
* [Leveraging the library](docs/leveraging-the-library.md)
## To Do
* Create or import a custom transport that uses https://github.com/jmcvetta/napping to improve efficiency
## Projects using gorealis
* [australis](https://github.com/aurora-scheduler/australis)
## Contributions
Contributions are always welcome. Please raise an issue so that the contribution may be discussed before it's made.
Contributions are always welcome. Please raise an issue to discuss a contribution before it is made.


@ -115,11 +115,13 @@ struct JobKey {
3: string name
}
// TODO(jly): Deprecated, remove in 0.21. See AURORA-1959.
/** A unique lock key. */
union LockKey {
1: JobKey job
}
// TODO(jly): Deprecated, remove in 0.21. See AURORA-1959.
/** A generic lock struct to facilitate context specific resource/operation serialization. */
struct Lock {
/** ID of the lock - unique per storage */
@ -238,6 +240,42 @@ union Resource {
5: i64 numGpus
}
struct PartitionPolicy {
1: bool reschedule
2: optional i64 delaySecs
}
/** SLA requirements expressed as the percentage of instances to be RUNNING every durationSecs */
struct PercentageSlaPolicy {
/* The percentage of active instances required every `durationSecs`. */
1: double percentage
/** Minimum time duration a task needs to be `RUNNING` to be treated as active */
2: i64 durationSecs
}
/** SLA requirements expressed as the number of instances to be RUNNING every durationSecs */
struct CountSlaPolicy {
/** The number of active instances required every `durationSecs` */
1: i64 count
/** Minimum time duration a task needs to be `RUNNING` to be treated as active */
2: i64 durationSecs
}
/** SLA requirements to be delegated to an external coordinator */
struct CoordinatorSlaPolicy {
/** URL for the coordinator service that needs to be contacted for SLA checks */
1: string coordinatorUrl
/** Field in the Coordinator response json indicating if the action is allowed or not */
2: string statusKey
}
/** SLA requirements expressed in one of the many types */
union SlaPolicy {
1: PercentageSlaPolicy percentageSlaPolicy
2: CountSlaPolicy countSlaPolicy
3: CoordinatorSlaPolicy coordinatorSlaPolicy
}
/** Description of the tasks contained within a job. */
struct TaskConfig {
/** Job task belongs to. */
@ -246,12 +284,6 @@ struct TaskConfig {
/** contains the role component of JobKey */
17: Identity owner
7: bool isService
// TODO(maxim): Deprecated. See AURORA-1707.
8: double numCpus
// TODO(maxim): Deprecated. See AURORA-1707.
9: i64 ramMb
// TODO(maxim): Deprecated. See AURORA-1707.
10: i64 diskMb
11: i32 priority
13: i32 maxTaskFailures
// TODO(mnurolahzade): Deprecated. See AURORA-1708.
@ -263,8 +295,6 @@ struct TaskConfig {
32: set<Resource> resources
20: set<Constraint> constraints
/** a list of named ports this task requests */
21: set<string> requestedPorts
/** Resources to retrieve with Mesos Fetcher */
33: optional set<MesosFetcherURI> mesosFetcherUris
/**
@ -278,6 +308,10 @@ struct TaskConfig {
25: optional ExecutorConfig executorConfig
/** Used to display additional details in the UI. */
27: optional set<Metadata> metadata
/** Policy for how to deal with task partitions */
34: optional PartitionPolicy partitionPolicy
/** SLA requirements to be met during maintenance */
35: optional SlaPolicy slaPolicy
// This field is deliberately placed at the end to work around a bug in the immutable wrapper
// code generator. See AURORA-1185 for details.
@ -286,15 +320,6 @@ struct TaskConfig {
}
struct ResourceAggregate {
// TODO(maxim): Deprecated. See AURORA-1707.
/** Number of CPU cores allotted. */
1: double numCpus
// TODO(maxim): Deprecated. See AURORA-1707.
/** Megabytes of RAM allotted. */
2: i64 ramMb
// TODO(maxim): Deprecated. See AURORA-1707.
/** Megabytes of disk space allotted. */
3: i64 diskMb
/** Aggregated resource values. */
4: set<Resource> resources
}
@ -422,7 +447,11 @@ enum ScheduleStatus {
/** A fault in the task environment has caused the system to believe the task no longer exists.
* This can happen, for example, when a slave process disappears.
*/
LOST = 7
LOST = 7,
/**
* The task is currently partitioned and in an unknown state.
**/
PARTITIONED = 18
}
// States that a task may be in while still considered active.
@ -434,6 +463,7 @@ const set<ScheduleStatus> ACTIVE_STATES = [ScheduleStatus.ASSIGNED,
ScheduleStatus.RESTARTING
ScheduleStatus.RUNNING,
ScheduleStatus.STARTING,
ScheduleStatus.PARTITIONED,
ScheduleStatus.THROTTLED]
// States that a task may be in while associated with a slave machine and non-terminal.
@ -443,6 +473,7 @@ const set<ScheduleStatus> SLAVE_ASSIGNED_STATES = [ScheduleStatus.ASSIGNED,
ScheduleStatus.PREEMPTING,
ScheduleStatus.RESTARTING,
ScheduleStatus.RUNNING,
ScheduleStatus.PARTITIONED,
ScheduleStatus.STARTING]
// States that a task may be in while in an active sandbox.
@ -450,6 +481,7 @@ const set<ScheduleStatus> LIVE_STATES = [ScheduleStatus.KILLING,
ScheduleStatus.PREEMPTING,
ScheduleStatus.RESTARTING,
ScheduleStatus.DRAINING,
ScheduleStatus.PARTITIONED,
ScheduleStatus.RUNNING]
// States a completed task may be in.
@ -518,6 +550,11 @@ struct ScheduledTask {
* this task.
*/
3: i32 failureCount
/**
* The number of partitions this task has accumulated over its lifetime.
*/
6: i32 timesPartitioned
/** State change history for this task. */
4: list<TaskEvent> taskEvents
/**
@ -540,16 +577,16 @@ struct GetJobsResult {
* (terms are AND'ed together).
*/
struct TaskQuery {
14: string role
9: string environment
2: string jobName
4: set<string> taskIds
5: set<ScheduleStatus> statuses
7: set<i32> instanceIds
10: set<string> slaveHosts
11: set<JobKey> jobKeys
12: i32 offset
13: i32 limit
14: optional string role
9: optional string environment
2: optional string jobName
4: optional set<string> taskIds
5: optional set<ScheduleStatus> statuses
7: optional set<i32> instanceIds
10: optional set<string> slaveHosts
11: optional set<JobKey> jobKeys
12: optional i32 offset
13: optional i32 limit
}
struct HostStatus {
@ -619,7 +656,6 @@ const set<JobUpdateStatus> ACTIVE_JOB_UPDATE_STATES = [JobUpdateStatus.ROLLING_F
JobUpdateStatus.ROLL_BACK_PAUSED,
JobUpdateStatus.ROLL_FORWARD_AWAITING_PULSE,
JobUpdateStatus.ROLL_BACK_AWAITING_PULSE]
/** States the job update can be in while waiting for a pulse. */
const set<JobUpdateStatus> AWAITNG_PULSE_JOB_UPDATE_STATES = [JobUpdateStatus.ROLL_FORWARD_AWAITING_PULSE,
JobUpdateStatus.ROLL_BACK_AWAITING_PULSE]
@ -680,9 +716,40 @@ struct JobUpdateKey {
2: string id
}
/** Limits the amount of active changes being made to instances to groupSize. */
struct QueueJobUpdateStrategy {
1: i32 groupSize
}
/** Similar to Queue strategy but will not start a new group until all instances in an active
* group have finished updating.
*/
struct BatchJobUpdateStrategy {
1: i32 groupSize
/* Update will pause automatically after each batch completes */
2: bool autopauseAfterBatch
}
/** Same as Batch strategy but each time an active group completes, the size of the next active
* group may change.
*/
struct VariableBatchJobUpdateStrategy {
1: list<i32> groupSizes
/* Update will pause automatically after each batch completes */
2: bool autopauseAfterBatch
}
union JobUpdateStrategy {
1: QueueJobUpdateStrategy queueStrategy
2: BatchJobUpdateStrategy batchStrategy
3: VariableBatchJobUpdateStrategy varBatchStrategy
}
/** Job update thresholds and limits. */
struct JobUpdateSettings {
/** Max number of instances being updated at any given moment. */
/** Deprecated, please set value inside of desired update strategy instead.
* Max number of instances being updated at any given moment.
*/
1: i32 updateGroupSize
/** Max number of instance failures to tolerate before marking instance as FAILED. */
@ -700,19 +767,28 @@ struct JobUpdateSettings {
/** Instance IDs to act on. All instances will be affected if this is not set. */
7: set<Range> updateOnlyTheseInstances
/**
/** Deprecated, please set updateStrategy to the Batch strategy instead.
* If true, use updateGroupSize as strict batching boundaries, and avoid proceeding to another
* batch until the preceding batch finishes updating.
*/
8: bool waitForBatchCompletion
/**
* If set, requires external calls to pulseJobUpdate RPC within the specified rate for the
* update to make progress. If no pulses received within specified interval the update will
* block. A blocked update is unable to continue but retains its current status. It may only get
* unblocked by a fresh pulseJobUpdate call.
*/
/**
* If set, requires external calls to pulseJobUpdate RPC within the specified rate for the
* update to make progress. If no pulses received within specified interval the update will
* block. A blocked update is unable to continue but retains its current status. It may only get
* unblocked by a fresh pulseJobUpdate call.
*/
9: optional i32 blockIfNoPulsesAfterMs
/**
* If true, updates will obey the SLA requirements of the tasks being updated. If the SLA policy
* differs between the old and new task configurations, updates will use the newest configuration.
*/
10: optional bool slaAware
/** Update strategy to be used for the update. See JobUpdateStrategy for choices. */
11: optional JobUpdateStrategy updateStrategy
}
/** Event marking a state transition in job update lifecycle. */
@ -743,6 +819,9 @@ struct JobInstanceUpdateEvent {
/** Job update action taken on the instance. */
3: JobUpdateAction action
/** Optional message explaining the instance update event. */
4: optional string message
}
/** Maps instance IDs to TaskConfigs it. */
@ -855,6 +934,13 @@ struct JobUpdateQuery {
7: i32 limit
}
struct HostMaintenanceRequest {
1: string host
2: SlaPolicy defaultSlaPolicy
3: i64 timeoutSecs
4: i64 createdTimestampMs
}
struct ListBackupsResult {
1: set<string> backups
}
@ -1039,7 +1125,6 @@ service ReadOnlyScheduler {
Response getJobUpdateSummaries(1: JobUpdateQuery jobUpdateQuery)
/** Gets job update details. */
// TODO(zmanji): `key` is deprecated, remove this with AURORA-1765
Response getJobUpdateDetails(2: JobUpdateQuery query)
/** Gets the diff between client (desired) and server (current) job states. */
@ -1192,6 +1277,12 @@ service AuroraAdmin extends AuroraSchedulerManager {
/** Set the given hosts back into serving mode. */
Response endMaintenance(1: Hosts hosts)
/**
* Ask scheduler to put hosts into DRAINING mode and move scheduled tasks off of the hosts
* such that its SLA requirements are satisfied. Use defaultSlaPolicy if it is not set for a task.
**/
Response slaDrainHosts(1: Hosts hosts, 2: SlaPolicy defaultSlaPolicy, 3: i64 timeoutSecs)
/** Start a storage snapshot and block until it completes. */
Response snapshot()
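A hedged sketch of driving the new slaDrainHosts RPC through the gorealis client; the Go method name and signature below are assumptions based on the SLA-drain commits above, not confirmed API.

```go
package example

import (
	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// drainHost drains one agent, falling back to an 80%-over-5-minutes SLA
// policy for tasks that do not define their own.
func drainHost(client realis.Realis, host string) error {
	policy := &aurora.SlaPolicy{
		PercentageSlaPolicy: &aurora.PercentageSlaPolicy{
			Percentage:   80,
			DurationSecs: 300,
		},
	}
	_, err := client.SlaDrainHosts(policy, 30*60, host) // assumed signature
	return err
}
```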


@ -21,6 +21,8 @@ import (
"github.com/pkg/errors"
)
// Cluster contains the definition of the clusters.json file used by the default Aurora
// client for configuration
type Cluster struct {
Name string `json:"name"`
AgentRoot string `json:"slave_root"`
@ -33,7 +35,8 @@ type Cluster struct {
AuthMechanism string `json:"auth_mechanism"`
}
// Loads clusters.json file traditionally located at /etc/aurora/clusters.json
// LoadClusters loads clusters.json file traditionally located at /etc/aurora/clusters.json
// for use with a gorealis client
func LoadClusters(config string) (map[string]Cluster, error) {
file, err := os.Open(config)


@ -18,7 +18,7 @@ import (
"fmt"
"testing"
"github.com/paypal/gorealis"
realis "github.com/paypal/gorealis"
"github.com/stretchr/testify/assert"
)


@ -18,31 +18,40 @@ import (
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Container is an interface that defines a single function needed to create
// an Aurora container type. It exists because the code must support both Mesos
// and Docker containers.
type Container interface {
Build() *aurora.Container
}
// MesosContainer is a Mesos style container that can be used by Aurora Jobs.
type MesosContainer struct {
container *aurora.MesosContainer
}
// DockerContainer is a vanilla Docker style container that can be used by Aurora Jobs.
type DockerContainer struct {
container *aurora.DockerContainer
}
// NewDockerContainer creates a new Aurora compatible Docker container configuration.
func NewDockerContainer() DockerContainer {
return DockerContainer{container: aurora.NewDockerContainer()}
}
// Build creates an Aurora container based upon the configuration provided.
func (c DockerContainer) Build() *aurora.Container {
return &aurora.Container{Docker: c.container}
}
// Image adds the name of a Docker image to be used by the Job when running.
func (c DockerContainer) Image(image string) DockerContainer {
c.container.Image = image
return c
}
// AddParameter adds a parameter to be passed to Docker when the container is run.
func (c DockerContainer) AddParameter(name, value string) DockerContainer {
c.container.Parameters = append(c.container.Parameters, &aurora.DockerParameter{
Name: name,
@ -51,14 +60,17 @@ func (c DockerContainer) AddParameter(name, value string) DockerContainer {
return c
}
// NewMesosContainer creates a Mesos style container to be configured and built for use by an Aurora Job.
func NewMesosContainer() MesosContainer {
return MesosContainer{container: aurora.NewMesosContainer()}
}
// Build creates a Mesos style Aurora container configuration to be passed on to the Aurora Job.
func (c MesosContainer) Build() *aurora.Container {
return &aurora.Container{Mesos: c.container}
}
// DockerImage configures the Mesos container to use a specific Docker image when being run.
func (c MesosContainer) DockerImage(name, tag string) MesosContainer {
if c.container.Image == nil {
c.container.Image = aurora.NewImage()
@ -68,11 +80,12 @@ func (c MesosContainer) DockerImage(name, tag string) MesosContainer {
return c
}
func (c MesosContainer) AppcImage(name, imageId string) MesosContainer {
// AppcImage configures the Mesos container to use an image in the Appc format to run the container.
func (c MesosContainer) AppcImage(name, imageID string) MesosContainer {
if c.container.Image == nil {
c.container.Image = aurora.NewImage()
}
c.container.Image.Appc = &aurora.AppcImage{Name: name, ImageId: imageId}
c.container.Image.Appc = &aurora.AppcImage{Name: name, ImageId: imageID}
return c
}

docker-compose.yml Normal file

@ -0,0 +1,109 @@
version: "2"
services:
zk:
image: rdelvalle/zookeeper
restart: on-failure
ports:
- "2181:2181"
environment:
ZK_CONFIG: tickTime=2000,initLimit=10,syncLimit=5,maxClientCnxns=128,forceSync=no,clientPort=2181
ZK_ID: 1
networks:
aurora_cluster:
ipv4_address: 192.168.33.2
master:
image: aurorascheduler/mesos-master:1.7.2
restart: on-failure
ports:
- "5050:5050"
environment:
MESOS_ZK: zk://192.168.33.2:2181/mesos
MESOS_QUORUM: 1
MESOS_HOSTNAME: localhost
MESOS_CLUSTER: test-cluster
MESOS_REGISTRY: replicated_log
MESOS_WORK_DIR: /tmp/mesos
networks:
aurora_cluster:
ipv4_address: 192.168.33.3
depends_on:
- zk
agent-one:
image: aurorascheduler/mesos-agent:1.7.2
pid: host
restart: on-failure
ports:
- "5051:5051"
environment:
MESOS_ATTRIBUTES: 'zone:west'
MESOS_MASTER: zk://192.168.33.2:2181/mesos
MESOS_CONTAINERIZERS: docker,mesos
MESOS_PORT: 5051
MESOS_HOSTNAME: localhost
MESOS_RESOURCES: ports(*):[11000-11999]
MESOS_SYSTEMD_ENABLE_SUPPORT: 'false'
MESOS_WORK_DIR: /tmp/mesos
networks:
aurora_cluster:
ipv4_address: 192.168.33.4
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- zk
agent-two:
image: aurorascheduler/mesos-agent:1.7.2
pid: host
restart: on-failure
ports:
- "5061:5061"
environment:
MESOS_ATTRIBUTES: 'zone:east'
MESOS_MASTER: zk://192.168.33.2:2181/mesos
MESOS_CONTAINERIZERS: docker,mesos
MESOS_HOSTNAME: localhost
MESOS_PORT: 5061
MESOS_RESOURCES: ports(*):[11000-11999]
MESOS_SYSTEMD_ENABLE_SUPPORT: 'false'
MESOS_WORK_DIR: /tmp/mesos
networks:
aurora_cluster:
ipv4_address: 192.168.33.5
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- zk
aurora-one:
image: aurorascheduler/scheduler:0.23.0
pid: host
ports:
- "8081:8081"
restart: on-failure
environment:
CLUSTER_NAME: test-cluster
ZK_ENDPOINTS: "192.168.33.2:2181"
MESOS_MASTER: "zk://192.168.33.2:2181/mesos"
EXTRA_SCHEDULER_ARGS: "-min_required_instances_for_sla_check=1"
networks:
aurora_cluster:
ipv4_address: 192.168.33.7
depends_on:
- zk
- master
- agent-one
networks:
aurora_cluster:
driver: bridge
ipam:
config:
- subnet: 192.168.33.0/16
gateway: 192.168.33.1

docs/developing.md Normal file

@ -0,0 +1,90 @@
# Developing gorealis
### Installing Docker
For our developer environment we leverage Docker containers.
First you must have Docker installed. Instructions on how to install Docker
vary from platform to platform and can be found [here](https://docs.docker.com/install/).
### Installing docker-compose
To make the creation of our developer environment as simple as possible, we leverage
docker-compose to bring up all independent components separately.
This also allows us to delete and recreate our development cluster very quickly.
To install docker-compose please follow the instructions for your platform
[here](https://docs.docker.com/compose/install/).
### Getting the source code
As of go 1.10.x, GOPATH is still relevant. This may change in the future but
for the sake of making development less error prone, it is suggested that the following
directories be created:
`$ mkdir -p $GOPATH/src/github.com/paypal`
And then clone the master branch into the newly created folder:
`$ cd $GOPATH/src/github.com/paypal; git clone git@github.com:paypal/gorealis.git`
Since we check in our vendor folder, no further gorealis setup is needed.
### Bringing up the cluster
To develop gorealis, you will need a fully functioning Mesos cluster along with
Apache Aurora.
In order to bring up our docker-compose set up execute the following command from the root
of the git repository:
`$ docker-compose up -d`
### Testing code
Since Docker does not work well in host mode under MacOS, a workaround has been employed:
docker-compose brings up a bridged network.
* Port 8081 is exposed for Aurora. http://localhost:8081 will load the Aurora Web UI.
* Port 5050 is exposed for Mesos. http://localhost:5050 will load the Mesos Web UI.
#### Note for developers on MacOS:
Running the cluster using a bridged network on MacOS has some side effects.
Since Aurora exposes its internal IP location through Zookeeper, gorealis will determine
the address to be 192.168.33.7. The address 192.168.33.7 is valid when running in a Linux
environment but not when running under MacOS. To run code involving the ZK leader fetcher
(such as the tests), a container connected to the network needs to be launched.
For example, running the tests in a container can be done through the following command from
the root of the git repository:
`$ docker run -t -v $(pwd):/go/src/github.com/paypal/gorealis --network gorealis_aurora_cluster golang:1.10.3-alpine go test github.com/paypal/gorealis`
Or
`$ ./runTestsMac.sh`
Alternatively, if an interactive shell is necessary, the following command may be used:
`$ docker run -it -v $(pwd):/go/src/github.com/paypal/gorealis --network gorealis_aurora_cluster golang:1.10.3-alpine /bin/sh`
### Cleaning up the cluster
If something went wrong while developing and a clean environment is desired, run the
following command from the root of the git directory:
`$ docker-compose down && docker-compose up -d`
### Tearing down the cluster
Once development is done, the environment may be torn down by executing (from the root of the
git directory):
`$ docker-compose down`


@ -88,90 +88,25 @@ On Ubuntu, restarting the aurora-scheduler can be achieved by running the follow
$ sudo service aurora-scheduler restart
```
### Using a custom client
Pystachio does not yet support launching tasks using custom executors. Therefore, a custom
client must be used in order to launch tasks using a custom executor. In this case,
we will be using [gorealis](https://github.com/paypal/gorealis) to launch a task with
the compose executor on Aurora.
## Using [dce-go](https://github.com/paypal/dce-go)
Instead of manually configuring Aurora to run the docker-compose executor, one can follow the instructions provided [here](https://github.com/paypal/dce-go/blob/develop/docs/environment.md) to quickly create a DCE environment that includes Mesos, Aurora, golang 1.7, Docker, docker-compose, and DCE preinstalled.
Please note that when using dce-go, the endpoints will be as shown below:
```
Aurora endpoint --> http://192.168.33.8:8081
Mesos endpoint --> http://192.168.33.8:5050
```
## Configuring the system to run a custom client and docker-compose executor
### Installing Go
#### Linux
Follow the instructions at the official golang website: [golang.org/doc/install](https://golang.org/doc/install)
##### Ubuntu
###### Adding a PPA and installing via apt-get
```
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt-get update
$ sudo apt-get install golang
```
###### Configuring the GOPATH
Configure the environment to be able to compile and run Go code.
```
$ mkdir $HOME/go
$ echo export GOPATH=$HOME/go >> $HOME/.bashrc
$ echo export GOROOT=/usr/lib/go >> $HOME/.bashrc
$ echo export PATH=$PATH:$GOPATH/bin >> $HOME/.bashrc
$ echo export PATH=$PATH:$GOROOT/bin >> $HOME/.bashrc
```
Finally we must reload the .bashrc configuration:
```
$ source $HOME/.bashrc
```
#### OS X
One way to install go on OS X is by using [Homebrew](http://brew.sh/)
##### Installing Homebrew
Run the following command from the terminal to install Homebrew:
```
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
##### Installing Go using Homebrew
Run the following command from the terminal to install Go:
```
$ brew install go
```
##### Configuring the GOPATH
Configure the environment to be able to compile and run Go code.
```
$ mkdir $HOME/go
$ echo export GOPATH=$HOME/go >> $HOME/.profile
$ echo export GOROOT=/usr/local/opt/go/libexec >> $HOME/.profile
$ echo export PATH=$PATH:$GOPATH/bin >> $HOME/.profile
$ echo export PATH=$PATH:$GOROOT/bin >> $HOME/.profile
```
Finally we must reload the .profile configuration:
```
$ source $HOME/.profile
```
#### Windows
Download and run the msi installer from https://golang.org/dl/
## Installing Docker Compose
To show Aurora's new multi executor feature, we need to use at least one custom executor.
In this case we will be using the [docker-compose-executor](https://github.com/mesos/docker-compose-executor).
In order to run the docker-compose executor, each agent must have docker-compose installed on it.
This can be done using pip:
```
$ sudo pip install docker-compose
```
Agents which will run dce-go will need docker-compose in order to successfully run the executor.
Instructions for installing docker-compose on various platforms may be found on Docker's website: [docs.docker.com/compose/install/](https://docs.docker.com/compose/install/)
## Downloading gorealis
Finally, we must get `gorealis` using the `go get` command:
@ -183,7 +118,7 @@ go get github.com/paypal/gorealis
# Creating Aurora Jobs
## Creating a thermos job
To demonstrate that we are able to run jobs using different executors on the
same scheduler, we'll first launch a thermos job using the default Aurora Client.
We can use a sample job for this:
@ -250,8 +185,8 @@ go run $GOPATH/src/github.com/paypal/gorealis/examples/client.go -executor=compo
```
If everything went according to plan, a new job will be shown in the Aurora UI.
We can further investigate inside the Mesos task sandbox. Inside the sandbox, under
the sample-app folder, we can find a generated file, docker-compose.yml-generated.yml. If we inspect this file,
we can find the port at which we can find the web server we launched.
Under Web->Ports, we find the port Mesos allocated. We can then navigate to:
@ -260,10 +195,10 @@ Under Web->Ports, we find the port Mesos allocated. We can then navigate to:
A message from the executor should greet us.
## Creating a Thermos job using gorealis
It is also possible to create a thermos job using gorealis. To do this, however,
a thermos payload is required. A thermos payload consists of a JSON blob that details
the entire task as it exists inside the Aurora Scheduler. *Creating the blob is unfortunately
out of the scope of what gorealis does*, so a thermos payload must be generated beforehand or
retrieved from the structdump of an existing task for testing purposes.
A sample thermos JSON payload may be found [here](../examples/thermos_payload.json) in the examples folder.
@ -292,13 +227,32 @@ $ cd $GOPATH/src/github.com/paypal/gorealis
$ go run examples/client.go -executor=thermos -url=http://192.168.33.7:8081 -cmd=create
```
## Creating jobs using gorealis JSON client
We can also use the [JSON client](../examples/jsonClient.go) to create Aurora jobs using gorealis.
If using _dce-go_, then use `http://192.168.33.8:8081` as the scheduler URL.
```
$ cd $GOPATH/src/github.com/paypal/gorealis/examples
```
To launch a job using the Thermos executor,
```
$ go run jsonClient.go -job=job_thermos.json -config=config.json
```
To launch a job using docker-compose executor,
```
$ go run jsonClient.go -job=job_dce.json -config=config.json
```
# Cleaning up
To stop the jobs we've launched, we need to send a job kill request to Aurora.
It should be noted that although we can't create jobs using a custom executor with the default Aurora client,
we *can* use the default Aurora client to kill them. Additionally, we can use gorealis to perform the clean up as well.
## Using the Default Client (if manually configured Aurora)
```
$ aurora job killall devcluster/www-data/prod/hello


@ -57,4 +57,19 @@ updateJob := realis.NewUpdateJob(job)
updateJob.InstanceCount(1)
updateJob.Ram(128)
msg, err := r.UpdateJob(updateJob, "")
```
* Handling a timeout scenario:
When sending an API call to Aurora, the call may time out at the client side.
This means that the time limit was reached while waiting for the scheduler
to reply. In such a case, it is recommended that the timeout be increased through
the `realis.TimeoutMS()` option.
As these timeouts cannot be totally avoided, there exists a mechanism to mitigate such
scenarios. The `StartJobUpdate` and `CreateService` APIs return an error that
implements the Timeout interface.
An error can be checked to see if it is a Timeout error by using the `realis.IsTimeout()`
function.
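For illustration, a minimal sketch of checking for a client-side timeout, assuming a client `r` and an update `updateJob` have already been constructed as in the snippet above:
```
resp, err := r.StartJobUpdate(updateJob, "")
if err != nil {
	if realis.IsTimeout(err) {
		// The request may still have reached the scheduler, so verify the
		// update's state before blindly retrying.
		log.Println("client-side timeout; consider raising realis.TimeoutMS()")
	}
	log.Fatal(err)
}
fmt.Println(resp.String())
```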


@ -1,6 +1,6 @@
# Using the Sample client
## Usage:
```
Usage of ./client:
-cluster string


@ -16,52 +16,89 @@ package realis
// Using a pattern described by Dave Cheney to differentiate errors
// https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully
// Timeout errors are returned when a function is unable to continue executing due
// to a time constraint or meeting a set number of retries.
type timeout interface {
	Timedout() bool
}

// IsTimeout returns true if the error being passed as an argument implements the Timeout interface
// and the Timedout function returns true.
func IsTimeout(err error) bool {
	temp, ok := err.(timeout)
	return ok && temp.Timedout()
}

type timeoutErr struct {
	error
	timedout bool
}

func (r *timeoutErr) Timedout() bool {
	return r.timedout
}

func newTimedoutError(err error) *timeoutErr {
	return &timeoutErr{error: err, timedout: true}
}

// retryErr is a superset of timeout which includes extra context
// with regards to our retry mechanism. This is done in order to make sure
// that our retry mechanism works as expected through our tests and should
// never be relied on or used directly. It is not made part of the public API
// on purpose.
type retryErr struct {
	error
	timedout   bool
	retryCount int // How many times did the mechanism retry the command
}

// Retry error is a timeout type error with added context.
func (r *retryErr) Timedout() bool {
	return r.timedout
}

func (r *retryErr) RetryCount() int {
	return r.retryCount
}

// ToRetryCount is a helper function for testing verification to avoid whitebox testing
// as well as keeping retryErr private.
// Should NOT be used under any other context.
func ToRetryCount(err error) *retryErr {
	if retryErr, ok := err.(*retryErr); ok {
		return retryErr
	}
	return nil
}

func newRetryError(err error, retryCount int) *retryErr {
	return &retryErr{error: err, timedout: true, retryCount: retryCount}
}

// Temporary errors indicate that the action may or should be retried.
type temporary interface {
	Temporary() bool
}

// IsTemporary indicates whether the error passed in as an argument implements the temporary interface
// and if the Temporary function returns true.
func IsTemporary(err error) bool {
	temp, ok := err.(temporary)
	return ok && temp.Temporary()
}

type temporaryErr struct {
	error
	temporary bool
}

func (t *temporaryErr) Temporary() bool {
	return t.temporary
}

// NewTemporaryError creates a new error which satisfies the Temporary interface.
func NewTemporaryError(err error) *temporaryErr {
	return &temporaryErr{error: err, temporary: true}
}
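
// Example (illustrative only, not part of this file): how a test might
// classify a failed call, assuming a realis client r and a job definition job:
//
//	_, err := r.CreateJob(job)
//	if realis.IsTimeout(err) {
//		// ToRetryCount is meant for test verification only.
//		if retry := realis.ToRetryCount(err); retry != nil {
//			fmt.Printf("gave up after %d retries\n", retry.RetryCount())
//		}
//	} else if realis.IsTemporary(err) {
//		fmt.Println("temporary failure; retrying may succeed")
//	}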


@ -18,22 +18,20 @@ import (
"flag"
"fmt"
"io/ioutil"
"os"
"log"
"strings"
"time"
"strings"
"log"
"github.com/paypal/gorealis"
realis "github.com/paypal/gorealis"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/paypal/gorealis/response"
)
var cmd, executor, url, clustersConfig, clusterName, updateId, username, password, zkUrl, hostList, role string
var caCertsPath string
var clientKey, clientCert string
var ConnectionTimeout = 20000
func init() {
flag.StringVar(&cmd, "cmd", "", "Job request type to send to Aurora Scheduler")
@ -46,6 +44,11 @@ func init() {
flag.StringVar(&password, "password", "secret", "Password to use for authorization")
flag.StringVar(&zkUrl, "zkurl", "", "zookeeper url")
flag.StringVar(&hostList, "hostList", "", "Comma separated list of hosts to operate on")
flag.StringVar(&role, "role", "", "owner role to use")
flag.StringVar(&caCertsPath, "caCertsPath", "", "Path to CA certs on local machine.")
flag.StringVar(&clientCert, "clientCert", "", "Client certificate to use to connect to Aurora.")
flag.StringVar(&clientKey, "clientKey", "", "Client private key to use to connect to Aurora.")
flag.Parse()
// Attempt to load leader from zookeeper using a
@ -54,20 +57,17 @@ func init() {
if clustersConfig != "" {
clusters, err := realis.LoadClusters(clustersConfig)
if err != nil {
log.Fatalln(err)
}
cluster, ok := clusters[clusterName]
if !ok {
fmt.Printf("Cluster %s doesn't exist in the file provided\n", clusterName)
os.Exit(1)
log.Fatalf("Cluster %s doesn't exist in the file provided\n", clusterName)
}
url, err = realis.LeaderFromZK(cluster)
if err != nil {
log.Fatalln(err)
}
}
}
@ -82,17 +82,16 @@ func main() {
clientOptions := []realis.ClientOption{
realis.BasicAuth(username, password),
realis.ThriftJSON(),
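// TimeoutMS bounds each thrift call; BackOff retries failed calls up to Steps
// times, starting at Duration and multiplying the wait by Factor, with Jitter
// adding randomness to each interval.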
realis.TimeoutMS(ConnectionTimeout),
realis.BackOff(realis.Backoff{
Steps: 2,
Duration: 10 * time.Second,
Factor: 2.0,
Jitter: 0.1,
}),
realis.Debug(),
}
// Check if zkUrl is available.
if zkUrl != "" {
fmt.Println("zkUrl: ", zkUrl)
clientOptions = append(clientOptions, realis.ZKUrl(zkUrl))
@ -100,20 +99,26 @@ func main() {
clientOptions = append(clientOptions, realis.SchedulerUrl(url))
}
if caCertsPath != "" {
clientOptions = append(clientOptions, realis.Certspath(caCertsPath))
}
if clientKey != "" && clientCert != "" {
clientOptions = append(clientOptions, realis.ClientCerts(clientKey, clientCert))
}
r, err = realis.NewRealisClient(clientOptions...)
if err != nil {
log.Fatalln(err)
}
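// The Monitor polls the scheduler so callers can block until asynchronous
// operations (instance counts, job updates, host maintenance) reach a desired state.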
monitor = &realis.Monitor{r}
monitor = &realis.Monitor{Client: r}
defer r.Close()
switch executor {
case "thermos":
payload, err := ioutil.ReadFile("examples/thermos_payload.json")
if err != nil {
fmt.Println("Error reading json config file: ", err)
os.Exit(1)
log.Fatalln("Error reading json config file: ", err)
}
job = realis.NewJob().
@ -128,23 +133,21 @@ func main() {
IsService(true).
InstanceCount(1).
AddPorts(1)
case "compose":
job = realis.NewJob().
Environment("prod").
Role("vagrant").
Name("docker-compose").
Name("docker-compose-test").
ExecutorName("docker-compose-executor").
ExecutorData("{}").
CPU(0.25).
RAM(512).
Disk(100).
IsService(true).
InstanceCount(1).
AddPorts(4).
AddLabel("fileName", "sample-app/docker-compose.yml").
AddURIs(true, true, "https://github.com/mesos/docker-compose-executor/releases/download/0.1.0/sample-app.tar.gz")
case "none":
job = realis.NewJob().
Environment("prod").
@ -156,10 +159,8 @@ func main() {
IsService(true).
InstanceCount(1).
AddPorts(1)
default:
fmt.Println("Only thermos, compose, and none are supported for now")
os.Exit(1)
log.Fatalln("Only thermos, compose, and none are supported for now")
}
switch cmd {
@ -167,89 +168,73 @@ func main() {
fmt.Println("Creating job")
resp, err := r.CreateJob(job)
if err != nil {
log.Fatalln(err)
}
fmt.Println(resp.String())
if ok, mErr := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 5, 50); !ok || mErr != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatalln(err)
}
log.Fatalf("ok: %v\n err: %v", ok, mErr)
}
case "createService":
// Create a service with three instances using the update API instead of the createJob API
fmt.Println("Creating service")
settings := realis.NewUpdateSettings()
job.InstanceCount(3)
resp, result, err := r.CreateService(job, settings)
if err != nil {
log.Println("error: ", err)
log.Fatal("response: ", resp.String())
}
fmt.Println(result.String())
if ok, mErr := monitor.JobUpdate(*result.GetKey(), 5, 180); !ok || mErr != nil {
_, err := r.AbortJobUpdate(*result.GetKey(), "Monitor timed out")
_, err = r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
log.Fatalf("ok: %v\n err: %v", ok, mErr)
}
case "createDocker":
fmt.Println("Creating a docker based job")
container := realis.NewDockerContainer().Image("python:2.7").AddParameter("network", "host")
job.Container(container)
resp, err := r.CreateJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
if ok, err := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 10, 300); !ok || err != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
}
case "createMesosContainer":
fmt.Println("Creating a docker based job")
container := realis.NewMesosContainer().DockerImage("python", "2.7")
job.Container(container)
resp, err := r.CreateJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
if ok, err := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 10, 300); !ok || err != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
}
case "scheduleCron":
fmt.Println("Scheduling a Cron job")
// Cron config
@ -257,81 +242,68 @@ func main() {
job.IsService(false)
resp, err := r.ScheduleCronJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "startCron":
fmt.Println("Starting a Cron job")
resp, err := r.StartCronJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "descheduleCron":
fmt.Println("Descheduling a Cron job")
resp, err := r.DescheduleCronJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "kill":
fmt.Println("Killing job")
resp, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
if ok, err := monitor.Instances(job.JobKey(), 0, 5, 50); !ok || err != nil {
log.Fatal("Unable to kill all instances of job")
}
fmt.Println(resp.String())
case "restart":
fmt.Println("Restarting job")
resp, err := r.RestartJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "liveCount":
fmt.Println("Getting instance count")
live, err := r.GetInstanceIds(job.JobKey(), aurora.LIVE_STATES)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Live instances: %+v\n", live)
case "activeCount":
fmt.Println("Getting instance count")
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
fmt.Println("Number of live instances: ", len(live))
case "flexUp":
fmt.Println("Flexing up job")
@ -339,34 +311,25 @@ func main() {
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
currInstances := int32(len(live))
fmt.Println("Current num of instances: ", currInstances)
resp, err := r.AddInstances(aurora.InstanceKey{
JobKey: job.JobKey(),
InstanceId: live[0],
},
numOfInstances)
if err != nil {
log.Fatal(err)
}
if ok, err := monitor.Instances(job.JobKey(), currInstances+numOfInstances, 5, 50); !ok || err != nil {
fmt.Println("Flexing up failed")
}
fmt.Println(resp.String())
case "flexDown":
fmt.Println("Flexing down job")
@ -374,57 +337,79 @@ func main() {
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
currInstances := int32(len(live))
fmt.Println("Current num of instances: ", currInstances)
resp, err := r.RemoveInstances(job.JobKey(), numOfInstances)
if err != nil {
log.Fatal(err)
}
if ok, err := monitor.Instances(job.JobKey(), currInstances-numOfInstances, 5, 100); !ok || err != nil {
fmt.Println("flexDown failed")
}
fmt.Println(resp.String())
case "update":
fmt.Println("Updating a job with with more RAM and to 5 instances")
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
taskConfig, err := r.FetchTaskConfig(aurora.InstanceKey{
JobKey: job.JobKey(),
InstanceId: live[0],
})
if err != nil {
log.Fatal(err)
}
updateJob := realis.NewDefaultUpdateJob(taskConfig)
updateJob.InstanceCount(5).RAM(128)
resp, err := r.StartJobUpdate(updateJob, "")
if err != nil {
log.Fatal(err)
}
jobUpdateKey := response.JobUpdateKey(resp)
monitor.JobUpdate(*jobUpdateKey, 5, 500)
case "pauseJobUpdate":
resp, err := r.PauseJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
}, "")
if err != nil {
log.Fatal(err)
}
fmt.Println("PauseJobUpdate response: ", resp.String())
case "resumeJobUpdate":
resp, err := r.ResumeJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
}, "")
if err != nil {
log.Fatal(err)
}
fmt.Println("ResumeJobUpdate response: ", resp.String())
case "pulseJobUpdate":
resp, err := r.PulseJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
})
if err != nil {
log.Fatal(err)
}
fmt.Println("PulseJobUpdate response: ", resp.String())
case "updateDetails":
resp, err := r.JobUpdateDetails(aurora.JobUpdateQuery{
Key: &aurora.JobUpdateKey{
@ -435,11 +420,11 @@ func main() {
})
if err != nil {
log.Fatal(err)
}
fmt.Println(response.JobUpdateDetails(resp))
case "abortUpdate":
fmt.Println("Abort update")
resp, err := r.AbortJobUpdate(aurora.JobUpdateKey{
@ -449,11 +434,10 @@ func main() {
"")
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "rollbackUpdate":
fmt.Println("Abort update")
resp, err := r.RollbackJobUpdate(aurora.JobUpdateKey{
@ -463,34 +447,28 @@ func main() {
"")
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "taskConfig":
fmt.Println("Getting job info")
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
config, err := r.FetchTaskConfig(aurora.InstanceKey{
JobKey: job.JobKey(),
InstanceId: live[0],
})
if err != nil {
log.Fatal(err)
}
log.Println(config.String())
case "updatesummary":
fmt.Println("Getting job update summary")
jobquery := &aurora.JobUpdateQuery{
@ -499,49 +477,50 @@ func main() {
}
updatesummary, err := r.GetJobUpdateSummaries(jobquery)
if err != nil {
fmt.Printf("error while getting update summary: %v", err)
os.Exit(1)
log.Fatalf("error while getting update summary: %v", err)
}
fmt.Println(updatesummary)
case "taskStatus":
fmt.Println("Getting task status")
taskQ := &aurora.TaskQuery{
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
tasks, err := r.GetTaskStatus(taskQ)
if err != nil {
fmt.Printf("error: %+v\n ", err)
os.Exit(1)
log.Fatalf("error: %+v\n ", err)
}
fmt.Printf("length: %d\n ", len(tasks))
fmt.Printf("tasks: %+v\n", tasks)
case "tasksWithoutConfig":
fmt.Println("Getting task status")
taskQ := &aurora.TaskQuery{
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
tasks, err := r.GetTasksWithoutConfigs(taskQ)
if err != nil {
fmt.Printf("error: %+v\n ", err)
os.Exit(1)
log.Fatalf("error: %+v\n ", err)
}
fmt.Printf("length: %d\n ", len(tasks))
fmt.Printf("tasks: %+v\n", tasks)
case "drainHosts":
fmt.Println("Setting hosts to DRAINING")
if hostList == "" {
fmt.Println("No hosts specified to drain")
os.Exit(1)
log.Fatal("No hosts specified to drain")
}
hosts := strings.Split(hostList, ",")
_, result, err := r.DrainHosts(hosts...)
if err != nil {
fmt.Printf("error: %+v\n", err.Error())
os.Exit(1)
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change to DRAINING and DRAINED mode
@ -556,23 +535,51 @@ func main() {
fmt.Printf("Host %s did not transtion into desired mode(s)\n", host)
}
}
fmt.Printf("error: %+v\n", err.Error())
os.Exit(1)
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "SLADrainHosts":
fmt.Println("Setting hosts to DRAINING using SLA aware draining")
if hostList == "" {
log.Fatal("No hosts specified to drain")
}
hosts := strings.Split(hostList, ",")
policy := aurora.SlaPolicy{PercentageSlaPolicy: &aurora.PercentageSlaPolicy{Percentage: 50.0}}
result, err := r.SLADrainHosts(&policy, 30, hosts...)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change to DRAINING and DRAINED mode
hostResult, err := monitor.HostMaintenance(
hosts,
[]aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINED, aurora.MaintenanceMode_DRAINING},
5,
10)
if err != nil {
for host, ok := range hostResult {
if !ok {
fmt.Printf("Host %s did not transtion into desired mode(s)\n", host)
}
}
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "endMaintenance":
fmt.Println("Setting hosts to ACTIVE")
if hostList == "" {
fmt.Println("No hosts specified to drain")
os.Exit(1)
log.Fatal("No hosts specified to drain")
}
hosts := strings.Split(hostList, ",")
_, result, err := r.EndMaintenance(hosts...)
if err != nil {
fmt.Printf("error: %+v\n", err.Error())
os.Exit(1)
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change back out of maintenance mode
@ -587,14 +594,64 @@ func main() {
fmt.Printf("Host %s did not transtion into desired mode(s)\n", host)
}
}
fmt.Printf("error: %+v\n", err.Error())
os.Exit(1)
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "getPendingReasons":
fmt.Println("Getting pending reasons")
taskQ := &aurora.TaskQuery{
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
reasons, err := r.GetPendingReason(taskQ)
if err != nil {
log.Fatalf("error: %+v\n ", err)
}
fmt.Printf("length: %d\n ", len(reasons))
fmt.Printf("tasks: %+v\n", reasons)
case "getJobs":
fmt.Println("GetJobs...role: ", role)
_, result, err := r.GetJobs(role)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Println("map size: ", len(result.Configs))
fmt.Println(result.String())
case "snapshot":
fmt.Println("Forcing scheduler to write snapshot to mesos replicated log")
err := r.Snapshot()
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
case "performBackup":
fmt.Println("Writing Backup of Snapshot to file system")
err := r.PerformBackup()
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
case "forceExplicitRecon":
fmt.Println("Force an explicit recon")
err := r.ForceExplicitTaskReconciliation(nil)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
case "forceImplicitRecon":
fmt.Println("Force an implicit recon")
err := r.ForceImplicitTaskReconciliation()
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
default:
fmt.Println("Command not supported")
os.Exit(1)
log.Fatal("Command not supported")
}
}

examples/config.json Normal file

@ -0,0 +1,13 @@
{
"username": "aurora",
"password": "secret",
"sched_url": "http://192.168.33.7:8081",
"cluster" : {
"name": "devcluster",
"zk": "192.168.33.7",
"scheduler_zk_path": "/aurora/scheduler",
"auth_mechanism": "UNAUTHENTICATED",
"slave_run_directory": "latest",
"slave_root": "/var/lib/mesos"
}
}

examples/job_dce.json Normal file

@ -0,0 +1,21 @@
{
"name": "sampleapp",
"cpu": 0.25,
"ram_mb": 256,
"disk_mb": 100,
"executor": "docker-compose-executor",
"service": true,
"ports": 4,
"instances": 1,
"uris": [
{
"uri": "http://192.168.33.8/app.tar.gz",
"extract": true,
"cache": false
}
],
"labels":{
"fileName":"sampleapp/docker-compose.yml,sampleapp/docker-compose-healthcheck.yml"
}
}

examples/job_thermos.json Normal file

@ -0,0 +1,11 @@
{
"name": "hello_world_from_gorealis",
"cpu": 1.0,
"ram_mb": 64,
"disk_mb": 100,
"executor": "thermos",
"exec_data_file": "examples/thermos_payload.json",
"service": true,
"ports": 1,
"instances": 1
}


@ -18,9 +18,14 @@ import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"time"
"github.com/paypal/gorealis"
realis "github.com/paypal/gorealis"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
type URIJson struct {
@ -30,16 +35,17 @@ type URIJson struct {
}
type JobJson struct {
Name string `json:"name"`
CPU float64 `json:"cpu"`
RAM int64 `json:"ram_mb"`
Disk int64 `json:"disk_mb"`
Executor string `json:"executor"`
Instances int32 `json:"instances"`
URIs []URIJson `json:"uris"`
Labels map[string]string `json:"labels"`
Service bool `json:"service"`
Ports int `json:"ports"`
Name string `json:"name"`
CPU float64 `json:"cpu"`
RAM int64 `json:"ram_mb"`
Disk int64 `json:"disk_mb"`
Executor string `json:"executor"`
ExecutorDataFile string `json:"exec_data_file,omitempty"`
Instances int32 `json:"instances"`
URIs []URIJson `json:"uris"`
Labels map[string]string `json:"labels"`
Service bool `json:"service"`
Ports int `json:"ports"`
}
func (j *JobJson) Validate() bool {
@ -63,67 +69,158 @@ func (j *JobJson) Validate() bool {
return true
}
type Config struct {
realis.Cluster `json:"cluster"`
Username string `json:"username"`
Password string `json:"password"`
SchedUrl string `json:"sched_url"`
Transport string `json:"transport,omitempty"`
Debug bool `json:"debug,omitempty"`
}
// Command-line arguments for config and job JSON files.
var configJSONFile, jobJSONFile string
var job *JobJson
var config *Config
// Read command-line arguments and validate them.
// If the Aurora scheduler URL is not provided, use Zookeeper to locate the leader.
func init() {
flag.StringVar(&configJSONFile, "config", "./config.json", "The config file that contains username, password, and the cluster configuration information.")
flag.StringVar(&jobJSONFile, "job", "./job.json", "JSON file containing job definitions.")
jsonFile := flag.String("file", "", "JSON file containing job definition")
flag.Parse()
if *jsonFile == "" {
job = new(JobJson)
config = new(Config)
if jobsFile, jobJSONReadErr := os.Open(jobJSONFile); jobJSONReadErr != nil {
flag.Usage()
fmt.Println("Error reading the job JSON file: ", jobJSONReadErr)
os.Exit(1)
} else {
if unmarshallErr := json.NewDecoder(jobsFile).Decode(job); unmarshallErr != nil {
flag.Usage()
fmt.Println("Error parsing job json file: ", unmarshallErr)
os.Exit(1)
}
// Need to validate the job JSON file.
if !job.Validate() {
fmt.Println("Invalid Job.")
os.Exit(1)
}
}
if configFile, configJSONErr := os.Open(configJSONFile); configJSONErr != nil {
flag.Usage()
fmt.Println("Error reading the config JSON file: ", configJSONErr)
os.Exit(1)
} else {
if unmarshallErr := json.NewDecoder(configFile).Decode(config); unmarshallErr != nil {
fmt.Println("Error parsing config JSON file: ", unmarshallErr)
os.Exit(1)
}
}
}
func CreateRealisClient(config *Config) (realis.Realis, error) {
var transportOption realis.ClientOption
// Configuring the transport protocol. If no transport is provided, JSON is
// used as the default transport protocol.
switch config.Transport {
case "binary":
transportOption = realis.ThriftBinary()
case "json", "":
transportOption = realis.ThriftJSON()
default:
fmt.Println("Invalid transport option provided!")
os.Exit(1)
}
clientOptions := []realis.ClientOption{
realis.BasicAuth(config.Username, config.Password),
transportOption,
realis.ZKCluster(&config.Cluster),
// realis.SchedulerUrl(config.SchedUrl),
realis.SetLogger(log.New(os.Stdout, "realis-debug: ", log.Ldate)),
realis.BackOff(realis.Backoff{
Steps: 2,
Duration: 10 * time.Second,
Factor: 2.0,
Jitter: 0.1,
}),
}
if config.Debug {
clientOptions = append(clientOptions, realis.Debug())
}
return realis.NewRealisClient(clientOptions...)
}
func main() {
if r, clientCreationErr := CreateRealisClient(config); clientCreationErr != nil {
fmt.Println(clientCreationErr)
os.Exit(1)
} else {
monitor := &realis.Monitor{Client: r}
defer r.Close()
uris := job.URIs
labels := job.Labels
auroraJob := realis.NewJob().
Environment("prod").
Role("vagrant").
Name(job.Name).
CPU(job.CPU).
RAM(job.RAM).
Disk(job.Disk).
IsService(job.Service).
InstanceCount(job.Instances).
AddPorts(job.Ports)
// If thermos executor, then reading in the thermos payload.
if (job.Executor == aurora.AURORA_EXECUTOR_NAME) || (job.Executor == "thermos") {
payload, err := ioutil.ReadFile(job.ExecutorDataFile)
if err != nil {
fmt.Println(errors.Wrap(err, "Invalid thermos payload file!"))
os.Exit(1)
}
auroraJob.ExecutorName(aurora.AURORA_EXECUTOR_NAME).
ExecutorData(string(payload))
} else {
auroraJob.ExecutorName(job.Executor)
}
// Adding URIs.
for _, uri := range uris {
auroraJob.AddURIs(uri.Extract, uri.Cache, uri.URI)
}
// Adding Labels.
for key, value := range labels {
auroraJob.AddLabel(key, value)
}
fmt.Println("Creating Job...")
if resp, jobCreationErr := r.CreateJob(auroraJob); jobCreationErr != nil {
fmt.Println("Error creating Aurora job: ", jobCreationErr)
os.Exit(1)
} else {
if resp.ResponseCode == aurora.ResponseCode_OK {
if ok, monitorErr := monitor.Instances(auroraJob.JobKey(), auroraJob.GetInstanceCount(), 5, 50); !ok || monitorErr != nil {
if _, jobErr := r.KillJob(auroraJob.JobKey()); jobErr != nil {
fmt.Println(jobErr)
os.Exit(1)
} else {
fmt.Println("ok: ", ok)
fmt.Println("jobErr: ", jobErr)
}
}
}
}
}
}


@ -0,0 +1,6 @@
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package aurora
var GoUnusedProtection__ int;


@ -0,0 +1,53 @@
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package aurora
import(
"bytes"
"context"
"fmt"
"time"
"github.com/apache/thrift/lib/go/thrift"
)
// (needed to ensure safety because of naive import list construction.)
var _ = thrift.ZERO
var _ = fmt.Printf
var _ = context.Background
var _ = time.Now
var _ = bytes.Equal
const AURORA_EXECUTOR_NAME = "AuroraExecutor"
var ACTIVE_STATES []ScheduleStatus
var SLAVE_ASSIGNED_STATES []ScheduleStatus
var LIVE_STATES []ScheduleStatus
var TERMINAL_STATES []ScheduleStatus
const GOOD_IDENTIFIER_PATTERN = "^[\\w\\-\\.]+$"
const GOOD_IDENTIFIER_PATTERN_JVM = "^[\\w\\-\\.]+$"
const GOOD_IDENTIFIER_PATTERN_PYTHON = "^[\\w\\-\\.]+$"
var ACTIVE_JOB_UPDATE_STATES []JobUpdateStatus
var AWAITNG_PULSE_JOB_UPDATE_STATES []JobUpdateStatus
const BYPASS_LEADER_REDIRECT_HEADER_NAME = "Bypass-Leader-Redirect"
const TASK_FILESYSTEM_MOUNT_POINT = "taskfs"
func init() {
ACTIVE_STATES = []ScheduleStatus{
9, 17, 6, 0, 13, 12, 2, 1, 18, 16, }
SLAVE_ASSIGNED_STATES = []ScheduleStatus{
9, 17, 6, 13, 12, 2, 18, 1, }
LIVE_STATES = []ScheduleStatus{
6, 13, 12, 17, 18, 2, }
TERMINAL_STATES = []ScheduleStatus{
4, 3, 5, 7, }
ACTIVE_JOB_UPDATE_STATES = []JobUpdateStatus{
0, 1, 2, 3, 9, 10, }
AWAITNG_PULSE_JOB_UPDATE_STATES = []JobUpdateStatus{
9, 10, }
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,86 +0,0 @@
// Autogenerated by Thrift Compiler (0.9.3)
// DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
package aurora
import (
"bytes"
"fmt"
"git.apache.org/thrift.git/lib/go/thrift"
)
// (needed to ensure safety because of naive import list construction.)
var _ = thrift.ZERO
var _ = fmt.Printf
var _ = bytes.Equal
const AURORA_EXECUTOR_NAME = "AuroraExecutor"
var ACTIVE_STATES map[ScheduleStatus]bool
var SLAVE_ASSIGNED_STATES map[ScheduleStatus]bool
var LIVE_STATES map[ScheduleStatus]bool
var TERMINAL_STATES map[ScheduleStatus]bool
const GOOD_IDENTIFIER_PATTERN = "^[\\w\\-\\.]+$"
const GOOD_IDENTIFIER_PATTERN_JVM = "^[\\w\\-\\.]+$"
const GOOD_IDENTIFIER_PATTERN_PYTHON = "^[\\w\\-\\.]+$"
var ACTIVE_JOB_UPDATE_STATES map[JobUpdateStatus]bool
var AWAITNG_PULSE_JOB_UPDATE_STATES map[JobUpdateStatus]bool
const BYPASS_LEADER_REDIRECT_HEADER_NAME = "Bypass-Leader-Redirect"
const TASK_FILESYSTEM_MOUNT_POINT = "taskfs"
func init() {
ACTIVE_STATES = map[ScheduleStatus]bool{
9: true,
17: true,
6: true,
0: true,
13: true,
12: true,
2: true,
1: true,
16: true,
}
SLAVE_ASSIGNED_STATES = map[ScheduleStatus]bool{
9: true,
17: true,
6: true,
13: true,
12: true,
2: true,
1: true,
}
LIVE_STATES = map[ScheduleStatus]bool{
6: true,
13: true,
12: true,
17: true,
2: true,
}
TERMINAL_STATES = map[ScheduleStatus]bool{
4: true,
3: true,
5: true,
7: true,
}
ACTIVE_JOB_UPDATE_STATES = map[JobUpdateStatus]bool{
0: true,
1: true,
2: true,
3: true,
9: true,
10: true,
}
AWAITNG_PULSE_JOB_UPDATE_STATES = map[JobUpdateStatus]bool{
9: true,
10: true,
}
}


@ -1,382 +1,411 @@
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package main
import (
"apache/aurora"
"context"
"flag"
"fmt"
"git.apache.org/thrift.git/lib/go/thrift"
"math"
"net"
"net/url"
"os"
"strconv"
"strings"
"github.com/apache/thrift/lib/go/thrift"
"apache/aurora"
)
var _ = aurora.GoUnusedProtection__
func Usage() {
fmt.Fprintln(os.Stderr, "Usage of ", os.Args[0], " [-h host:port] [-u url] [-f[ramed]] function [arg1 [arg2...]]:")
flag.PrintDefaults()
fmt.Fprintln(os.Stderr, "\nFunctions:")
fmt.Fprintln(os.Stderr, " Response getRoleSummary()")
fmt.Fprintln(os.Stderr, " Response getJobSummary(string role)")
fmt.Fprintln(os.Stderr, " Response getTasksStatus(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getTasksWithoutConfigs(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getPendingReason(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getConfigSummary(JobKey job)")
fmt.Fprintln(os.Stderr, " Response getJobs(string ownerRole)")
fmt.Fprintln(os.Stderr, " Response getQuota(string ownerRole)")
fmt.Fprintln(os.Stderr, " Response populateJobConfig(JobConfiguration description)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateSummaries(JobUpdateQuery jobUpdateQuery)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateDetails(JobUpdateQuery query)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateDiff(JobUpdateRequest request)")
fmt.Fprintln(os.Stderr, " Response getTierConfigs()")
fmt.Fprintln(os.Stderr)
os.Exit(0)
fmt.Fprintln(os.Stderr, "Usage of ", os.Args[0], " [-h host:port] [-u url] [-f[ramed]] function [arg1 [arg2...]]:")
flag.PrintDefaults()
fmt.Fprintln(os.Stderr, "\nFunctions:")
fmt.Fprintln(os.Stderr, " Response getRoleSummary()")
fmt.Fprintln(os.Stderr, " Response getJobSummary(string role)")
fmt.Fprintln(os.Stderr, " Response getTasksStatus(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getTasksWithoutConfigs(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getPendingReason(TaskQuery query)")
fmt.Fprintln(os.Stderr, " Response getConfigSummary(JobKey job)")
fmt.Fprintln(os.Stderr, " Response getJobs(string ownerRole)")
fmt.Fprintln(os.Stderr, " Response getQuota(string ownerRole)")
fmt.Fprintln(os.Stderr, " Response populateJobConfig(JobConfiguration description)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateSummaries(JobUpdateQuery jobUpdateQuery)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateDetails(JobUpdateQuery query)")
fmt.Fprintln(os.Stderr, " Response getJobUpdateDiff(JobUpdateRequest request)")
fmt.Fprintln(os.Stderr, " Response getTierConfigs()")
fmt.Fprintln(os.Stderr)
os.Exit(0)
}
type httpHeaders map[string]string
func (h httpHeaders) String() string {
var m map[string]string = h
return fmt.Sprintf("%s", m)
}
func (h httpHeaders) Set(value string) error {
parts := strings.Split(value, ": ")
if len(parts) != 2 {
return fmt.Errorf("header should be of format 'Key: Value'")
}
h[parts[0]] = parts[1]
return nil
}
func main() {
flag.Usage = Usage
var host string
var port int
var protocol string
var urlString string
var framed bool
var useHttp bool
var parsedUrl url.URL
var trans thrift.TTransport
_ = strconv.Atoi
_ = math.Abs
flag.Usage = Usage
flag.StringVar(&host, "h", "localhost", "Specify host and port")
flag.IntVar(&port, "p", 9090, "Specify port")
flag.StringVar(&protocol, "P", "binary", "Specify the protocol (binary, compact, simplejson, json)")
flag.StringVar(&urlString, "u", "", "Specify the url")
flag.BoolVar(&framed, "framed", false, "Use framed transport")
flag.BoolVar(&useHttp, "http", false, "Use http")
flag.Parse()
if len(urlString) > 0 {
parsedUrl, err := url.Parse(urlString)
if err != nil {
fmt.Fprintln(os.Stderr, "Error parsing URL: ", err)
flag.Usage()
}
host = parsedUrl.Host
useHttp = len(parsedUrl.Scheme) <= 0 || parsedUrl.Scheme == "http"
} else if useHttp {
_, err := url.Parse(fmt.Sprint("http://", host, ":", port))
if err != nil {
fmt.Fprintln(os.Stderr, "Error parsing URL: ", err)
flag.Usage()
}
}
cmd := flag.Arg(0)
var err error
if useHttp {
trans, err = thrift.NewTHttpClient(parsedUrl.String())
} else {
portStr := fmt.Sprint(port)
if strings.Contains(host, ":") {
host, portStr, err = net.SplitHostPort(host)
if err != nil {
fmt.Fprintln(os.Stderr, "error with host:", err)
os.Exit(1)
}
}
trans, err = thrift.NewTSocket(net.JoinHostPort(host, portStr))
if err != nil {
fmt.Fprintln(os.Stderr, "error resolving address:", err)
os.Exit(1)
}
if framed {
trans = thrift.NewTFramedTransport(trans)
}
}
if err != nil {
fmt.Fprintln(os.Stderr, "Error creating transport", err)
os.Exit(1)
}
defer trans.Close()
var protocolFactory thrift.TProtocolFactory
switch protocol {
case "compact":
protocolFactory = thrift.NewTCompactProtocolFactory()
break
case "simplejson":
protocolFactory = thrift.NewTSimpleJSONProtocolFactory()
break
case "json":
protocolFactory = thrift.NewTJSONProtocolFactory()
break
case "binary", "":
protocolFactory = thrift.NewTBinaryProtocolFactoryDefault()
break
default:
fmt.Fprintln(os.Stderr, "Invalid protocol specified: ", protocol)
Usage()
os.Exit(1)
}
client := aurora.NewReadOnlySchedulerClientFactory(trans, protocolFactory)
if err := trans.Open(); err != nil {
fmt.Fprintln(os.Stderr, "Error opening socket to ", host, ":", port, " ", err)
os.Exit(1)
}
switch cmd {
case "getRoleSummary":
if flag.NArg()-1 != 0 {
fmt.Fprintln(os.Stderr, "GetRoleSummary requires 0 args")
flag.Usage()
}
fmt.Print(client.GetRoleSummary())
fmt.Print("\n")
break
case "getJobSummary":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobSummary requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetJobSummary(value0))
fmt.Print("\n")
break
case "getTasksStatus":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetTasksStatus requires 1 args")
flag.Usage()
}
arg82 := flag.Arg(1)
mbTrans83 := thrift.NewTMemoryBufferLen(len(arg82))
defer mbTrans83.Close()
_, err84 := mbTrans83.WriteString(arg82)
if err84 != nil {
Usage()
return
}
factory85 := thrift.NewTSimpleJSONProtocolFactory()
jsProt86 := factory85.GetProtocol(mbTrans83)
argvalue0 := aurora.NewTaskQuery()
err87 := argvalue0.Read(jsProt86)
if err87 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetTasksStatus(value0))
fmt.Print("\n")
break
case "getTasksWithoutConfigs":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetTasksWithoutConfigs requires 1 args")
flag.Usage()
}
arg88 := flag.Arg(1)
mbTrans89 := thrift.NewTMemoryBufferLen(len(arg88))
defer mbTrans89.Close()
_, err90 := mbTrans89.WriteString(arg88)
if err90 != nil {
Usage()
return
}
factory91 := thrift.NewTSimpleJSONProtocolFactory()
jsProt92 := factory91.GetProtocol(mbTrans89)
argvalue0 := aurora.NewTaskQuery()
err93 := argvalue0.Read(jsProt92)
if err93 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetTasksWithoutConfigs(value0))
fmt.Print("\n")
break
case "getPendingReason":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetPendingReason requires 1 args")
flag.Usage()
}
arg94 := flag.Arg(1)
mbTrans95 := thrift.NewTMemoryBufferLen(len(arg94))
defer mbTrans95.Close()
_, err96 := mbTrans95.WriteString(arg94)
if err96 != nil {
Usage()
return
}
factory97 := thrift.NewTSimpleJSONProtocolFactory()
jsProt98 := factory97.GetProtocol(mbTrans95)
argvalue0 := aurora.NewTaskQuery()
err99 := argvalue0.Read(jsProt98)
if err99 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetPendingReason(value0))
fmt.Print("\n")
break
case "getConfigSummary":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetConfigSummary requires 1 args")
flag.Usage()
}
arg100 := flag.Arg(1)
mbTrans101 := thrift.NewTMemoryBufferLen(len(arg100))
defer mbTrans101.Close()
_, err102 := mbTrans101.WriteString(arg100)
if err102 != nil {
Usage()
return
}
factory103 := thrift.NewTSimpleJSONProtocolFactory()
jsProt104 := factory103.GetProtocol(mbTrans101)
argvalue0 := aurora.NewJobKey()
err105 := argvalue0.Read(jsProt104)
if err105 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetConfigSummary(value0))
fmt.Print("\n")
break
case "getJobs":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobs requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetJobs(value0))
fmt.Print("\n")
break
case "getQuota":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetQuota requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetQuota(value0))
fmt.Print("\n")
break
case "populateJobConfig":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "PopulateJobConfig requires 1 args")
flag.Usage()
}
arg108 := flag.Arg(1)
mbTrans109 := thrift.NewTMemoryBufferLen(len(arg108))
defer mbTrans109.Close()
_, err110 := mbTrans109.WriteString(arg108)
if err110 != nil {
Usage()
return
}
factory111 := thrift.NewTSimpleJSONProtocolFactory()
jsProt112 := factory111.GetProtocol(mbTrans109)
argvalue0 := aurora.NewJobConfiguration()
err113 := argvalue0.Read(jsProt112)
if err113 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.PopulateJobConfig(value0))
fmt.Print("\n")
break
case "getJobUpdateSummaries":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateSummaries requires 1 args")
flag.Usage()
}
arg114 := flag.Arg(1)
mbTrans115 := thrift.NewTMemoryBufferLen(len(arg114))
defer mbTrans115.Close()
_, err116 := mbTrans115.WriteString(arg114)
if err116 != nil {
Usage()
return
}
factory117 := thrift.NewTSimpleJSONProtocolFactory()
jsProt118 := factory117.GetProtocol(mbTrans115)
argvalue0 := aurora.NewJobUpdateQuery()
err119 := argvalue0.Read(jsProt118)
if err119 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateSummaries(value0))
fmt.Print("\n")
break
case "getJobUpdateDetails":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateDetails requires 1 args")
flag.Usage()
}
arg120 := flag.Arg(1)
mbTrans121 := thrift.NewTMemoryBufferLen(len(arg120))
defer mbTrans121.Close()
_, err122 := mbTrans121.WriteString(arg120)
if err122 != nil {
Usage()
return
}
factory123 := thrift.NewTSimpleJSONProtocolFactory()
jsProt124 := factory123.GetProtocol(mbTrans121)
argvalue0 := aurora.NewJobUpdateQuery()
err125 := argvalue0.Read(jsProt124)
if err125 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateDetails(value0))
fmt.Print("\n")
break
case "getJobUpdateDiff":
if flag.NArg()-1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateDiff requires 1 args")
flag.Usage()
}
arg126 := flag.Arg(1)
mbTrans127 := thrift.NewTMemoryBufferLen(len(arg126))
defer mbTrans127.Close()
_, err128 := mbTrans127.WriteString(arg126)
if err128 != nil {
Usage()
return
}
factory129 := thrift.NewTSimpleJSONProtocolFactory()
jsProt130 := factory129.GetProtocol(mbTrans127)
argvalue0 := aurora.NewJobUpdateRequest()
err131 := argvalue0.Read(jsProt130)
if err131 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateDiff(value0))
fmt.Print("\n")
break
case "getTierConfigs":
if flag.NArg()-1 != 0 {
fmt.Fprintln(os.Stderr, "GetTierConfigs requires 0 args")
flag.Usage()
}
fmt.Print(client.GetTierConfigs())
fmt.Print("\n")
break
case "":
Usage()
break
default:
fmt.Fprintln(os.Stderr, "Invalid function ", cmd)
}
flag.Usage = Usage
var host string
var port int
var protocol string
var urlString string
var framed bool
var useHttp bool
headers := make(httpHeaders)
var parsedUrl *url.URL
var trans thrift.TTransport
_ = strconv.Atoi
_ = math.Abs
flag.Usage = Usage
flag.StringVar(&host, "h", "localhost", "Specify host and port")
flag.IntVar(&port, "p", 9090, "Specify port")
flag.StringVar(&protocol, "P", "binary", "Specify the protocol (binary, compact, simplejson, json)")
flag.StringVar(&urlString, "u", "", "Specify the url")
flag.BoolVar(&framed, "framed", false, "Use framed transport")
flag.BoolVar(&useHttp, "http", false, "Use http")
flag.Var(headers, "H", "Headers to set on the http(s) request (e.g. -H \"Key: Value\")")
flag.Parse()
if len(urlString) > 0 {
var err error
parsedUrl, err = url.Parse(urlString)
if err != nil {
fmt.Fprintln(os.Stderr, "Error parsing URL: ", err)
flag.Usage()
}
host = parsedUrl.Host
useHttp = len(parsedUrl.Scheme) <= 0 || parsedUrl.Scheme == "http" || parsedUrl.Scheme == "https"
} else if useHttp {
_, err := url.Parse(fmt.Sprint("http://", host, ":", port))
if err != nil {
fmt.Fprintln(os.Stderr, "Error parsing URL: ", err)
flag.Usage()
}
}
cmd := flag.Arg(0)
var err error
if useHttp {
trans, err = thrift.NewTHttpClient(parsedUrl.String())
if len(headers) > 0 {
httptrans := trans.(*thrift.THttpClient)
for key, value := range headers {
httptrans.SetHeader(key, value)
}
}
} else {
portStr := fmt.Sprint(port)
if strings.Contains(host, ":") {
host, portStr, err = net.SplitHostPort(host)
if err != nil {
fmt.Fprintln(os.Stderr, "error with host:", err)
os.Exit(1)
}
}
trans, err = thrift.NewTSocket(net.JoinHostPort(host, portStr))
if err != nil {
fmt.Fprintln(os.Stderr, "error resolving address:", err)
os.Exit(1)
}
if framed {
trans = thrift.NewTFramedTransport(trans)
}
}
if err != nil {
fmt.Fprintln(os.Stderr, "Error creating transport", err)
os.Exit(1)
}
defer trans.Close()
var protocolFactory thrift.TProtocolFactory
switch protocol {
case "compact":
protocolFactory = thrift.NewTCompactProtocolFactory()
break
case "simplejson":
protocolFactory = thrift.NewTSimpleJSONProtocolFactory()
break
case "json":
protocolFactory = thrift.NewTJSONProtocolFactory()
break
case "binary", "":
protocolFactory = thrift.NewTBinaryProtocolFactoryDefault()
break
default:
fmt.Fprintln(os.Stderr, "Invalid protocol specified: ", protocol)
Usage()
os.Exit(1)
}
iprot := protocolFactory.GetProtocol(trans)
oprot := protocolFactory.GetProtocol(trans)
client := aurora.NewReadOnlySchedulerClient(thrift.NewTStandardClient(iprot, oprot))
if err := trans.Open(); err != nil {
fmt.Fprintln(os.Stderr, "Error opening socket to ", host, ":", port, " ", err)
os.Exit(1)
}
switch cmd {
case "getRoleSummary":
if flag.NArg() - 1 != 0 {
fmt.Fprintln(os.Stderr, "GetRoleSummary requires 0 args")
flag.Usage()
}
fmt.Print(client.GetRoleSummary(context.Background()))
fmt.Print("\n")
break
case "getJobSummary":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobSummary requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetJobSummary(context.Background(), value0))
fmt.Print("\n")
break
case "getTasksStatus":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetTasksStatus requires 1 args")
flag.Usage()
}
arg132 := flag.Arg(1)
mbTrans133 := thrift.NewTMemoryBufferLen(len(arg132))
defer mbTrans133.Close()
_, err134 := mbTrans133.WriteString(arg132)
if err134 != nil {
Usage()
return
}
factory135 := thrift.NewTJSONProtocolFactory()
jsProt136 := factory135.GetProtocol(mbTrans133)
argvalue0 := aurora.NewTaskQuery()
err137 := argvalue0.Read(context.Background(), jsProt136)
if err137 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetTasksStatus(context.Background(), value0))
fmt.Print("\n")
break
case "getTasksWithoutConfigs":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetTasksWithoutConfigs requires 1 args")
flag.Usage()
}
arg138 := flag.Arg(1)
mbTrans139 := thrift.NewTMemoryBufferLen(len(arg138))
defer mbTrans139.Close()
_, err140 := mbTrans139.WriteString(arg138)
if err140 != nil {
Usage()
return
}
factory141 := thrift.NewTJSONProtocolFactory()
jsProt142 := factory141.GetProtocol(mbTrans139)
argvalue0 := aurora.NewTaskQuery()
err143 := argvalue0.Read(context.Background(), jsProt142)
if err143 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetTasksWithoutConfigs(context.Background(), value0))
fmt.Print("\n")
break
case "getPendingReason":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetPendingReason requires 1 args")
flag.Usage()
}
arg144 := flag.Arg(1)
mbTrans145 := thrift.NewTMemoryBufferLen(len(arg144))
defer mbTrans145.Close()
_, err146 := mbTrans145.WriteString(arg144)
if err146 != nil {
Usage()
return
}
factory147 := thrift.NewTJSONProtocolFactory()
jsProt148 := factory147.GetProtocol(mbTrans145)
argvalue0 := aurora.NewTaskQuery()
err149 := argvalue0.Read(context.Background(), jsProt148)
if err149 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetPendingReason(context.Background(), value0))
fmt.Print("\n")
break
case "getConfigSummary":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetConfigSummary requires 1 args")
flag.Usage()
}
arg150 := flag.Arg(1)
mbTrans151 := thrift.NewTMemoryBufferLen(len(arg150))
defer mbTrans151.Close()
_, err152 := mbTrans151.WriteString(arg150)
if err152 != nil {
Usage()
return
}
factory153 := thrift.NewTJSONProtocolFactory()
jsProt154 := factory153.GetProtocol(mbTrans151)
argvalue0 := aurora.NewJobKey()
err155 := argvalue0.Read(context.Background(), jsProt154)
if err155 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetConfigSummary(context.Background(), value0))
fmt.Print("\n")
break
case "getJobs":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobs requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetJobs(context.Background(), value0))
fmt.Print("\n")
break
case "getQuota":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetQuota requires 1 args")
flag.Usage()
}
argvalue0 := flag.Arg(1)
value0 := argvalue0
fmt.Print(client.GetQuota(context.Background(), value0))
fmt.Print("\n")
break
case "populateJobConfig":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "PopulateJobConfig requires 1 args")
flag.Usage()
}
arg158 := flag.Arg(1)
mbTrans159 := thrift.NewTMemoryBufferLen(len(arg158))
defer mbTrans159.Close()
_, err160 := mbTrans159.WriteString(arg158)
if err160 != nil {
Usage()
return
}
factory161 := thrift.NewTJSONProtocolFactory()
jsProt162 := factory161.GetProtocol(mbTrans159)
argvalue0 := aurora.NewJobConfiguration()
err163 := argvalue0.Read(context.Background(), jsProt162)
if err163 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.PopulateJobConfig(context.Background(), value0))
fmt.Print("\n")
break
case "getJobUpdateSummaries":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateSummaries requires 1 args")
flag.Usage()
}
arg164 := flag.Arg(1)
mbTrans165 := thrift.NewTMemoryBufferLen(len(arg164))
defer mbTrans165.Close()
_, err166 := mbTrans165.WriteString(arg164)
if err166 != nil {
Usage()
return
}
factory167 := thrift.NewTJSONProtocolFactory()
jsProt168 := factory167.GetProtocol(mbTrans165)
argvalue0 := aurora.NewJobUpdateQuery()
err169 := argvalue0.Read(context.Background(), jsProt168)
if err169 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateSummaries(context.Background(), value0))
fmt.Print("\n")
break
case "getJobUpdateDetails":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateDetails requires 1 args")
flag.Usage()
}
arg170 := flag.Arg(1)
mbTrans171 := thrift.NewTMemoryBufferLen(len(arg170))
defer mbTrans171.Close()
_, err172 := mbTrans171.WriteString(arg170)
if err172 != nil {
Usage()
return
}
factory173 := thrift.NewTJSONProtocolFactory()
jsProt174 := factory173.GetProtocol(mbTrans171)
argvalue0 := aurora.NewJobUpdateQuery()
err175 := argvalue0.Read(context.Background(), jsProt174)
if err175 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateDetails(context.Background(), value0))
fmt.Print("\n")
break
case "getJobUpdateDiff":
if flag.NArg() - 1 != 1 {
fmt.Fprintln(os.Stderr, "GetJobUpdateDiff requires 1 args")
flag.Usage()
}
arg176 := flag.Arg(1)
mbTrans177 := thrift.NewTMemoryBufferLen(len(arg176))
defer mbTrans177.Close()
_, err178 := mbTrans177.WriteString(arg176)
if err178 != nil {
Usage()
return
}
factory179 := thrift.NewTJSONProtocolFactory()
jsProt180 := factory179.GetProtocol(mbTrans177)
argvalue0 := aurora.NewJobUpdateRequest()
err181 := argvalue0.Read(context.Background(), jsProt180)
if err181 != nil {
Usage()
return
}
value0 := argvalue0
fmt.Print(client.GetJobUpdateDiff(context.Background(), value0))
fmt.Print("\n")
break
case "getTierConfigs":
if flag.NArg() - 1 != 0 {
fmt.Fprintln(os.Stderr, "GetTierConfigs requires 0 args")
flag.Usage()
}
fmt.Print(client.GetTierConfigs(context.Background()))
fmt.Print("\n")
break
case "":
Usage()
break
default:
fmt.Fprintln(os.Stderr, "Invalid function ", cmd)
}
}
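
For reference, the same read-only calls can be made programmatically without the generated CLI. A minimal sketch, assuming a scheduler reachable at a hypothetical 127.0.0.1:8081 and the CLI's default binary protocol:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/apache/thrift/lib/go/thrift"
    "github.com/paypal/gorealis/gen-go/apache/aurora"
)

func main() {
    // Hypothetical scheduler address; adjust for your cluster.
    trans, err := thrift.NewTSocket("127.0.0.1:8081")
    if err != nil {
        fmt.Fprintln(os.Stderr, "error creating transport:", err)
        os.Exit(1)
    }
    defer trans.Close()

    // Matches the CLI's default "binary" protocol case above.
    factory := thrift.NewTBinaryProtocolFactoryDefault()
    client := aurora.NewReadOnlySchedulerClient(
        thrift.NewTStandardClient(factory.GetProtocol(trans), factory.GetProtocol(trans)))

    if err := trans.Open(); err != nil {
        fmt.Fprintln(os.Stderr, "error opening socket:", err)
        os.Exit(1)
    }

    // Equivalent of the CLI's getRoleSummary command.
    resp, err := client.GetRoleSummary(context.Background())
    if err != nil {
        fmt.Fprintln(os.Stderr, "error calling getRoleSummary:", err)
        os.Exit(1)
    }
    fmt.Println(resp)
}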

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,6 +1,6 @@
#! /bin/bash
THRIFT_VER=0.9.3
THRIFT_VER=0.14.0
if [[ $(thrift -version | grep -e $THRIFT_VER -c) -ne 1 ]]; then
echo "Warning: This wrapper has only been tested with version" $THRIFT_VER;

go.mod (new file, +12 lines)

@ -0,0 +1,12 @@
module github.com/paypal/gorealis
go 1.13
require (
github.com/apache/thrift v0.14.0
github.com/davecgh/go-spew v1.1.0 // indirect
github.com/pkg/errors v0.9.1
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a
github.com/stretchr/testify v1.7.0
)

go.sum (new file, +30 lines)

@ -0,0 +1,30 @@
github.com/apache/thrift v0.13.0 h1:5hryIiq9gtn+MiLVn0wP37kb/uTeRZgN08WoCsAhIhI=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.14.0 h1:vqZ2DP42i8th2OsgCcYZkirtbzvpZEFx53LiWDJXIAs=
github.com/apache/thrift v0.14.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e h1:+RHxT/gm0O3UF7nLJbdNzAmULvCFt4XfXHWzh3XI/zs=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/ridv/thrift v0.12.1 h1:b80V1Oa2Mbd++jrlJZbJsIybO5/MCfbXKzd1A5v4aSo=
github.com/ridv/thrift v0.12.1/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/ridv/thrift v0.13.1 h1:/8XnTRUqJJeiuqoL7mfnJQmXQa4GJn9tUCiP7+i6Y9o=
github.com/ridv/thrift v0.13.1/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/ridv/thrift v0.13.2 h1:Q3Smr8poXd7VkWZPHvdJZzlQCJO+b5W37ECfoUL4qHc=
github.com/ridv/thrift v0.13.2/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a h1:EYL2xz/Zdo0hyqdZMXR4lmT2O11jDLTPCEqIe/FR6W4=
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.0 h1:LThGCOvhuJic9Gyd1VBCkhyUXmO8vKaBFvBsJ2k03rg=
github.com/stretchr/testify v1.2.0/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

helpers.go (new file, +21 lines)

@ -0,0 +1,21 @@
package realis
import (
"context"
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
func (r *realisClient) jobExists(key aurora.JobKey) (bool, error) {
resp, err := r.client.GetConfigSummary(context.TODO(), &key)
if err != nil {
return false, err
}
return resp == nil ||
resp.GetResult_() == nil ||
resp.GetResult_().GetConfigSummaryResult_() == nil ||
resp.GetResult_().GetConfigSummaryResult_().GetSummary() == nil ||
resp.GetResponseCode() != aurora.ResponseCode_OK,
nil
}
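
This helper exists to support the timeout verification mechanism added to retry.go further down. A sketch, inside package realis, of how jobExists might be wired in as the verifyOnTimeout callback for a create call; createJobWithVerify is a hypothetical wrapper and not part of this changeset:

func (r *realisClient) createJobWithVerify(auroraJob Job) (*aurora.Response, error) {
    return r.thriftCallWithRetries(
        false,
        func() (*aurora.Response, error) {
            return r.client.CreateJob(context.TODO(), auroraJob.JobConfig())
        },
        // Invoked only after a client-side timeout: a best-effort check of whether
        // the call reached the scheduler before the connection was closed.
        func() (*aurora.Response, bool) {
            ok, err := r.jobExists(*auroraJob.JobKey())
            if err != nil {
                return nil, false
            }
            // Mimic an OK response when the check passes; see the note on mimicked
            // responses in thriftCallWithRetries.
            return &aurora.Response{ResponseCode: aurora.ResponseCode_OK}, ok
        },
    )
}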

job.go (242 lines changed)

@ -20,16 +20,20 @@ import (
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Job interface is used to define a set of functions an Aurora Job object
// must implement.
// TODO(rdelvalle): Consider getting rid of the Job interface
type Job interface {
// Set Job Key environment.
Environment(env string) Job
Role(role string) Job
Name(name string) Job
CPU(cpus float64) Job
CronSchedule(cron string) Job
CronCollisionPolicy(policy aurora.CronCollisionPolicy) Job
CPU(cpus float64) Job
Disk(disk int64) Job
RAM(ram int64) Job
GPU(gpu int64) Job
ExecutorName(name string) Job
ExecutorData(data string) Job
AddPorts(num int) Job
@ -37,6 +41,15 @@ type Job interface {
AddNamedPorts(names ...string) Job
AddLimitConstraint(name string, limit int32) Job
AddValueConstraint(name string, negated bool, values ...string) Job
// From Aurora Docs:
// dedicated attribute. Aurora treats this specially, and only allows matching jobs
// to run on these machines, and will only schedule matching jobs on these machines.
// When a job is created, the scheduler requires that the $role component matches
// the role field in the job configuration, and will reject the job creation otherwise.
// A wildcard (*) may be used for the role portion of the dedicated attribute, which
// will allow any owner to elect for a job to run on the host(s)
AddDedicatedConstraint(role, name string) Job
AddURIs(extract bool, cache bool, values ...string) Job
JobKey() *aurora.JobKey
JobConfig() *aurora.JobConfiguration
@ -46,83 +59,92 @@ type Job interface {
GetInstanceCount() int32
MaxFailure(maxFail int32) Job
Container(container Container) Job
PartitionPolicy(policy *aurora.PartitionPolicy) Job
Tier(tier string) Job
SlaPolicy(policy *aurora.SlaPolicy) Job
Priority(priority int32) Job
}
// Structure to collect all information pertaining to an Aurora job.
type resourceType int
const (
CPU resourceType = iota
RAM
DISK
GPU
)
const portNamePrefix = "org.apache.aurora.port."
// AuroraJob is a structure to collect all information pertaining to an Aurora job.
type AuroraJob struct {
jobConfig *aurora.JobConfiguration
resources map[string]*aurora.Resource
portCount int
jobConfig *aurora.JobConfiguration
resources map[resourceType]*aurora.Resource
metadata map[string]*aurora.Metadata
constraints map[string]*aurora.Constraint
portCount int
}
// Create a Job object with everything initialized.
// NewJob is used to create a Job object with everything initialized.
func NewJob() Job {
jobConfig := aurora.NewJobConfiguration()
taskConfig := aurora.NewTaskConfig()
jobKey := aurora.NewJobKey()
//Job Config
// Job Config
jobConfig.Key = jobKey
jobConfig.TaskConfig = taskConfig
//Task Config
// Task Config
taskConfig.Job = jobKey
taskConfig.Container = aurora.NewContainer()
taskConfig.Container.Mesos = aurora.NewMesosContainer()
taskConfig.MesosFetcherUris = make(map[*aurora.MesosFetcherURI]bool)
taskConfig.Metadata = make(map[*aurora.Metadata]bool)
taskConfig.Constraints = make(map[*aurora.Constraint]bool)
//Resources
// Resources
numCpus := aurora.NewResource()
ramMb := aurora.NewResource()
diskMb := aurora.NewResource()
resources := make(map[string]*aurora.Resource)
resources["cpu"] = numCpus
resources["ram"] = ramMb
resources["disk"] = diskMb
taskConfig.Resources = make(map[*aurora.Resource]bool)
taskConfig.Resources[numCpus] = true
taskConfig.Resources[ramMb] = true
taskConfig.Resources[diskMb] = true
resources := map[resourceType]*aurora.Resource{CPU: numCpus, RAM: ramMb, DISK: diskMb}
taskConfig.Resources = []*aurora.Resource{numCpus, ramMb, diskMb}
numCpus.NumCpus = new(float64)
ramMb.RamMb = new(int64)
diskMb.DiskMb = new(int64)
return &AuroraJob{
jobConfig: jobConfig,
resources: resources,
portCount: 0,
jobConfig: jobConfig,
resources: resources,
metadata: make(map[string]*aurora.Metadata),
constraints: make(map[string]*aurora.Constraint),
portCount: 0,
}
}
// Set Job Key environment.
// Environment sets the Job Key environment.
func (j *AuroraJob) Environment(env string) Job {
j.jobConfig.Key.Environment = env
return j
}
// Set Job Key Role.
// Role sets the Job Key role.
func (j *AuroraJob) Role(role string) Job {
j.jobConfig.Key.Role = role
//Will be deprecated
// Will be deprecated
identity := &aurora.Identity{User: role}
j.jobConfig.Owner = identity
j.jobConfig.TaskConfig.Owner = identity
return j
}
// Set Job Key Name.
// Name sets the Job Key Name.
func (j *AuroraJob) Name(name string) Job {
j.jobConfig.Key.Name = name
return j
}
// Set name of the executor that will the task will be configured to.
// ExecutorName sets the name of the executor the task will be configured to use.
func (j *AuroraJob) ExecutorName(name string) Job {
if j.jobConfig.TaskConfig.ExecutorConfig == nil {
@ -133,7 +155,7 @@ func (j *AuroraJob) ExecutorName(name string) Job {
return j
}
// Will be included as part of entire task inside the scheduler that will be serialized.
// ExecutorData sets the data blob that will be passed to the Mesos executor.
func (j *AuroraJob) ExecutorData(data string) Job {
if j.jobConfig.TaskConfig.ExecutorConfig == nil {
@ -144,106 +166,126 @@ func (j *AuroraJob) ExecutorData(data string) Job {
return j
}
// CPU sets the amount of CPU each task will use in an Aurora Job.
func (j *AuroraJob) CPU(cpus float64) Job {
*j.resources["cpu"].NumCpus = cpus
j.jobConfig.TaskConfig.NumCpus = cpus //Will be deprecated soon
*j.resources[CPU].NumCpus = cpus
return j
}
// RAM sets the amount of RAM each task will use in an Aurora Job.
func (j *AuroraJob) RAM(ram int64) Job {
*j.resources["ram"].RamMb = ram
j.jobConfig.TaskConfig.RamMb = ram //Will be deprecated soon
*j.resources[RAM].RamMb = ram
return j
}
// Disk sets the amount of Disk each task will use in an Aurora Job.
func (j *AuroraJob) Disk(disk int64) Job {
*j.resources["disk"].DiskMb = disk
j.jobConfig.TaskConfig.DiskMb = disk //Will be deprecated
*j.resources[DISK].DiskMb = disk
return j
}
// How many failures to tolerate before giving up.
// GPU sets the amount of GPU each task will use in an Aurora Job.
func (j *AuroraJob) GPU(gpu int64) Job {
// GPU resource must be set explicitly since the scheduler by default
// rejects jobs with GPU resources attached to it.
if _, ok := j.resources[GPU]; !ok {
j.resources[GPU] = &aurora.Resource{}
j.JobConfig().GetTaskConfig().Resources = append(
j.JobConfig().GetTaskConfig().Resources,
j.resources[GPU])
}
j.resources[GPU].NumGpus = &gpu
return j
}
// MaxFailure sets how many failures to tolerate before giving up per Job.
func (j *AuroraJob) MaxFailure(maxFail int32) Job {
j.jobConfig.TaskConfig.MaxTaskFailures = maxFail
return j
}
// How many instances of the job to run
// InstanceCount sets how many instances of the task to run for this Job.
func (j *AuroraJob) InstanceCount(instCount int32) Job {
j.jobConfig.InstanceCount = instCount
return j
}
// CronSchedule allows the user to configure a cron schedule for this job to run in.
func (j *AuroraJob) CronSchedule(cron string) Job {
j.jobConfig.CronSchedule = &cron
return j
}
// CronCollisionPolicy allows the user to decide what happens if two or more instances
// of the same Cron job need to run.
func (j *AuroraJob) CronCollisionPolicy(policy aurora.CronCollisionPolicy) Job {
j.jobConfig.CronCollisionPolicy = policy
return j
}
// How many instances of the job to run
// GetInstanceCount returns how many tasks this Job contains.
func (j *AuroraJob) GetInstanceCount() int32 {
return j.jobConfig.InstanceCount
}
// Restart the job's tasks if they fail
// IsService configures the job as a long term running service (true) or an ad-hoc job (false).
func (j *AuroraJob) IsService(isService bool) Job {
j.jobConfig.TaskConfig.IsService = isService
return j
}
// Get the current job configurations key to use for some realis calls.
// JobKey returns the job's configuration key.
func (j *AuroraJob) JobKey() *aurora.JobKey {
return j.jobConfig.Key
}
// Get the current job configurations key to use for some realis calls.
// JobConfig returns the job's configuration.
func (j *AuroraJob) JobConfig() *aurora.JobConfiguration {
return j.jobConfig
}
// TaskConfig returns the job's task(shard) configuration.
func (j *AuroraJob) TaskConfig() *aurora.TaskConfig {
return j.jobConfig.TaskConfig
}
// Add a list of URIs with the same extract and cache configuration. Scheduler must have
// AddURIs adds a list of URIs with the same extract and cache configuration. Scheduler must have
// --enable_mesos_fetcher flag enabled. Currently there is no duplicate detection.
func (j *AuroraJob) AddURIs(extract bool, cache bool, values ...string) Job {
for _, value := range values {
j.jobConfig.TaskConfig.MesosFetcherUris[&aurora.MesosFetcherURI{
Value: value,
Extract: &extract,
Cache: &cache,
}] = true
j.jobConfig.TaskConfig.MesosFetcherUris = append(j.jobConfig.TaskConfig.MesosFetcherUris,
&aurora.MesosFetcherURI{Value: value, Extract: &extract, Cache: &cache})
}
return j
}
// Adds a Mesos label to the job. Note that Aurora will add the
// AddLabel adds a Mesos label to the job. Note that Aurora will add the
// prefix "org.apache.aurora.metadata." to the beginning of each key.
func (j *AuroraJob) AddLabel(key string, value string) Job {
j.jobConfig.TaskConfig.Metadata[&aurora.Metadata{Key: key, Value: value}] = true
if _, ok := j.metadata[key]; !ok {
j.metadata[key] = &aurora.Metadata{Key: key}
j.jobConfig.TaskConfig.Metadata = append(j.jobConfig.TaskConfig.Metadata, j.metadata[key])
}
j.metadata[key].Value = value
return j
}
// Add a named port to the job configuration These are random ports as it's
// AddNamedPorts adds named ports to the job configuration. These are random ports as it's
// not currently possible to request specific ports using Aurora.
func (j *AuroraJob) AddNamedPorts(names ...string) Job {
j.portCount += len(names)
for _, name := range names {
j.jobConfig.TaskConfig.Resources[&aurora.Resource{NamedPort: &name}] = true
j.jobConfig.TaskConfig.Resources = append(
j.jobConfig.TaskConfig.Resources,
&aurora.Resource{NamedPort: &name})
}
return j
}
// Adds a request for a number of ports to the job configuration. The names chosen for these ports
// AddPorts adds a request for a number of ports to the job configuration. The names chosen for these ports
// will be org.apache.aurora.port.X, where X is the current port count for the job configuration
// starting at 0. These are random ports as it's not currently possible to request
// specific ports using Aurora.
@ -251,55 +293,99 @@ func (j *AuroraJob) AddPorts(num int) Job {
start := j.portCount
j.portCount += num
for i := start; i < j.portCount; i++ {
portName := "org.apache.aurora.port." + strconv.Itoa(i)
j.jobConfig.TaskConfig.Resources[&aurora.Resource{NamedPort: &portName}] = true
portName := portNamePrefix + strconv.Itoa(i)
j.jobConfig.TaskConfig.Resources = append(
j.jobConfig.TaskConfig.Resources,
&aurora.Resource{NamedPort: &portName})
}
return j
}
// AddValueConstraint allows the user to add a value constraint to the job to limit which agents the job's
// tasks can be run on. If the name matches a constraint that was previously set, the previous value will be
// overwritten. In case the previous constraint attached to the name was of type limit, the constraint will be clobbered
// by this new Value constraint.
// From Aurora Docs:
// Add a Value constraint
// name - Mesos slave attribute that the constraint is matched against.
// If negated = true, treat this as a 'not' - to avoid specific values.
// Values - list of values we look for in attribute name
func (j *AuroraJob) AddValueConstraint(name string, negated bool, values ...string) Job {
constraintValues := make(map[string]bool)
for _, value := range values {
constraintValues[value] = true
if _, ok := j.constraints[name]; !ok {
j.constraints[name] = &aurora.Constraint{Name: name}
j.jobConfig.TaskConfig.Constraints = append(j.jobConfig.TaskConfig.Constraints, j.constraints[name])
}
j.jobConfig.TaskConfig.Constraints[&aurora.Constraint{
Name: name,
Constraint: &aurora.TaskConstraint{
Value: &aurora.ValueConstraint{
Negated: negated,
Values: constraintValues,
},
Limit: nil,
j.constraints[name].Constraint = &aurora.TaskConstraint{
Value: &aurora.ValueConstraint{
Negated: negated,
Values: values,
},
}] = true
Limit: nil,
}
return j
}
// AddLimitConstraint allows the user to limit how many tasks from the same Job are run on a single host.
// If the name matches a constraint that was previously set, the previous value will be
// overwritten. In case the previous constraint attached to the name was of type Value, the constraint will be clobbered
// by this new Limit constraint.
// From Aurora Docs:
// A constraint that specifies the maximum number of active tasks on a host with
// a matching attribute that may be scheduled simultaneously.
func (j *AuroraJob) AddLimitConstraint(name string, limit int32) Job {
j.jobConfig.TaskConfig.Constraints[&aurora.Constraint{
Name: name,
Constraint: &aurora.TaskConstraint{
Value: nil,
Limit: &aurora.LimitConstraint{Limit: limit},
},
}] = true
if _, ok := j.constraints[name]; !ok {
j.constraints[name] = &aurora.Constraint{Name: name}
j.jobConfig.TaskConfig.Constraints = append(j.jobConfig.TaskConfig.Constraints, j.constraints[name])
}
j.constraints[name].Constraint = &aurora.TaskConstraint{
Value: nil,
Limit: &aurora.LimitConstraint{Limit: limit},
}
return j
}
// Set a container to run for the job configuration to run.
// AddDedicatedConstraint is a convenience function that allows the user to
// add a dedicated constraint to a Job configuration.
// In case a previous dedicated constraint was set, it will be clobbered by this new value.
func (j *AuroraJob) AddDedicatedConstraint(role, name string) Job {
j.AddValueConstraint("dedicated", false, role+"/"+name)
return j
}
// Container sets a container to run for the job configuration to run.
func (j *AuroraJob) Container(container Container) Job {
j.jobConfig.TaskConfig.Container = container.Build()
return j
}
// PartitionPolicy sets a partition policy for the job configuration to implement.
func (j *AuroraJob) PartitionPolicy(policy *aurora.PartitionPolicy) Job {
j.jobConfig.TaskConfig.PartitionPolicy = policy
return j
}
// Tier sets the Tier for the Job.
func (j *AuroraJob) Tier(tier string) Job {
j.jobConfig.TaskConfig.Tier = &tier
return j
}
// SlaPolicy sets an SlaPolicy for the Job.
func (j *AuroraJob) SlaPolicy(policy *aurora.SlaPolicy) Job {
j.jobConfig.TaskConfig.SlaPolicy = policy
return j
}
// Priority sets the priority of the Job.
func (j *AuroraJob) Priority(priority int32) Job {
j.jobConfig.TaskConfig.Priority = priority
return j
}
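
Taken together, these setters form a fluent builder over the underlying thrift structs. A sketch of assembling a job with the new resource and constraint plumbing; every value below is a placeholder:

job := realis.NewJob().
    Environment("prod").
    Role("vagrant").
    Name("hello_world").
    ExecutorName("thermos").
    ExecutorData("{...}"). // elided Thermos payload
    CPU(0.25).
    RAM(64).
    Disk(128).
    GPU(1). // appends a fourth Resource entry, see GPU above
    IsService(true).
    InstanceCount(2).
    AddPorts(1). // named org.apache.aurora.port.0
    AddLimitConstraint("host", 1).
    AddDedicatedConstraint("vagrant", "morepower"). // dedicated = vagrant/morepower
    Tier("preferred").
    Priority(0)

fmt.Printf("job key: %v\n", job.JobKey())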

logger.go

@ -14,16 +14,74 @@
package realis
type Logger interface {
type logger interface {
Println(v ...interface{})
Printf(format string, v ...interface{})
Print(v ...interface{})
}
// NoopLogger is a logger that can be attached to the client which will not print anything.
type NoopLogger struct{}
// Printf is a NOOP function here.
func (NoopLogger) Printf(format string, a ...interface{}) {}
// Print is a NOOP function here.
func (NoopLogger) Print(a ...interface{}) {}
// Println is a NOOP function here.
func (NoopLogger) Println(a ...interface{}) {}
// LevelLogger is a logger that can be configured to output different levels of information: Debug and Trace.
// Trace should only be enabled when very in-depth information about the sequence of events a function took is needed.
type LevelLogger struct {
logger
debug bool
trace bool
}
// EnableDebug enables debug level logging for the LevelLogger
func (l *LevelLogger) EnableDebug(enable bool) {
l.debug = enable
}
// EnableTrace enables trace level logging for the LevelLogger
func (l *LevelLogger) EnableTrace(enable bool) {
l.trace = enable
}
func (l LevelLogger) debugPrintf(format string, a ...interface{}) {
if l.debug {
l.Printf("[DEBUG] "+format, a...)
}
}
func (l LevelLogger) debugPrint(a ...interface{}) {
if l.debug {
l.Print(append([]interface{}{"[DEBUG] "}, a...)...)
}
}
func (l LevelLogger) debugPrintln(a ...interface{}) {
if l.debug {
l.Println(append([]interface{}{"[DEBUG] "}, a...)...)
}
}
func (l LevelLogger) tracePrintf(format string, a ...interface{}) {
if l.trace {
l.Printf("[TRACE] "+format, a...)
}
}
func (l LevelLogger) tracePrint(a ...interface{}) {
if l.trace {
l.Print(append([]interface{}{"[TRACE] "}, a...)...)
}
}
func (l LevelLogger) tracePrintln(a ...interface{}) {
if l.trace {
l.Println(append([]interface{}{"[TRACE] "}, a...)...)
}
}
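
A sketch, inside package realis, of exercising the new LevelLogger; the stdlib *log.Logger satisfies the unexported logger interface, and the prefix is a placeholder:

l := LevelLogger{logger: log.New(os.Stdout, "realis: ", log.Ltime)}
l.EnableDebug(true)
l.EnableTrace(false)

l.debugPrintf("connecting to %s\n", "aurora.example.com") // printed: debug is on
l.tracePrintln("thrift payload dump")                     // suppressed: trace is off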

monitors.go

@ -12,102 +12,210 @@
* limitations under the License.
*/
// Collection of monitors to create synchronicity
package realis
import (
"time"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/paypal/gorealis/response"
"github.com/pkg/errors"
)
const (
UpdateFailed = "update failed"
RolledBack = "update rolled back"
Timeout = "timeout"
)
// Monitor is a wrapper for the Realis client which allows us to have functions
// with the same name for Monitoring purposes.
// TODO(rdelvalle): Deprecate monitors and instead add prefix Monitor to
// all functions in this file like it is done in V2.
type Monitor struct {
Client Realis
}
// Polls the scheduler every certain amount of time to see if the update has succeeded
func (m *Monitor) JobUpdate(updateKey aurora.JobUpdateKey, interval int, timeout int) (bool, error) {
// JobUpdate polls the scheduler at a set interval to see if the update has entered a terminal state.
func (m *Monitor) JobUpdate(
updateKey aurora.JobUpdateKey,
interval int,
timeout int) (bool, error) {
updateQ := aurora.JobUpdateQuery{
Key: &updateKey,
Limit: 1,
Key: &updateKey,
Limit: 1,
UpdateStatuses: TerminalUpdateStates(),
}
ticker := time.NewTicker(time.Second * time.Duration(interval))
updateSummaries, err := m.JobUpdateQuery(
updateQ,
time.Duration(interval)*time.Second,
time.Duration(timeout)*time.Second)
if err != nil {
return false, err
}
status := updateSummaries[0].State.Status
m.Client.RealisConfig().logger.Printf("job update status: %v\n", status)
// Rolled forward is the only state in which an update has been successfully updated;
// if we encounter an inactive state and it is not rolled forward, the update failed.
switch status {
case aurora.JobUpdateStatus_ROLLED_FORWARD:
return true, nil
case aurora.JobUpdateStatus_ROLLED_BACK,
aurora.JobUpdateStatus_ABORTED,
aurora.JobUpdateStatus_ERROR,
aurora.JobUpdateStatus_FAILED:
return false, errors.Errorf("bad terminal state for update: %v", status)
default:
return false, errors.Errorf("unexpected update state: %v", status)
}
}
// JobUpdateStatus polls the scheduler at a set interval to see if the update has entered a specified state.
func (m *Monitor) JobUpdateStatus(updateKey aurora.JobUpdateKey,
desiredStatuses []aurora.JobUpdateStatus,
interval, timeout time.Duration) (aurora.JobUpdateStatus, error) {
updateQ := aurora.JobUpdateQuery{
Key: &updateKey,
Limit: 1,
UpdateStatuses: desiredStatuses,
}
summary, err := m.JobUpdateQuery(updateQ, interval, timeout)
if err != nil {
return 0, err
}
return summary[0].State.Status, nil
}
// JobUpdateQuery polls the scheduler at a set interval to see if the query call returns any results.
func (m *Monitor) JobUpdateQuery(
updateQuery aurora.JobUpdateQuery,
interval time.Duration,
timeout time.Duration) ([]*aurora.JobUpdateSummary, error) {
ticker := time.NewTicker(interval)
defer ticker.Stop()
timer := time.NewTimer(time.Second * time.Duration(timeout))
timer := time.NewTimer(timeout)
defer timer.Stop()
var cliErr error
var respDetail *aurora.Response
for {
select {
case <-ticker.C:
respDetail, cliErr = m.Client.JobUpdateDetails(updateQ)
respDetail, cliErr = m.Client.GetJobUpdateSummaries(&updateQuery)
if cliErr != nil {
return false, cliErr
return nil, cliErr
}
updateDetail := response.JobUpdateDetails(respDetail)
if len(updateDetail) == 0 {
m.Client.RealisConfig().logger.Println("No update found")
return false, errors.New("No update found for " + updateKey.String())
updateSummaries := respDetail.Result_.GetJobUpdateSummariesResult_.UpdateSummaries
if len(updateSummaries) >= 1 {
return updateSummaries, nil
}
status := updateDetail[0].Update.Summary.State.Status
if _, ok := aurora.ACTIVE_JOB_UPDATE_STATES[status]; !ok {
// Rolled forward is the only state in which an update has been successfully updated
// if we encounter an inactive state and it is not at rolled forward, update failed
switch status {
case aurora.JobUpdateStatus_ROLLED_FORWARD:
m.Client.RealisConfig().logger.Println("Update succeded")
return true, nil
case aurora.JobUpdateStatus_FAILED:
m.Client.RealisConfig().logger.Println("Update failed")
return false, errors.New(UpdateFailed)
case aurora.JobUpdateStatus_ROLLED_BACK:
m.Client.RealisConfig().logger.Println("rolled back")
return false, errors.New(RolledBack)
default:
return false, nil
}
}
case <-timer.C:
return false, errors.New(Timeout)
return nil, newTimedoutError(errors.New("job update monitor timed out"))
}
}
}
// Monitor a Job until all instances enter one of the LIVE_STATES
func (m *Monitor) Instances(key *aurora.JobKey, instances int32, interval, timeout int) (bool, error) {
return m.ScheduleStatus(key, instances, aurora.LIVE_STATES, interval, timeout)
// AutoPausedUpdateMonitor is a special monitor for auto pause enabled batch updates. This monitor ensures that the update
// being monitored is capable of auto pausing and has auto pausing enabled. After verifying this information,
// the monitor watches for the job to enter the ROLL_FORWARD_PAUSED state and calculates the current batch
// the update is in using information from the update configuration.
func (m *Monitor) AutoPausedUpdateMonitor(key aurora.JobUpdateKey, interval, timeout time.Duration) (int, error) {
key.Job = &aurora.JobKey{
Role: key.Job.Role,
Environment: key.Job.Environment,
Name: key.Job.Name,
}
query := aurora.JobUpdateQuery{
UpdateStatuses: aurora.ACTIVE_JOB_UPDATE_STATES,
Limit: 1,
Key: &key,
}
response, err := m.Client.JobUpdateDetails(query)
if err != nil {
return -1, errors.Wrap(err, "unable to get information about update")
}
// TODO (rdelvalle): check for possible nil values when going down the list of structs
updateDetails := response.Result_.GetJobUpdateDetailsResult_.DetailsList
if len(updateDetails) == 0 {
return -1, errors.Errorf("details for update could not be found")
}
updateStrategy := updateDetails[0].Update.Instructions.Settings.UpdateStrategy
var batchSizes []int32
switch {
case updateStrategy.IsSetVarBatchStrategy():
batchSizes = updateStrategy.VarBatchStrategy.GroupSizes
if !updateStrategy.VarBatchStrategy.AutopauseAfterBatch {
return -1, errors.Errorf("update does not have auto pause enabled")
}
case updateStrategy.IsSetBatchStrategy():
batchSizes = []int32{updateStrategy.BatchStrategy.GroupSize}
if !updateStrategy.BatchStrategy.AutopauseAfterBatch {
return -1, errors.Errorf("update does not have auto pause enabled")
}
default:
return -1, errors.Errorf("update is not using a batch update strategy")
}
query.UpdateStatuses = append(TerminalUpdateStates(), aurora.JobUpdateStatus_ROLL_FORWARD_PAUSED)
summary, err := m.JobUpdateQuery(query, interval, timeout)
if err != nil {
return -1, err
}
if !(summary[0].State.Status == aurora.JobUpdateStatus_ROLL_FORWARD_PAUSED ||
summary[0].State.Status == aurora.JobUpdateStatus_ROLLED_FORWARD) {
return -1, errors.Errorf("update is in a terminal state %v", summary[0].State.Status)
}
updatingInstances := make(map[int32]struct{})
for _, e := range updateDetails[0].InstanceEvents {
// We only care about INSTANCE_UPDATING actions because we only care that they've been attempted
if e != nil && e.GetAction() == aurora.JobUpdateAction_INSTANCE_UPDATING {
updatingInstances[e.GetInstanceId()] = struct{}{}
}
}
return calculateCurrentBatch(int32(len(updatingInstances)), batchSizes), nil
}
// Monitor a Job until all instances enter a desired status.
// Instances will monitor a Job until all instances enter one of the LIVE_STATES
func (m *Monitor) Instances(key *aurora.JobKey, instances int32, interval, timeout int) (bool, error) {
return m.ScheduleStatus(key, instances, LiveStates, interval, timeout)
}
// ScheduleStatus will monitor a Job until all instances enter a desired status.
// Defaults sets of desired statuses provided by the thrift API include:
// ACTIVE_STATES, SLAVE_ASSIGNED_STATES, LIVE_STATES, and TERMINAL_STATES
func (m *Monitor) ScheduleStatus(key *aurora.JobKey, instanceCount int32, desiredStatuses map[aurora.ScheduleStatus]bool, interval, timeout int) (bool, error) {
func (m *Monitor) ScheduleStatus(
key *aurora.JobKey,
instanceCount int32,
desiredStatuses map[aurora.ScheduleStatus]bool,
interval int,
timeout int) (bool, error) {
ticker := time.NewTicker(time.Second * time.Duration(interval))
defer ticker.Stop()
timer := time.NewTimer(time.Second * time.Duration(timeout))
defer timer.Stop()
wantedStatuses := make([]aurora.ScheduleStatus, 0)
for status := range desiredStatuses {
wantedStatuses = append(wantedStatuses, status)
}
for {
select {
case <-ticker.C:
// Query Aurora for the state of the job key every interval
instCount, cliErr := m.Client.GetInstanceIds(key, desiredStatuses)
instCount, cliErr := m.Client.GetInstanceIds(key, wantedStatuses)
if cliErr != nil {
return false, errors.Wrap(cliErr, "Unable to communicate with Aurora")
}
@ -117,14 +225,18 @@ func (m *Monitor) ScheduleStatus(key *aurora.JobKey, instanceCount int32, desire
case <-timer.C:
// If the timer runs out, return a timeout error to user
return false, errors.New(Timeout)
return false, newTimedoutError(errors.New("schedule status monitor timed out"))
}
}
}
// Monitor host status until all hosts match the status provided. Returns a map where the value is true if the host
// HostMaintenance will monitor host status until all hosts match the status provided.
// Returns a map where the value is true if the host
// is in one of the desired mode(s) or false if it is not as of the time when the monitor exited.
func (m *Monitor) HostMaintenance(hosts []string, modes []aurora.MaintenanceMode, interval, timeout int) (map[string]bool, error) {
func (m *Monitor) HostMaintenance(
hosts []string,
modes []aurora.MaintenanceMode,
interval, timeout int) (map[string]bool, error) {
// Transform modes to monitor for into a set for easy lookup
desiredMode := make(map[aurora.MaintenanceMode]struct{})
@ -133,7 +245,8 @@ func (m *Monitor) HostMaintenance(hosts []string, modes []aurora.MaintenanceMode
}
// Turn slice into a host set to eliminate duplicates.
// We also can't use a simple count because multiple modes means we can have multiple matches for a single host.
// We also can't use a simple count because multiple modes means
// we can have multiple matches for a single host.
// I.e. host A transitions from ACTIVE to DRAINING to DRAINED while monitored
remainingHosts := make(map[string]struct{})
for _, host := range hosts {
@ -160,7 +273,7 @@ func (m *Monitor) HostMaintenance(hosts []string, modes []aurora.MaintenanceMode
return hostResult, errors.Wrap(err, "client error in monitor")
}
for status := range result.GetStatuses() {
for _, status := range result.GetStatuses() {
if _, ok := desiredMode[status.GetMode()]; ok {
hostResult[status.GetHost()] = true
@ -177,7 +290,7 @@ func (m *Monitor) HostMaintenance(hosts []string, modes []aurora.MaintenanceMode
hostResult[host] = false
}
return hostResult, errors.New(Timeout)
return hostResult, newTimedoutError(errors.New("host maintenance monitor timed out"))
}
}
}
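
A sketch of driving the reworked monitors from calling code; client, updateKey, and jobKey are assumed to already exist, and the intervals and timeouts (in seconds) are placeholders:

monitor := &realis.Monitor{Client: client}

// Block until the update reaches a terminal state, polling every 5 seconds
// for at most 2 minutes. Only ROLLED_FORWARD yields (true, nil).
ok, err := monitor.JobUpdate(*updateKey, 5, 120)
if err != nil || !ok {
    log.Printf("update did not roll forward: %v", err)
}

// Wait for both instances of a job to reach one of the LIVE_STATES.
live, err := monitor.Instances(jobKey, 2, 5, 60)
log.Printf("all instances live: %v (err: %v)", live, err)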

realis.go (1301 lines changed; diff suppressed because it is too large)

realis_admin.go (new file, +309 lines)

@ -0,0 +1,309 @@
package realis
import (
"context"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
// TODO(rdelvalle): Consider moving these functions to another interface. It would be a backwards incompatible change,
// but would add safety.
// Set a list of nodes to DRAINING. This means nothing will be able to be scheduled on them and any existing
// tasks will be killed and re-scheduled elsewhere in the cluster. Tasks from DRAINING nodes are not guaranteed
// to return to running unless there is enough capacity in the cluster to run them.
func (r *realisClient) DrainHosts(hosts ...string) (*aurora.Response, *aurora.DrainHostsResult_, error) {
var result *aurora.DrainHostsResult_
if len(hosts) == 0 {
return nil, nil, errors.New("no hosts provided to drain")
}
drainList := aurora.NewHosts()
drainList.HostNames = hosts
r.logger.debugPrintf("DrainHosts Thrift Payload: %v\n", drainList)
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.DrainHosts(context.TODO(), drainList)
},
nil,
)
if retryErr != nil {
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetDrainHostsResult_()
}
return resp, result, nil
}
// Start SLA Aware Drain.
// defaultSlaPolicy is the fallback SlaPolicy to use if a task does not have an SlaPolicy.
// After timeoutSecs, tasks will be forcefully drained without checking SLA.
func (r *realisClient) SLADrainHosts(
policy *aurora.SlaPolicy,
timeout int64,
hosts ...string) (*aurora.DrainHostsResult_, error) {
var result *aurora.DrainHostsResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to drain")
}
if policy == nil || policy.CountSetFieldsSlaPolicy() == 0 {
policy = &defaultSlaPolicy
r.logger.Printf("Warning: start draining with default sla policy %v", policy)
}
if timeout < 0 {
r.logger.Printf("Warning: timeout %d secs is invalid, draining with default timeout %d secs",
timeout,
defaultSlaDrainTimeoutSecs)
timeout = defaultSlaDrainTimeoutSecs
}
drainList := aurora.NewHosts()
drainList.HostNames = hosts
r.logger.debugPrintf("SLADrainHosts Thrift Payload: %v\n", drainList)
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.SlaDrainHosts(context.TODO(), drainList, policy, timeout)
},
nil,
)
if retryErr != nil {
return result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetDrainHostsResult_()
}
return result, nil
}
func (r *realisClient) StartMaintenance(hosts ...string) (*aurora.Response, *aurora.StartMaintenanceResult_, error) {
var result *aurora.StartMaintenanceResult_
if len(hosts) == 0 {
return nil, nil, errors.New("no hosts provided to start maintenance on")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
r.logger.debugPrintf("StartMaintenance Thrift Payload: %v\n", hostList)
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.StartMaintenance(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetStartMaintenanceResult_()
}
return resp, result, nil
}
func (r *realisClient) EndMaintenance(hosts ...string) (*aurora.Response, *aurora.EndMaintenanceResult_, error) {
var result *aurora.EndMaintenanceResult_
if len(hosts) == 0 {
return nil, nil, errors.New("no hosts provided to end maintenance on")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
r.logger.debugPrintf("EndMaintenance Thrift Payload: %v\n", hostList)
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.EndMaintenance(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetEndMaintenanceResult_()
}
return resp, result, nil
}
func (r *realisClient) MaintenanceStatus(hosts ...string) (*aurora.Response, *aurora.MaintenanceStatusResult_, error) {
var result *aurora.MaintenanceStatusResult_
if len(hosts) == 0 {
return nil, nil, errors.New("no hosts provided to get maintenance status from")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
r.logger.debugPrintf("MaintenanceStatus Thrift Payload: %v\n", hostList)
// Make thrift call. If we encounter an error sending the call, attempt to reconnect
// and continue trying to resend command until we run out of retries.
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.MaintenanceStatus(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetMaintenanceStatusResult_()
}
return resp, result, nil
}
// SetQuota sets a quota aggregate for the given role
// TODO(zircote) Currently investigating an error that is returned
// from thrift calls that include resources for `NamedPort` and `NumGpu`
func (r *realisClient) SetQuota(role string, cpu *float64, ramMb *int64, diskMb *int64) (*aurora.Response, error) {
quota := &aurora.ResourceAggregate{
Resources: []*aurora.Resource{{NumCpus: cpu}, {RamMb: ramMb}, {DiskMb: diskMb}},
}
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.SetQuota(context.TODO(), role, quota)
},
nil,
)
if retryErr != nil {
return resp, errors.Wrap(retryErr, "Unable to set role quota")
}
return resp, retryErr
}
// GetQuota returns the resource aggregate for the given role
func (r *realisClient) GetQuota(role string) (*aurora.Response, error) {
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.GetQuota(context.TODO(), role)
},
nil,
)
if retryErr != nil {
return resp, errors.Wrap(retryErr, "Unable to get role quota")
}
return resp, retryErr
}
// Force Aurora Scheduler to perform a snapshot and write to Mesos log
func (r *realisClient) Snapshot() error {
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.Snapshot(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
// Force Aurora Scheduler to write backup file to a file in the backup directory
func (r *realisClient) PerformBackup() error {
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.PerformBackup(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
func (r *realisClient) ForceImplicitTaskReconciliation() error {
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.TriggerImplicitTaskReconciliation(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
func (r *realisClient) ForceExplicitTaskReconciliation(batchSize *int32) error {
if batchSize != nil && *batchSize < 1 {
return errors.New("invalid batch size")
}
settings := aurora.NewExplicitReconciliationSettings()
settings.BatchSize = batchSize
_, retryErr := r.thriftCallWithRetries(false,
func() (*aurora.Response, error) {
return r.adminClient.TriggerExplicitTaskReconciliation(context.TODO(), settings)
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
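
A sketch of an SLA-aware drain paired with the HostMaintenance monitor above; the PercentageSlaPolicy fields follow the Aurora thrift definitions, and the host name, thresholds, and timeouts are placeholders:

// Keep 95% of instances RUNNING over a 30 minute window while draining;
// force the drain after one hour regardless.
policy := &aurora.SlaPolicy{
    PercentageSlaPolicy: &aurora.PercentageSlaPolicy{Percentage: 95, DurationSecs: 1800},
}
result, err := client.SLADrainHosts(policy, 3600, "agent-1.example.com")
if err != nil {
    log.Fatalf("drain failed: %v", err)
}
log.Printf("drain result: %v", result)

// Poll every 5 seconds, for up to 10 minutes, until the host reports DRAINED.
hostStatus, err := monitor.HostMaintenance(
    []string{"agent-1.example.com"},
    []aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINED},
    5, 600)
log.Printf("host drained: %v (err: %v)", hostStatus["agent-1.example.com"], err)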

File diff suppressed because it is too large

response/response.go

@ -36,6 +36,10 @@ func ScheduleStatusResult(resp *aurora.Response) *aurora.ScheduleStatusResult_ {
}
func JobUpdateSummaries(resp *aurora.Response) []*aurora.JobUpdateSummary {
if resp.GetResult_() == nil || resp.GetResult_().GetGetJobUpdateSummariesResult_() == nil {
return nil
}
return resp.GetResult_().GetGetJobUpdateSummariesResult_().GetUpdateSummaries()
}
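
The added nil checks matter because a response without a populated Result_ previously caused a nil pointer dereference in this accessor. A small sketch of the guarded behavior, assuming the usual imports:

resp := aurora.NewResponse() // Result_ is nil
summaries := response.JobUpdateSummaries(resp)
fmt.Println(summaries == nil) // prints true instead of panicking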

retry.go (272 lines changed)

@ -1,28 +1,40 @@
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"errors"
"io"
"math/rand"
"net/url"
"time"
"math/rand"
"github.com/apache/thrift/lib/go/thrift"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/paypal/gorealis/response"
"github.com/pkg/errors"
)
// Backoff determines how the retry mechanism should react after each failure and how many failures it should
// tolerate.
type Backoff struct {
Duration time.Duration // the base duration
Factor float64 // Duration is multiplied by a factor each iteration
Jitter float64 // The amount of jitter applied each iteration
Steps int // Exit with error after this many steps
}
// Jitter returns a time.Duration between duration and duration + maxFactor *
// duration.
//
@ -40,43 +52,243 @@ func Jitter(duration time.Duration, maxFactor float64) time.Duration {
// if the loop should be aborted.
type ConditionFunc func() (done bool, err error)
// Modified version of the Kubernetes exponential-backoff code.
// ExponentialBackoff repeats a condition check with exponential backoff.
//
// It checks the condition up to Steps times, increasing the wait by multiplying
// the previous duration by Factor.
// ExponentialBackoff is a modified version of the Kubernetes exponential-backoff code.
// It repeats a condition check with exponential backoff and checks the condition up to
// Steps times, increasing the wait by multiplying the previous duration by Factor.
//
// If Jitter is greater than zero, a random amount of each duration is added
// (between duration and duration*(1+jitter)).
//
// If the condition never returns true, ErrWaitTimeout is returned. All other
// errors terminate immediately.
func ExponentialBackoff(backoff Backoff, condition ConditionFunc) error {
// If the condition never returns true, a retry error is returned once Steps is
// exhausted. Temporary errors are logged and retried; permanent errors stop the retries immediately.
func ExponentialBackoff(backoff Backoff, logger logger, condition ConditionFunc) error {
var err error
var ok bool
var curStep int
duration := backoff.Duration
for i := 0; i < backoff.Steps; i++ {
if i != 0 {
for curStep = 0; curStep < backoff.Steps; curStep++ {
// Only sleep if it's not the first iteration.
if curStep != 0 {
adjusted := duration
if backoff.Jitter > 0.0 {
adjusted = Jitter(duration, backoff.Jitter)
}
logger.Printf(
"A retryable error occurred during function call, backing off for %v before retrying\n", adjusted)
time.Sleep(adjusted)
duration = time.Duration(float64(duration) * backoff.Factor)
}
ok, err := condition()
// Execute function passed in.
ok, err = condition()
// If the function executed says it succeeded, stop retrying
if ok {
return nil
}
// Stop retrying if the error is NOT temporary.
if err != nil {
// If the error is temporary, continue retrying.
if !IsTemporary(err) {
return err
}
// Print out the temporary error we experienced.
logger.Println(err)
}
}
if curStep > 1 {
logger.Printf("retried this function call %d time(s)", curStep)
}
// Provide more information to the user wherever possible
if err != nil {
return newRetryError(errors.Wrap(err, "ran out of retries"), curStep)
}
return newRetryError(errors.New("ran out of retries"), curStep)
}
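
A sketch of ExponentialBackoff with its new logger parameter; a stdlib *log.Logger satisfies the unexported logger interface, and the endpoint probed by the condition is a placeholder:

backoff := realis.Backoff{
    Duration: 1 * time.Second, // base wait between attempts
    Factor:   2.0,             // double the wait each step
    Jitter:   0.1,             // up to 10% random jitter added
    Steps:    5,               // give up after 5 attempts
}

err := realis.ExponentialBackoff(backoff, log.New(os.Stdout, "retry: ", log.LstdFlags),
    func() (bool, error) {
        // Hypothetical readiness probe, retried until it succeeds or Steps runs out.
        conn, dialErr := net.DialTimeout("tcp", "127.0.0.1:8081", time.Second)
        if dialErr != nil {
            return false, nil // not done yet and no permanent error: keep retrying
        }
        conn.Close()
        return true, nil
    })
if err != nil {
    log.Fatal(err)
}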
type auroraThriftCall func() (resp *aurora.Response, err error)
// verifyOnTimeout defines the type of function that will be used to verify whether a Thrift call to the Scheduler
// made it to the scheduler or not. In general, these types of functions will have to interact with the scheduler
// through the very same Thrift API which previously encountered a time out from the client.
// This means that the functions themselves should be kept to a minimum number of Thrift calls.
// It should also be noted that this is a best effort mechanism and
// is likely to fail for the same reasons that the original call failed.
type verifyOnTimeout func() (*aurora.Response, bool)
// Duplicates the functionality of ExponentialBackoff but is specifically targeted towards ThriftCalls.
func (r *realisClient) thriftCallWithRetries(
returnOnTimeout bool,
thriftCall auroraThriftCall,
verifyOnTimeout verifyOnTimeout) (*aurora.Response, error) {
var resp *aurora.Response
var clientErr error
var curStep int
timeouts := 0
backoff := r.config.backoff
duration := backoff.Duration
for curStep = 0; curStep < backoff.Steps; curStep++ {
// If this isn't our first try, backoff before the next try.
if curStep != 0 {
adjusted := duration
if backoff.Jitter > 0.0 {
adjusted = Jitter(duration, backoff.Jitter)
}
r.logger.Printf(
"A retryable error occurred during thrift call, backing off for %v before retry %v",
adjusted,
curStep)
time.Sleep(adjusted)
duration = time.Duration(float64(duration) * backoff.Factor)
}
// Only allow one goroutine at a time to use or modify the thrift client connection.
// Placing this in an anonymous function in order to create a new, short-lived stack allowing unlock
// to be run in case of a panic inside of thriftCall.
func() {
r.lock.Lock()
defer r.lock.Unlock()
resp, clientErr = thriftCall()
r.logger.tracePrintf("Aurora Thrift Call ended resp: %v clientErr: %v", resp, clientErr)
}()
// Check if our thrift call is returning an error.
if clientErr != nil {
// Print out the error to the user
r.logger.Printf("Client Error: %v", clientErr)
temporary, timedout := isConnectionError(clientErr)
if !temporary && r.RealisConfig().failOnPermanentErrors {
return nil, errors.Wrap(clientErr, "permanent connection error")
}
// There exists a corner case where thrift payload was received by Aurora but
// connection timed out before Aurora was able to reply.
// Users can take special action on a timeout by using IsTimedout and reacting accordingly
// if they have configured the client to return on a timeout.
if timedout && returnOnTimeout {
return resp, newTimedoutError(errors.New("client connection closed before server answer"))
}
// In the future, reestablish connection should be able to check if it is actually possible
// to make a thrift call to Aurora. For now, a reconnect should always lead to a retry.
// Ignoring error due to the fact that an error should be retried regardless
reestablishErr := r.ReestablishConn()
if reestablishErr != nil {
r.logger.debugPrintf("error re-establishing connection ", reestablishErr)
}
// If users did not opt for a return on timeout in order to react to a timedout error,
// attempt to verify that the call made it to the scheduler after the connection was re-established.
if timedout {
timeouts++
r.logger.debugPrintf(
"Client closed connection %d times before server responded, "+
"consider increasing connection timeout",
timeouts)
// Allow caller to provide a function which checks if the original call was successful before
// it timed out.
if verifyOnTimeout != nil {
if verifyResp, ok := verifyOnTimeout(); ok {
r.logger.Print("verified that the call went through successfully after a client timeout")
// Response here might be different than the original as it is no longer constructed
// by the scheduler but mimicked.
// This is OK since the scheduler is very unlikely to change responses at this point in its
// development cycle but we must be careful to not return an incorrectly constructed response.
return verifyResp, nil
}
}
}
// Retry the thrift payload
continue
}
// If there was no client error, but the response is nil, something went wrong.
// Ideally, we'll never encounter this but we're placing a safeguard here.
if resp == nil {
return nil, errors.New("response from aurora is nil")
}
// Check Response Code from thrift and make a decision to continue retrying or not.
switch responseCode := resp.GetResponseCode(); responseCode {
// If the thrift call succeeded, stop retrying
case aurora.ResponseCode_OK:
return resp, nil
// If the response code is transient, continue retrying
case aurora.ResponseCode_ERROR_TRANSIENT:
r.logger.Println("Aurora replied with Transient error code, retrying")
continue
// Failure scenarios, these indicate a bad payload or a bad config. Stop retrying.
case aurora.ResponseCode_INVALID_REQUEST,
aurora.ResponseCode_ERROR,
aurora.ResponseCode_AUTH_FAILED,
aurora.ResponseCode_JOB_UPDATING_ERROR:
r.logger.Printf("Terminal Response Code %v from Aurora, won't retry\n", resp.GetResponseCode().String())
return resp, errors.New(response.CombineMessage(resp))
// The only case that should fall down to here is a WARNING response code.
// It is currently not used as a response in the scheduler so it is unknown how to handle it.
default:
r.logger.debugPrintf("unhandled response code %v received from Aurora\n", responseCode)
return nil, errors.Errorf("unhandled response code from Aurora %v", responseCode.String())
}
}
return NewTimeoutError(errors.New("Timed out while retrying"))
if curStep > 1 {
r.config.logger.Printf("this thrift call was retried %d time(s)", curStep)
}
// Provide more information to the user wherever possible.
if clientErr != nil {
return nil, newRetryError(errors.Wrap(clientErr, "ran out of retries, including latest error"), curStep)
}
return nil, newRetryError(errors.New("ran out of retries"), curStep)
}
// isConnectionError processes the error received by the client.
// The return values indicate whether this was determined to be a temporary error
// and whether it was determined to be a timeout error.
func isConnectionError(err error) (bool, bool) {
// Determine if error is a temporary URL error by going up the stack
transportException, ok := err.(thrift.TTransportException)
if !ok {
return false, false
}
urlError, ok := transportException.Err().(*url.Error)
if !ok {
return false, false
}
// An EOF error occurs when the server closes the read buffer of the client. This is common
// when the server is overloaded and we consider it temporary.
// All other errors that are not temporary, as reported by the member function Temporary(),
// are considered permanent.
if urlError.Err != io.EOF && !urlError.Temporary() {
return false, false
}
return true, urlError.Timeout()
}
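
A sketch, inside package realis, of how isConnectionError classifies a server-side EOF surfaced through the thrift transport; the URL is a placeholder:

wrapped := thrift.NewTTransportExceptionFromError(
    &url.Error{Op: "Post", URL: "http://aurora.example.com/api", Err: io.EOF})

temporary, timedOut := isConnectionError(wrapped)
fmt.Println(temporary, timedOut) // true false: retryable, but not a timeout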

runTests.sh (new executable file, +13 lines)

@ -0,0 +1,13 @@
#!/bin/bash
docker-compose up -d
# If running docker-compose up gives any error, don't do anything.
if [ $? -ne 0 ]; then
exit
fi
# Since we run our docker compose setup in bridge mode to be able to run on MacOS, we have to launch a Docker container within the bridge network in order to avoid any routing issues.
docker run --rm -t -v $(pwd):/go/src/github.com/paypal/gorealis --network gorealis_aurora_cluster golang:1.10-stretch go test -v github.com/paypal/gorealis $@
docker-compose down

runTestsMac.sh (new file, +4 lines)

@ -0,0 +1,4 @@
#!/bin/bash
# Since we run our docker compose setup in bridge mode to be able to run on MacOS, we have to launch a Docker container within the bridge network in order to avoid any routing issues.
docker run --rm -t -w /gorealis -v $GOPATH/pkg:/go/pkg -v $(pwd):/gorealis --network gorealis_aurora_cluster golang:1.16-buster go test -v github.com/paypal/gorealis $@

updatejob.go

@ -18,43 +18,51 @@ import (
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Structure to collect all information required to create job update
// UpdateJob is a structure to collect all information required to create a job update.
type UpdateJob struct {
Job // SetInstanceCount for job is hidden, access via full qualifier
req *aurora.JobUpdateRequest
}
// Create a default UpdateJob object.
// NewDefaultUpdateJob creates an UpdateJob object with opinionated default settings.
func NewDefaultUpdateJob(config *aurora.TaskConfig) *UpdateJob {
req := aurora.NewJobUpdateRequest()
req.TaskConfig = config
s := NewUpdateSettings().Settings()
req.Settings = &s
req.Settings = NewUpdateSettings()
job, ok := NewJob().(*AuroraJob)
if !ok {
// This should never happen but it is here as a safeguard
return nil
}
job := NewJob().(*AuroraJob)
job.jobConfig.TaskConfig = config
// Rebuild resource map from TaskConfig
for ptr := range config.Resources {
for _, ptr := range config.Resources {
if ptr.NumCpus != nil {
job.resources["cpu"].NumCpus = ptr.NumCpus
job.resources[CPU].NumCpus = ptr.NumCpus
continue // Guard against Union violations that Go won't enforce
}
if ptr.RamMb != nil {
job.resources["ram"].RamMb = ptr.RamMb
job.resources[RAM].RamMb = ptr.RamMb
continue
}
if ptr.DiskMb != nil {
job.resources["disk"].DiskMb = ptr.DiskMb
job.resources[DISK].DiskMb = ptr.DiskMb
continue
}
if ptr.NumGpus != nil {
job.resources[GPU] = &aurora.Resource{NumGpus: ptr.NumGpus}
continue
}
}
// Mirrors defaults set by Pystachio
req.Settings.UpdateOnlyTheseInstances = make(map[*aurora.Range]bool)
req.Settings.UpdateGroupSize = 1
req.Settings.WaitForBatchCompletion = false
req.Settings.MinWaitInInstanceRunningMs = 45000
@ -66,137 +74,115 @@ func NewDefaultUpdateJob(config *aurora.TaskConfig) *UpdateJob {
return &UpdateJob{Job: job, req: req}
}
// NewUpdateJob creates an UpdateJob object without default settings.
func NewUpdateJob(config *aurora.TaskConfig, settings *aurora.JobUpdateSettings) *UpdateJob {
req := aurora.NewJobUpdateRequest()
req.TaskConfig = config
req.Settings = settings
job := NewJob().(*AuroraJob)
job, ok := NewJob().(*AuroraJob)
if !ok {
// This should never happen but it is here as a safeguard
return nil
}
job.jobConfig.TaskConfig = config
// Rebuild resource map from TaskConfig
for ptr := range config.Resources {
for _, ptr := range config.Resources {
if ptr.NumCpus != nil {
job.resources["cpu"].NumCpus = ptr.NumCpus
job.resources[CPU].NumCpus = ptr.NumCpus
continue // Guard against Union violations that Go won't enforce
}
if ptr.RamMb != nil {
job.resources["ram"].RamMb = ptr.RamMb
job.resources[RAM].RamMb = ptr.RamMb
continue
}
if ptr.DiskMb != nil {
job.resources["disk"].DiskMb = ptr.DiskMb
job.resources[DISK].DiskMb = ptr.DiskMb
continue
}
if ptr.NumGpus != nil {
job.resources[GPU] = &aurora.Resource{}
job.resources[GPU].NumGpus = ptr.NumGpus
continue // Guard against Union violations that Go won't enforce
}
}
//TODO(rdelvalle): Deep copy job struct to avoid unexpected behavior
return &UpdateJob{Job: job, req: req}
}
// Set instance count the job will have after the update.
// InstanceCount sets the instance count the job will have after the update.
func (u *UpdateJob) InstanceCount(inst int32) *UpdateJob {
u.req.InstanceCount = inst
return u
}
// Max number of instances being updated at any given moment.
// BatchSize sets the max number of instances being updated at any given moment.
func (u *UpdateJob) BatchSize(size int32) *UpdateJob {
u.req.Settings.UpdateGroupSize = size
return u
}
// Minimum number of seconds a shard must remain in RUNNING state before considered a success.
// WatchTime sets the minimum amount of time, in milliseconds, a shard must remain in RUNNING state before it is considered a success.
func (u *UpdateJob) WatchTime(ms int32) *UpdateJob {
u.req.Settings.MinWaitInInstanceRunningMs = ms
return u
}
// Wait for all instances in a group to be done before moving on.
// WaitForBatchCompletion configures the job update to wait for all instances in a group to be done before moving on.
func (u *UpdateJob) WaitForBatchCompletion(batchWait bool) *UpdateJob {
u.req.Settings.WaitForBatchCompletion = batchWait
return u
}
// Max number of instance failures to tolerate before marking instance as FAILED.
// MaxPerInstanceFailures sets the max number of instance failures to tolerate before marking instance as FAILED.
func (u *UpdateJob) MaxPerInstanceFailures(inst int32) *UpdateJob {
u.req.Settings.MaxPerInstanceFailures = inst
return u
}
// Max number of FAILED instances to tolerate before terminating the update.
// MaxFailedInstances sets the max number of FAILED instances to tolerate before terminating the update.
func (u *UpdateJob) MaxFailedInstances(inst int32) *UpdateJob {
u.req.Settings.MaxFailedInstances = inst
return u
}
// When False, prevents auto rollback of a failed update.
// RollbackOnFail configures the job update to automatically roll back if it fails.
func (u *UpdateJob) RollbackOnFail(rollback bool) *UpdateJob {
u.req.Settings.RollbackOnFailure = rollback
return u
}
// TODO(rdelvalle): Integrate this struct with the JobUpdate struct so that we don't repeat code
type UpdateSettings struct {
settings aurora.JobUpdateSettings
// NewUpdateSettings returns an opinionated set of job update settings.
func (u *UpdateJob) BatchUpdateStrategy(strategy aurora.BatchJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{BatchStrategy: &strategy}
return u
}
func NewUpdateSettings() *UpdateSettings {
func (u *UpdateJob) QueueUpdateStrategy(strategy aurora.QueueJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{QueueStrategy: &strategy}
return u
}
us := new(UpdateSettings)
func (u *UpdateJob) VariableBatchStrategy(strategy aurora.VariableBatchJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{VarBatchStrategy: &strategy}
return u
}
func NewUpdateSettings() *aurora.JobUpdateSettings {
us := new(aurora.JobUpdateSettings)
// Mirrors defaults set by Pystachio
us.settings.UpdateOnlyTheseInstances = make(map[*aurora.Range]bool)
us.settings.UpdateGroupSize = 1
us.settings.WaitForBatchCompletion = false
us.settings.MinWaitInInstanceRunningMs = 45000
us.settings.MaxPerInstanceFailures = 0
us.settings.MaxFailedInstances = 0
us.settings.RollbackOnFailure = true
us.UpdateGroupSize = 1
us.WaitForBatchCompletion = false
us.MinWaitInInstanceRunningMs = 45000
us.MaxPerInstanceFailures = 0
us.MaxFailedInstances = 0
us.RollbackOnFailure = true
return us
}
// Max number of instances being updated at any given moment.
func (u *UpdateSettings) BatchSize(size int32) *UpdateSettings {
u.settings.UpdateGroupSize = size
return u
}
// Minimum number of seconds a shard must remain in RUNNING state before considered a success.
func (u *UpdateSettings) WatchTime(ms int32) *UpdateSettings {
u.settings.MinWaitInInstanceRunningMs = ms
return u
}
// Wait for all instances in a group to be done before moving on.
func (u *UpdateSettings) WaitForBatchCompletion(batchWait bool) *UpdateSettings {
u.settings.WaitForBatchCompletion = batchWait
return u
}
// Max number of instance failures to tolerate before marking instance as FAILED.
func (u *UpdateSettings) MaxPerInstanceFailures(inst int32) *UpdateSettings {
u.settings.MaxPerInstanceFailures = inst
return u
}
// Max number of FAILED instances to tolerate before terminating the update.
func (u *UpdateSettings) MaxFailedInstances(inst int32) *UpdateSettings {
u.settings.MaxFailedInstances = inst
return u
}
// When False, prevents auto rollback of a failed update.
func (u *UpdateSettings) RollbackOnFail(rollback bool) *UpdateSettings {
u.settings.RollbackOnFailure = rollback
return u
}
// Return internal Thrift API structure
func (u UpdateSettings) Settings() aurora.JobUpdateSettings {
return u.settings
}
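To tie the fluent setters above together, here is a hedged usage sketch: build a default update and switch it to a variable-batch rollout. The task config is assumed to exist already, the batch sizes are illustrative, and the GroupSizes field name is my reading of the generated Aurora thrift bindings.

package main

import (
	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// buildUpdate assembles a job update that rolls out one instance first,
// then two, then three per batch.
func buildUpdate(taskConfig *aurora.TaskConfig) *realis.UpdateJob {
	return realis.NewDefaultUpdateJob(taskConfig).
		InstanceCount(6).
		WatchTime(60000). // milliseconds in RUNNING before an instance counts as healthy
		MaxFailedInstances(1).
		VariableBatchStrategy(aurora.VariableBatchJobUpdateStrategy{GroupSizes: []int32{1, 2, 3}})
}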

167
util.go Normal file
View file

@ -0,0 +1,167 @@
package realis
import (
"crypto/x509"
"io/ioutil"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
const apiPath = "/api"
// ActiveStates - States a task may be in when active.
var ActiveStates = make(map[aurora.ScheduleStatus]bool)
// SlaveAssignedStates - States a task may be in when it has already been assigned to a Mesos agent.
var SlaveAssignedStates = make(map[aurora.ScheduleStatus]bool)
// LiveStates - States a task may be in when it is live (e.g. able to take traffic)
var LiveStates = make(map[aurora.ScheduleStatus]bool)
// TerminalStates - Set of states a task may not transition away from.
var TerminalStates = make(map[aurora.ScheduleStatus]bool)
// ActiveJobUpdateStates - States a Job Update may be in where it is considered active.
var ActiveJobUpdateStates = make(map[aurora.JobUpdateStatus]bool)
// TerminalUpdateStates returns a slice containing all the terminal states an update may be in.
// This is a function in order to avoid having a slice that can be accidentally mutated.
func TerminalUpdateStates() []aurora.JobUpdateStatus {
return []aurora.JobUpdateStatus{
aurora.JobUpdateStatus_ROLLED_FORWARD,
aurora.JobUpdateStatus_ROLLED_BACK,
aurora.JobUpdateStatus_ABORTED,
aurora.JobUpdateStatus_ERROR,
aurora.JobUpdateStatus_FAILED,
}
}
// AwaitingPulseJobUpdateStates - States a job update may be in where it is waiting for a pulse.
var AwaitingPulseJobUpdateStates = make(map[aurora.JobUpdateStatus]bool)
func init() {
for _, status := range aurora.ACTIVE_STATES {
ActiveStates[status] = true
}
for _, status := range aurora.SLAVE_ASSIGNED_STATES {
SlaveAssignedStates[status] = true
}
for _, status := range aurora.LIVE_STATES {
LiveStates[status] = true
}
for _, status := range aurora.TERMINAL_STATES {
TerminalStates[status] = true
}
for _, status := range aurora.ACTIVE_JOB_UPDATE_STATES {
ActiveJobUpdateStates[status] = true
}
for _, status := range aurora.AWAITNG_PULSE_JOB_UPDATE_STATES {
AwaitingPulseJobUpdateStates[status] = true
}
}
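As a quick aside, the map-backed sets built in init() make membership checks O(1) when sifting task query results. A hypothetical helper (not part of util.go) showing the intended usage:

package realis

import "github.com/paypal/gorealis/gen-go/apache/aurora"

// filterLive is a hypothetical helper that keeps only tasks able to take traffic.
func filterLive(tasks []*aurora.ScheduledTask) []*aurora.ScheduledTask {
	live := make([]*aurora.ScheduledTask, 0, len(tasks))
	for _, t := range tasks {
		if LiveStates[t.Status] {
			live = append(live, t)
		}
	}
	return live
}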
// createCertPool will attempt to load certificates into a certificate pool from a given directory.
// Only files with an extension contained in the extension map are considered.
// This function ignores any files that cannot be read successfully or cannot be added to the certPool
// successfully.
func createCertPool(path string, extensions map[string]struct{}) (*x509.CertPool, error) {
_, err := os.Stat(path)
if err != nil {
return nil, errors.Wrap(err, "unable to load certificates")
}
caFiles, err := ioutil.ReadDir(path)
if err != nil {
return nil, err
}
certPool := x509.NewCertPool()
loadedCerts := 0
for _, cert := range caFiles {
// Skip directories
if cert.IsDir() {
continue
}
// Skip any files that do not contain the right extension
if _, ok := extensions[filepath.Ext(cert.Name())]; !ok {
continue
}
pem, err := ioutil.ReadFile(filepath.Join(path, cert.Name()))
if err != nil {
continue
}
if certPool.AppendCertsFromPEM(pem) {
loadedCerts++
}
}
if loadedCerts == 0 {
return nil, errors.New("no certificates were able to be successfully loaded")
}
return certPool, nil
}
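A sketch of where the pool typically ends up: fed into a tls.Config as the set of trusted roots. The helper name, directory, and extension set below are illustrative assumptions, not part of the file above.

package realis

import "crypto/tls"

// tlsFromCertDir is a hypothetical wrapper around createCertPool.
func tlsFromCertDir(dir string) (*tls.Config, error) {
	pool, err := createCertPool(dir, map[string]struct{}{".crt": {}, ".pem": {}})
	if err != nil {
		return nil, err
	}
	return &tls.Config{RootCAs: pool}, nil
}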
func validateAuroraURL(location string) (string, error) {
// If no protocol defined, assume http
if !strings.Contains(location, "://") {
location = "http://" + location
}
u, err := url.Parse(location)
if err != nil {
return "", errors.Wrap(err, "error parsing url")
}
// If no path provided assume /api
if u.Path == "" {
u.Path = "/api"
}
// If no port provided, assume default 8081
if u.Port() == "" {
u.Host = u.Host + ":8081"
}
if !(u.Scheme == "http" || u.Scheme == "https") {
return "", errors.Errorf("only protocols http and https are supported %v\n", u.Scheme)
}
// This could theoretically be elsewhere but we'll be strict for the sake of simplicity
if u.Path != apiPath {
return "", errors.Errorf("expected /api path %v\n", u.Path)
}
return u.String(), nil
}
func calculateCurrentBatch(updatingInstances int32, batchSizes []int32) int {
for i, size := range batchSizes {
updatingInstances -= size
if updatingInstances <= 0 {
return i
}
}
// Overflow batches
batchCount := len(batchSizes) - 1
lastBatchIndex := len(batchSizes) - 1
batchCount += int(updatingInstances / batchSizes[lastBatchIndex])
if updatingInstances%batchSizes[lastBatchIndex] != 0 {
batchCount++
}
return batchCount
}
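A worked example of the overflow arithmetic, mirroring the moreInstancesThanBatchesDecreasing test below: with 5 instances updating and sizes [2, 1], the loop consumes 2 (3 left) and then 1 (2 left); the last size (1) is then reused, adding 2 overflow batches, so the result is 1 + 2 = 3.

package realis

import "fmt"

// exampleCurrentBatch traces the overflow case described above.
func exampleCurrentBatch() {
	fmt.Println(calculateCurrentBatch(5, []int32{2, 1})) // prints 3
}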

114
util_test.go Normal file
View file

@ -0,0 +1,114 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestAuroraURLValidator(t *testing.T) {
t.Run("badURL", func(t *testing.T) {
url, err := validateAuroraURL("http://badurl.com/badpath")
assert.Empty(t, url)
assert.Error(t, err)
})
t.Run("URLHttp", func(t *testing.T) {
url, err := validateAuroraURL("http://goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLHttps", func(t *testing.T) {
url, err := validateAuroraURL("https://goodurl.com:8081/api")
assert.Equal(t, "https://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoPath", func(t *testing.T) {
url, err := validateAuroraURL("http://goodurl.com:8081")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("ipAddrNoPath", func(t *testing.T) {
url, err := validateAuroraURL("http://192.168.1.33:8081")
assert.Equal(t, "http://192.168.1.33:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocol", func(t *testing.T) {
url, err := validateAuroraURL("goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocolNoPathNoPort", func(t *testing.T) {
url, err := validateAuroraURL("goodurl.com")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
}
func TestCurrentBatchCalculator(t *testing.T) {
t.Run("singleBatchOverflow", func(t *testing.T) {
curBatch := calculateCurrentBatch(10, []int32{2})
assert.Equal(t, 4, curBatch)
})
t.Run("noInstancesUpdating", func(t *testing.T) {
curBatch := calculateCurrentBatch(0, []int32{2})
assert.Equal(t, 0, curBatch)
})
t.Run("evenMatchSingleBatch", func(t *testing.T) {
curBatch := calculateCurrentBatch(2, []int32{2})
assert.Equal(t, 0, curBatch)
})
t.Run("moreInstancesThanBatches", func(t *testing.T) {
curBatch := calculateCurrentBatch(5, []int32{1, 2})
assert.Equal(t, 2, curBatch)
})
t.Run("moreInstancesThanBatchesDecreasing", func(t *testing.T) {
curBatch := calculateCurrentBatch(5, []int32{2, 1})
assert.Equal(t, 3, curBatch)
})
t.Run("unevenFit", func(t *testing.T) {
curBatch := calculateCurrentBatch(2, []int32{1, 2})
assert.Equal(t, 1, curBatch)
})
t.Run("halfWay", func(t *testing.T) {
curBatch := calculateCurrentBatch(1, []int32{1, 2})
assert.Equal(t, 0, curBatch)
})
}
func TestCertPoolCreator(t *testing.T) {
extensions := map[string]struct{}{".crt": {}}
_, err := createCertPool("examples/certs", extensions)
assert.NoError(t, err)
t.Run("badDir", func(t *testing.T) {
_, err := createCertPool("idontexist", extensions)
assert.Error(t, err)
})
}

.clang-format
View file

@ -1,56 +0,0 @@
---
Language: Cpp
# BasedOnStyle: LLVM
AccessModifierOffset: -2
ConstructorInitializerIndentWidth: 2
AlignEscapedNewlinesLeft: false
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Inline
AlwaysBreakTemplateDeclarations: true
AlwaysBreakBeforeMultilineStrings: true
BreakBeforeBinaryOperators: true
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BinPackParameters: false
ColumnLimit: 100
ConstructorInitializerAllOnOneLineOrOnePerLine: true
DerivePointerAlignment: false
IndentCaseLabels: false
IndentWrappedFunctionNames: false
IndentFunctionDeclarationAfterType: false
MaxEmptyLinesToKeep: 1
KeepEmptyLinesAtTheStartOfBlocks: true
NamespaceIndentation: None
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakBeforeFirstCallParameter: 190
PenaltyBreakComment: 300
PenaltyBreakString: 10000
PenaltyBreakFirstLessLess: 120
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 1200
PointerAlignment: Left
SpacesBeforeTrailingComments: 1
Cpp11BracedListStyle: true
Standard: Auto
IndentWidth: 2
TabWidth: 4
UseTab: Never
BreakBeforeBraces: Attach
SpacesInParentheses: false
SpacesInAngles: false
SpaceInEmptyParentheses: false
SpacesInCStyleCastParentheses: false
SpacesInContainerLiterals: true
SpaceBeforeAssignmentOperators: true
ContinuationIndentWidth: 4
CommentPragmas: '^ IWYU pragma:'
ForEachMacros: [ foreach, Q_FOREACH, BOOST_FOREACH ]
SpaceBeforeParens: ControlStatements
DisableFormat: false
...

.dockerignore
View file

@ -1 +0,0 @@
.git/

.editorconfig
View file

@ -1,112 +0,0 @@
#
## Licensed to the Apache Software Foundation (ASF) under one
## or more contributor license agreements. See the NOTICE file
## distributed with this work for additional information
## regarding copyright ownership. The ASF licenses this file
## to you under the Apache License, Version 2.0 (the
## "License"); you may not use this file except in compliance
## with the License. You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing,
## software distributed under the License is distributed on an
## "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
## KIND, either express or implied. See the License for the
## specific language governing permissions and limitations
## under the License.
##
#
# EditorConfig: http://editorconfig.org
# see doc/coding_standards.md
root = true
[*]
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
# ActionScript
# [*.as]
# C
# [*.c]
# C++
[*.cpp]
indent_style = space
indent_size = 2
# C-Sharp
# [*.cs]
# D
# [*.d]
# Erlang
# [*.erl]
# Go-lang
[*.go]
indent_style = tab
indent_size = 8
# C header files
# [*.h]
# Haskell
# [*.hs]
# Haxe
# [*.hx]
# Java
# [*.java]
# Javascript
[*.js]
indent_style = space
indent_size = 2
# JSON
[*.json]
indent_style = space
indent_size = 2
# Lua
# [*.lua]
[*.markdown]
indent_style = space
trim_trailing_whitespace = false
[*.md]
indent_style = space
trim_trailing_whitespace = false
# OCaml
# [*.ml]
# Delphi Pascal
# [*.pas]
# PHP
# [*.php]
# Perl
# [*.pm]
# Python
# [*.py]
# Ruby
# [*.rb]
# Typescript
# [*.ts]
# XML
# [*.xml]

.gitattributes
View file

@ -1 +0,0 @@
* text=auto

.gitignore
View file

@ -1,326 +0,0 @@
# generic ignores
*.la
*.lo
*.o
*.deps
*.dirstamp
*.libs
*.log
*.trs
*.suo
*.pyc
*.cache
*.user
*.ipch
*.sdf
*.jar
*.exe
*.dll
*_ReSharper*
*.opensdf
*.swp
*.hi
*~
.*project
junit*.properties
.idea
gen-*
Makefile
Makefile.in
aclocal.m4
acinclude.m4
autom4te.cache
cmake-*
node_modules
compile
test-driver
erl_crash.dump
.sonar
.DS_Store
.svn
.vagrant
/contrib/.vagrant/
/aclocal/libtool.m4
/aclocal/lt*.m4
/autoscan.log
/autoscan-*.log
/cmake_*
/compiler/cpp/compiler.VC.db
/compiler/cpp/compiler.VC.VC.opendb
/compiler/cpp/test/plugin/t_cpp_generator.cc
/compiler/cpp/src/thrift/plugin/plugin_constants.cpp
/compiler/cpp/src/thrift/plugin/plugin_constants.h
/compiler/cpp/src/thrift/plugin/plugin_types.cpp
/compiler/cpp/src/thrift/plugin/plugin_types.h
/compiler/cpp/test/*test
/compiler/cpp/test/thrift-gen-*
/compiler/cpp/src/thrift/thrift-bootstrap
/compiler/cpp/src/thrift/plugin/gen.stamp
/compiler/cpp/Debug
/compiler/cpp/Release
/compiler/cpp/src/thrift/libparse.a
/compiler/cpp/src/thrift/thriftl.cc
/compiler/cpp/src/thrift/thrifty.cc
/compiler/cpp/src/thrift/thrifty.hh
/compiler/cpp/src/thrift/windows/version.h
/compiler/cpp/thrift
/compiler/cpp/thriftl.cc
/compiler/cpp/thrifty.cc
/compiler/cpp/lex.yythriftl.cc
/compiler/cpp/thrifty.h
/compiler/cpp/thrifty.hh
/compiler/cpp/src/thrift/version.h
/config.*
/configure
/configure.lineno
/configure.scan
/contrib/fb303/config.cache
/contrib/fb303/config.log
/contrib/fb303/config.status
/contrib/fb303/configure
/contrib/fb303/cpp/libfb303.a
/contrib/fb303/java/build/
/contrib/fb303/py/build/
/contrib/fb303/py/fb303/FacebookService-remote
/contrib/fb303/py/fb303/FacebookService.py
/contrib/fb303/py/fb303/__init__.py
/contrib/fb303/py/fb303/constants.py
/contrib/fb303/py/fb303/ttypes.py
/depcomp
/install-sh
/lib/cpp/Debug/
/lib/cpp/Debug-mt/
/lib/cpp/Release/
/lib/cpp/Release-mt/
/lib/cpp/src/thrift/qt/moc_TQTcpServer.cpp
/lib/cpp/src/thrift/qt/moc__TQTcpServer.cpp
/lib/cpp/src/thrift/config.h
/lib/cpp/src/thrift/stamp-h2
/lib/cpp/test/Benchmark
/lib/cpp/test/AllProtocolsTest
/lib/cpp/test/DebugProtoTest
/lib/cpp/test/DenseProtoTest
/lib/cpp/test/EnumTest
/lib/cpp/test/JSONProtoTest
/lib/cpp/test/OptionalRequiredTest
/lib/cpp/test/SecurityTest
/lib/cpp/test/SpecializationTest
/lib/cpp/test/ReflectionTest
/lib/cpp/test/RecursiveTest
/lib/cpp/test/TFDTransportTest
/lib/cpp/test/TFileTransportTest
/lib/cpp/test/TInterruptTest
/lib/cpp/test/TNonblockingServerTest
/lib/cpp/test/TPipedTransportTest
/lib/cpp/test/TServerIntegrationTest
/lib/cpp/test/TSocketInterruptTest
/lib/cpp/test/TransportTest
/lib/cpp/test/UnitTests
/lib/cpp/test/ZlibTest
/lib/cpp/test/OpenSSLManualInitTest
/lib/cpp/test/concurrency_test
/lib/cpp/test/link_test
/lib/cpp/test/processor_test
/lib/cpp/test/tests.xml
/lib/cpp/concurrency_test
/lib/cpp/*.pc
/lib/cpp/x64/Debug/
/lib/cpp/x64/Debug-mt/
/lib/cpp/x64/Release
/lib/cpp/x64/Release-mt
/lib/c_glib/*.gcda
/lib/c_glib/*.gcno
/lib/c_glib/*.loT
/lib/c_glib/src/thrift/config.h
/lib/c_glib/src/thrift/stamp-h3
/lib/c_glib/test/*.gcno
/lib/c_glib/test/testwrapper.sh
/lib/c_glib/test/testwrapper-test*
/lib/c_glib/test/testapplicationexception
/lib/c_glib/test/testbinaryprotocol
/lib/c_glib/test/testcompactprotocol
/lib/c_glib/test/testbufferedtransport
/lib/c_glib/test/testcontainertest
/lib/c_glib/test/testdebugproto
/lib/c_glib/test/testfdtransport
/lib/c_glib/test/testframedtransport
/lib/c_glib/test/testmemorybuffer
/lib/c_glib/test/testoptionalrequired
/lib/c_glib/test/testsimpleserver
/lib/c_glib/test/teststruct
/lib/c_glib/test/testthrifttest
/lib/c_glib/test/testthrifttestclient
/lib/c_glib/test/testtransportsocket
/lib/c_glib/test/testserialization
/lib/c_glib/thriftc.pc
/lib/c_glib/thrift_c_glib.pc
/lib/csharp/**/bin/
/lib/csharp/**/obj/
/lib/csharp/src/packages
/lib/d/test/*.pem
/lib/d/libthriftd*.a
/lib/d/test/async_test
/lib/d/test/client_pool_test
/lib/d/test/serialization_benchmark
/lib/d/test/stress_test_server
/lib/d/test/thrift_test_client
/lib/d/test/thrift_test_server
/lib/d/test/transport_test
/lib/d/unittest/
/lib/dart/coverage
/lib/dart/**/.packages
/lib/dart/**/packages
/lib/dart/**/.pub/
/lib/dart/**/pubspec.lock
/lib/delphi/src/*.dcu
/lib/delphi/test/*.identcache
/lib/delphi/test/*.local
/lib/delphi/test/*.dcu
/lib/delphi/test/*.2007
/lib/delphi/test/*.dproj
/lib/delphi/test/*.dproj
/lib/delphi/test/codegen/*.bat
/lib/delphi/test/skip/*.local
/lib/delphi/test/skip/*.identcache
/lib/delphi/test/skip/*.identcache
/lib/delphi/test/skip/*.dproj
/lib/delphi/test/skip/*.dproj
/lib/delphi/test/skip/*.2007
/lib/delphi/test/serializer/*.identcache
/lib/delphi/test/serializer/*.dproj
/lib/delphi/test/serializer/*.local
/lib/delphi/test/serializer/*.2007
/lib/delphi/test/serializer/*.dcu
/lib/delphi/test/multiplexed/*.dproj
/lib/delphi/test/multiplexed/*.2007
/lib/delphi/test/multiplexed/*.local
/lib/delphi/test/multiplexed/*.identcache
/lib/delphi/test/multiplexed/*.dcu
/lib/delphi/test/typeregistry/*.2007
/lib/delphi/test/typeregistry/*.dproj
/lib/delphi/test/typeregistry/*.identcache
/lib/delphi/test/typeregistry/*.local
/lib/delphi/test/typeregistry/*.dcu
/lib/erl/.generated
/lib/erl/.eunit
/lib/erl/ebin
/lib/erl/deps/
/lib/erl/src/thrift.app.src
/lib/erl/test/*.hrl
/lib/erl/test/*.beam
/lib/haxe/test/bin
/lib/hs/dist
/lib/java/build
/lib/js/test/build
/lib/nodejs/coverage
/lib/nodejs/node_modules/
/lib/perl/MANIFEST
/lib/perl/MYMETA.json
/lib/perl/MYMETA.yml
/lib/perl/Makefile-perl.mk
/lib/perl/blib
/lib/perl/pm_to_blib
/lib/py/build
/lib/py/thrift.egg-info/
/lib/rb/Gemfile.lock
/lib/rb/debug_proto_test
/lib/rb/.config
/lib/rb/ext/conftest.dSYM/
/lib/rb/ext/mkmf.log
/lib/rb/ext/thrift_native.bundle
/lib/rb/ext/thrift_native.so
/lib/rb/test/
/lib/rb/thrift-*.gem
/lib/php/src/ext/thrift_protocol/Makefile.*
/lib/php/src/ext/thrift_protocol/build/
/lib/php/src/ext/thrift_protocol/config.*
/lib/php/src/ext/thrift_protocol/configure
/lib/php/src/ext/thrift_protocol/configure.in
/lib/php/src/ext/thrift_protocol/install-sh
/lib/php/src/ext/thrift_protocol/libtool
/lib/php/src/ext/thrift_protocol/ltmain.sh
/lib/php/src/ext/thrift_protocol/missing
/lib/php/src/ext/thrift_protocol/mkinstalldirs
/lib/php/src/ext/thrift_protocol/modules/
/lib/php/src/ext/thrift_protocol/php_thrift_protocol.lo
/lib/php/src/ext/thrift_protocol/run-tests.php
/lib/php/src/ext/thrift_protocol/thrift_protocol.la
/lib/php/src/ext/thrift_protocol/tmp-php.ini
/lib/php/src/packages/
/lib/php/test/TEST-*.xml
/lib/php/test/packages/
/lib/py/dist/
/lib/erl/logs/
/lib/go/test/gopath/
/lib/go/test/ThriftTest.thrift
/libtool
/ltmain.sh
/missing
/node_modules/
/stamp-h1
/test/features/results.json
/test/results.json
/test/c_glib/test_client
/test/c_glib/test_server
/test/cpp/StressTest
/test/cpp/StressTestNonBlocking
/test/cpp/TestClient
/test/cpp/TestServer
/test/dart/**/.packages
/test/dart/**/packages
/test/dart/**/.pub/
/test/dart/**/pubspec.lock
/test/log/
/test/test.log
/test/erl/.generated
/test/erl/ebin
/test/go/bin/
/test/go/ThriftTest.thrift
/test/go/gopath
/test/go/pkg/
/test/go/src/code.google.com/
/test/go/src/github.com/golang/
/test/go/src/gen/
/test/go/src/thrift
/test/haxe/bin
/test/hs/TestClient
/test/hs/TestServer
/test/py.twisted/_trial_temp/
/test/rb/Gemfile.lock
/tutorial/cpp/TutorialClient
/tutorial/cpp/TutorialServer
/tutorial/c_glib/tutorial_client
/tutorial/c_glib/tutorial_server
/tutorial/csharp/CsharpServer/obj
/tutorial/csharp/CsharpServer/bin
/tutorial/csharp/CsharpClient/obj
/tutorial/csharp/CsharpClient/bin
/tutorial/d/async_client
/tutorial/d/client
/tutorial/d/server
/tutorial/dart/**/.packages
/tutorial/dart/**/packages
/tutorial/dart/**/.pub/
/tutorial/dart/**/pubspec.lock
/tutorial/delphi/*.dsk
/tutorial/delphi/*.local
/tutorial/delphi/*.tvsconfig
/tutorial/delphi/DelphiClient/dcu
/tutorial/delphi/DelphiServer/dcu
/tutorial/delphi/DelphiClient/*.local
/tutorial/delphi/DelphiClient/*.identcache
/tutorial/delphi/DelphiServer/*.identcache
/tutorial/delphi/DelphiServer/*.local
/tutorial/go/go-tutorial
/tutorial/go/calculator-remote
/tutorial/go/src/shared
/tutorial/go/src/tutorial
/tutorial/go/src/git.apache.org
/tutorial/haxe/bin
/tutorial/hs/dist/
/tutorial/java/build/
/tutorial/js/build/
/ylwrap

.travis.yml
View file

@ -1,199 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# build Apache Thrift on Travis CI - https://travis-ci.org/
sudo: required
dist: trusty
services:
- docker
install:
- (travis_wait ./build/docker/check_unmodified.sh $DISTRO && touch .unmodified) || true
- if [ ! -f .unmodified ]; then travis_retry travis_wait docker build -q -t thrift-build:$DISTRO build/docker/$DISTRO; fi
script:
- docker run --net=host -e BUILD_LIBS="$BUILD_LIBS" $BUILD_ENV -v $(pwd):/thrift/src -it thrift-build:$DISTRO build/docker/scripts/$SCRIPT $BUILD_ARG
env:
global:
- TEST_NAME=""
- SCRIPT="cmake.sh"
- BUILD_ARG=""
- BUILD_ENV="-e CC=clang -e CXX=clang++"
- DISTRO=ubuntu
- BUILD_LIBS="CPP C_GLIB HASKELL JAVA PYTHON TESTING TUTORIALS" # only meaningful for CMake builds
matrix:
- TEST_NAME="Cross Language Tests (Binary and Header Protocols)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(binary|header)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
- TEST_NAME="Cross Language Tests (Debian) (Binary and Header Protocols)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(binary|header)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
DISTRO=debian
- TEST_NAME="Cross Language Tests (Compact and JSON Protocols)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(compact|json)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
- TEST_NAME="Cross Language Tests (Debian) (Compact and JSON Protocols)"
SCRIPT="cross-test.sh"
BUILD_ARG="-'(compact|json)'"
BUILD_ENV="-e CC=clang -e CXX=clang++ -e THRIFT_CROSSTEST_CONCURRENCY=4"
DISTRO=debian
# TODO: Remove them once migrated to CMake
# Autotools builds
- TEST_NAME="C C++ C# D Erlang Haxe Go (automake)"
SCRIPT="autotools.sh"
BUILD_ARG="--without-dart --without-haskell --without-java --without-lua --without-nodejs --without-perl --without-php --without-php_extension --without-python --without-ruby"
- TEST_NAME="C C++ - GCC (automake)"
SCRIPT="autotools.sh"
BUILD_ARG="--without-csharp --without-java --without-erlang --without-nodejs --without-lua --without-python --without-perl --without-php --without-php_extension --without-dart --without-ruby --without-haskell --without-go --without-haxe --without-d"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="Java Lua PHP Ruby Dart (automake)"
SCRIPT="autotools.sh"
BUILD_ARG="--without-cpp --without-haskell --without-c_glib --without-csharp --without-d --without-erlang --without-go --without-haxe --without-nodejs --without-python --without-perl"
# These are flaky (due to cabal and npm network/server failures) and also have lengthy output
- TEST_NAME="Haskell Node.js Python Perl (automake)"
SCRIPT="autotools.sh"
BUILD_ARG="--without-cpp --without-c_glib --without-csharp --without-d --without-dart --without-erlang --without-go --without-haxe --without-java --without-lua --without-php --without-php_extension --without-ruby"
# CMake build
- TEST_NAME="All"
- TEST_NAME="All (Debian)"
DISTRO=debian
- TEST_NAME="C C++ - GCC"
BUILD_LIBS="CPP C_GLIB TESTING TUTORIALS"
BUILD_ARG="-DWITH_PYTHON=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="C++ (Boost Thread)"
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_BOOSTTHREADS=ON -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
- TEST_NAME="C++ (Boost Thread - GCC)"
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_BOOSTTHREADS=ON -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="C++ (Std Thread)"
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_STDTHREADS=ON -DCMAKE_CXX_FLAGS='-std=c++11' -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
- TEST_NAME="C++ (Std Thread - GCC)"
BUILD_LIBS="CPP TESTING TUTORIALS"
BUILD_ARG="-DWITH_STDTHREADS=ON -DCMAKE_CXX_FLAGS='-std=c++11' -DWITH_PYTHON=OFF -DWITH_C_GLIB=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="Compiler (mingw)"
BUILD_LIBS=""
BUILD_ARG="-DCMAKE_TOOLCHAIN_FILE=../build/cmake/mingw32-toolchain.cmake -DBUILD_COMPILER=ON -DBUILD_LIBRARIES=OFF -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF"
BUILD_ENV=""
- TEST_NAME="All - GCC (CentOS)"
BUILD_ENV="-e CC=gcc -e CXX=g++"
DISTRO=centos
- TEST_NAME="C C++ - Clang (CentOS)"
BUILD_LIBS="CPP C_GLIB TESTING TUTORIALS"
BUILD_ARG="-DWITH_PYTHON=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
DISTRO=centos
- TEST_NAME="Python 2.6 (CentOS 6)"
BUILD_LIBS="PYTHON TESTING TUTORIALS"
BUILD_ARG="-DWITH_PYTHON=ON -DWITH_CPP=OFF -DWITH_JAVA=OFF -DWITH_HASKELL=OFF"
BUILD_ENV="-e CC=gcc -e CXX=g++"
DISTRO=centos6
# Distribution
- TEST_NAME="make dist"
SCRIPT="make-dist.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="Debian Packages"
SCRIPT="dpkg.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
- TEST_NAME="make dist (Debian)"
SCRIPT="make-dist.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
DISTRO=debian
- TEST_NAME="Debian Packages (Debian)"
SCRIPT="dpkg.sh"
BUILD_ENV="-e CC=gcc -e CXX=g++"
DISTRO=debian
matrix:
include:
# QA jobs for code analytics and metrics
#
# C/C++ static code analysis with cppcheck
# add --error-exitcode=1 to --enable=all as soon as everything is fixed
#
# Python code style check with flake8
#
# search for TODO etc within source tree
# some statistics about the code base
# some info about the build machine
- env: TEST_NAME="cppcheck, flake8, TODO FIXME HACK, LoC and system info"
install:
- travis_retry sudo apt-get update
- travis_retry sudo apt-get install -ym cppcheck sloccount python-flake8
script:
# Compiler cppcheck (All)
- cppcheck --force --quiet --inline-suppr --enable=all -j2 compiler/cpp/src
# C++ cppcheck (All)
- cppcheck --force --quiet --inline-suppr --enable=all -j2 lib/cpp/src lib/cpp/test test/cpp tutorial/cpp
# C Glib cppcheck (All)
- cppcheck --force --quiet --inline-suppr --enable=all -j2 lib/c_glib/src lib/c_glib/test test/c_glib/src tutorial/c_glib
# Silent error checks
- cppcheck --force --quiet --inline-suppr --error-exitcode=1 -j2 compiler/cpp/src
- cppcheck --force --quiet --inline-suppr --error-exitcode=1 -j2 lib/cpp/src lib/cpp/test test/cpp tutorial/cpp
- cppcheck --force --quiet --inline-suppr --error-exitcode=1 -j2 lib/c_glib/src lib/c_glib/test test/c_glib/src tutorial/c_glib
# Python code style
- flake8 --ignore=E501 lib/py
- flake8 tutorial/py
- flake8 --ignore=E501 test/py
- flake8 test/py.twisted
- flake8 test/py.tornado
- flake8 --ignore=E501 test/test.py
- flake8 --ignore=E501 test/crossrunner
- flake8 test/features
# TODO etc
- grep -r TODO *
- grep -r FIXME *
- grep -r HACK *
# LoC
- sloccount .
# System Info
- dpkg -l
- uname -a

File diff suppressed because it is too large

CMakeLists.txt
View file

@ -1,117 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
cmake_minimum_required(VERSION 2.8.12)
project("Apache Thrift")
set(CMAKE_MODULE_PATH "${CMAKE_MODULE_PATH}" "${CMAKE_CURRENT_SOURCE_DIR}/build/cmake")
# TODO: add `git rev-parse --short HEAD`
# Read the version information from the Autoconf file
file (STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/configure.ac" CONFIGURE_AC REGEX "AC_INIT\\(.*\\)" )
# The following variable is used in the version.h.in file
string(REGEX REPLACE "AC_INIT\\(\\[.*\\], \\[([0-9]+\\.[0-9]+\\.[0-9]+(-dev)?)\\]\\)" "\\1" PACKAGE_VERSION ${CONFIGURE_AC})
message(STATUS "Parsed Thrift package version: ${PACKAGE_VERSION}")
# These are internal to CMake
string(REGEX REPLACE "([0-9]+\\.[0-9]+\\.[0-9]+)(-dev)?" "\\1" thrift_VERSION ${PACKAGE_VERSION})
string(REGEX REPLACE "([0-9]+)\\.[0-9]+\\.[0-9]+" "\\1" thrift_VERSION_MAJOR ${thrift_VERSION})
string(REGEX REPLACE "[0-9]+\\.([0-9])+\\.[0-9]+" "\\1" thrift_VERSION_MINOR ${thrift_VERSION})
string(REGEX REPLACE "[0-9]+\\.[0-9]+\\.([0-9]+)" "\\1" thrift_VERSION_PATCH ${thrift_VERSION})
message(STATUS "Parsed Thrift version: ${thrift_VERSION} (${thrift_VERSION_MAJOR}.${thrift_VERSION_MINOR}.${thrift_VERSION_PATCH})")
# Some default settings
include(DefineCMakeDefaults)
# Build time options are defined here
include(DefineOptions)
include(DefineInstallationPaths)
# Based on the options set some platform specifics
include(DefinePlatformSpecifc)
# Generate the config.h file
include(ConfigureChecks)
# Package it
include(CPackConfig)
find_package(Threads)
include(CTest)
if(BUILD_TESTING)
message(STATUS "Building with unittests")
enable_testing()
# Define "make check" as alias for "make test"
add_custom_target(check COMMAND ctest)
else ()
message(STATUS "Building without tests")
endif ()
if(BUILD_COMPILER)
if(NOT EXISTS ${THRIFT_COMPILER})
set(THRIFT_COMPILER $<TARGET_FILE:thrift-compiler>)
endif()
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/compiler/cpp)
elseif(EXISTS ${THRIFT_COMPILER})
add_executable(thrift-compiler IMPORTED)
set_property(TARGET thrift-compiler PROPERTY IMPORTED_LOCATION ${THRIFT_COMPILER})
endif()
if(BUILD_CPP)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/lib/cpp)
if(BUILD_TUTORIALS)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/tutorial/cpp)
endif()
if(BUILD_TESTING)
if(WITH_LIBEVENT AND WITH_ZLIB AND WITH_OPENSSL)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/test/cpp)
else()
message(WARNING "libevent and/or ZLIB and/or OpenSSL not found or disabled; will not build some tests")
endif()
endif()
endif()
if(BUILD_C_GLIB)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/lib/c_glib)
endif()
if(BUILD_JAVA)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/lib/java)
endif()
if(BUILD_PYTHON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/lib/py)
if(BUILD_TESTING)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/test/py)
endif()
endif()
if(BUILD_HASKELL)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/lib/hs)
if(BUILD_TESTING)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/test/hs)
endif()
endif()
PRINT_CONFIG_SUMMARY()

CONTRIBUTING.md
View file

@ -1,49 +0,0 @@
## How to contribute
1. Help to review and verify existing patches
1. Make sure your issue is not already in the [Jira issue tracker](http://issues.apache.org/jira/browse/THRIFT)
1. If not, create a ticket describing the change you're proposing in the [Jira issue tracker](http://issues.apache.org/jira/browse/THRIFT)
1. Contribute your patch using one of the two methods below
### Contributing via a patch
1. Check out the latest version of the source code
* git clone https://git-wip-us.apache.org/repos/asf/thrift.git thrift
1. Modify the source to include the improvement/bugfix
* Remember to provide *tests* for all submitted changes
* When bugfixing: add test that will isolate bug *before* applying change that fixes it
* Verify that you follow [Thrift Coding Standards](/docs/coding_standards) (you can run 'make style', which ensures proper format for some languages)
1. Create a patch from project root directory (e.g. you@dev:~/thrift $ ):
* git diff > ../thrift-XXX-my-new-feature.patch
1. Attach the newly generated patch to the issue
1. Wait for other contributors or committers to review your new addition
1. Wait for a committer to commit your patch
### Contributing via GitHub pull requests
1. Create a fork of http://github.com/apache/thrift
1. Create a branch for your changes (best practice is to use the issue as the branch name, e.g. THRIFT-9999)
1. Modify the source to include the improvement/bugfix
* Remember to provide *tests* for all submitted changes
* When bugfixing: add test that will isolate bug *before* applying change that fixes it
* Verify that you follow [Thrift Coding Standards](/docs/coding_standards) (you can run 'make style', which ensures proper format for some languages)
* Verify that your change works on other platforms by adding a GitHub service hook to [Travis CI](http://docs.travis-ci.com/user/getting-started/#Step-one%3A-Sign-in) and [AppVeyor](http://www.appveyor.com/docs)
1. Commit and push changes to your branch (please use issue name and description as commit title, e.g. THRIFT-9999 make it perfect)
1. Issue a pull request with the Jira ticket number you are working on in its name
1. Wait for other contributors or committers to review your new addition
1. Wait for a committer to commit your patch
### More info
Plenty of information on why and how to contribute is available on the Apache Software Foundation (ASF) web site. In particular, we recommend the following:
* [Contributors Tech Guide](http://www.apache.org/dev/contributors)
* [Get involved!](http://www.apache.org/foundation/getinvolved.html)
* [Legal aspects on Submission of Contributions (Patches)](http://www.apache.org/licenses/LICENSE-2.0.html#contributions)

Dockerfile
View file

@ -1,61 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Goal: provide a thrift-compiler Docker image
#
# Usage:
# docker run -v "${PWD}:/data" thrift/thrift-compiler -gen cpp -o /data/ /data/test/ThriftTest.thrift
#
# further details on docker for thrift is here build/docker/
#
# TODO: push to apache/thrift-compiler instead of thrift/thrift-compiler
FROM debian:jessie
MAINTAINER Apache Thrift <dev@thrift.apache.org>
ENV DEBIAN_FRONTEND noninteractive
ADD . /thrift
RUN buildDeps=" \
flex \
bison \
g++ \
make \
cmake \
curl \
"; \
apt-get update && apt-get install -y --no-install-recommends $buildDeps \
&& mkdir /tmp/cmake-build && cd /tmp/cmake-build \
&& cmake \
-DBUILD_COMPILER=ON \
-DBUILD_LIBRARIES=OFF \
-DBUILD_TESTING=OFF \
-DBUILD_EXAMPLES=OFF \
/thrift \
&& cmake --build . --config Release \
&& make install \
&& curl -k -sSL "https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz" -o /tmp/go.tar.gz \
&& tar xzf /tmp/go.tar.gz -C /tmp \
&& cp /tmp/go/bin/gofmt /usr/bin/gofmt \
&& apt-get purge -y --auto-remove $buildDeps \
&& apt-get clean \
&& rm -rf /tmp/* \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["thrift"]

LICENSE
View file

@ -1,239 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------
SOFTWARE DISTRIBUTED WITH THRIFT:
The Apache Thrift software includes a number of subcomponents with
separate copyright notices and license terms. Your use of the source
code for these subcomponents is subject to the terms and
conditions of the following licenses.
--------------------------------------------------
Portions of the following files are licensed under the MIT License:
lib/erl/src/Makefile.am
Please see doc/otp-base-license.txt for the full terms of this license.
--------------------------------------------------
For the aclocal/ax_boost_base.m4 and contrib/fb303/aclocal/ax_boost_base.m4 components:
# Copyright (c) 2007 Thomas Porschberg <thomas@randspringer.de>
#
# Copying and distribution of this file, with or without
# modification, are permitted in any medium without royalty provided
# the copyright notice and this notice are preserved.
--------------------------------------------------
For the lib/nodejs/lib/thrift/json_parse.js:
/*
json_parse.js
2015-05-02
Public Domain.
NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
*/
(By Douglas Crockford <douglas@crockford.com>)
--------------------------------------------------

Makefile.am
View file

@ -1,131 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
ACLOCAL_AMFLAGS = -I ./aclocal
if WITH_PLUGIN
# To enable bootstrap, build order is lib/cpp -> compiler -> others
SUBDIRS = lib/cpp compiler/cpp lib
if WITH_TESTS
SUBDIRS += lib/cpp/test
endif
else
SUBDIRS = compiler/cpp lib
endif
if WITH_TESTS
SUBDIRS += test
endif
if WITH_TUTORIAL
SUBDIRS += tutorial
endif
dist-hook:
find $(distdir) -type f \( -iname ".DS_Store" -or -iname "._*" -or -iname ".gitignore" \) | xargs rm -rf
find $(distdir) -type d \( -iname ".deps" -or -iname ".libs" \) | xargs rm -rf
find $(distdir) -type d \( -iname ".svn" -or -iname ".git" \) | xargs rm -rf
print-version:
@echo $(VERSION)
.PHONY: precross cross
precross-%: all
$(MAKE) -C $* precross
precross: all precross-test precross-lib
empty :=
space := $(empty) $(empty)
comma := ,
CROSS_LANGS = @MAYBE_CPP@ @MAYBE_C_GLIB@ @MAYBE_D@ @MAYBE_JAVA@ @MAYBE_CSHARP@ @MAYBE_PYTHON@ @MAYBE_PY3@ @MAYBE_RUBY@ @MAYBE_HASKELL@ @MAYBE_PERL@ @MAYBE_PHP@ @MAYBE_GO@ @MAYBE_NODEJS@ @MAYBE_DART@ @MAYBE_ERLANG@ @MAYBE_LUA@
CROSS_LANGS_COMMA_SEPARATED = $(subst $(space),$(comma),$(CROSS_LANGS))
if WITH_PY3
CROSS_PY=$(PYTHON3)
else
CROSS_PY=$(PYTHON)
endif
if WITH_PYTHON
crossfeature: precross
$(CROSS_PY) test/test.py --retry-count 3 --features .* --skip-known-failures --server $(CROSS_LANGS_COMMA_SEPARATED)
else
# feature test needs python build
crossfeature:
endif
cross-%: precross crossfeature
$(CROSS_PY) test/test.py --retry-count 3 --skip-known-failures --server $(CROSS_LANGS_COMMA_SEPARATED) --client $(CROSS_LANGS_COMMA_SEPARATED) --regex "$*"
cross: cross-.*
TIMES = 1 2 3
fail: precross
$(CROSS_PY) test/test.py || true
$(CROSS_PY) test/test.py --update-expected-failures=overwrite
$(foreach var,$(TIMES),test/test.py -s || true;test/test.py --update-expected-failures=merge;)
codespell_skip_files = \
*.jar \
*.class \
*.so \
*.a \
*.la \
*.o \
*.p12 \
*OCamlMakefile \
.keystore \
.truststore \
CHANGES \
config.sub \
configure \
depcomp \
libtool.m4 \
output.* \
rebar \
thrift
skipped_files = $(subst $(space),$(comma),$(codespell_skip_files))
style-local:
codespell --write-changes --skip=$(skipped_files) --disable-colors
EXTRA_DIST = \
.clang-format \
.editorconfig \
.travis.yml \
appveyor.yml \
bower.json \
build \
CMakeLists.txt \
composer.json \
contrib \
CONTRIBUTING.md \
debian \
doc \
doap.rdf \
package.json \
sonar-project.properties \
Dockerfile \
LICENSE \
CHANGES \
NOTICE \
README.md \
Thrift.podspec

NOTICE
View file

@ -1,5 +0,0 @@
Apache Thrift
Copyright 2006-2010 The Apache Software Foundation.
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

README.md
View file

@ -1,166 +0,0 @@
Apache Thrift
=============
[![Build Status](https://travis-ci.org/apache/thrift.svg?branch=master)](https://travis-ci.org/apache/thrift)
[![AppVeyor Build status](https://ci.appveyor.com/api/projects/status/e2qks7enyp9gw7ma?svg=true)](https://ci.appveyor.com/project/apache/thrift)
Introduction
============
Thrift is a lightweight, language-independent software stack with an
associated code generation mechanism for RPC. Thrift provides clean
abstractions for data transport, data serialization, and application
level processing. The code generation system takes a simple definition
language as its input and generates code across programming languages that
uses the abstracted stack to build interoperable RPC clients and servers.
Thrift is specifically designed to support non-atomic version changes
across client and server code.
For more details on Thrift's design and implementation, take a gander at
the Thrift whitepaper included in this distribution or at the README.md files
in your particular subdirectory of interest.
Hierarchy
=========
thrift/
compiler/
Contains the Thrift compiler, implemented in C++.
lib/
Contains the Thrift software library implementation, subdivided by
language of implementation.
cpp/
go/
java/
php/
py/
rb/
test/
Contains sample Thrift files and test code across the target programming
languages.
tutorial/
Contains a basic tutorial that will teach you how to develop software
using Thrift.
Requirements
============
See http://thrift.apache.org/docs/install for an up-to-date list of build requirements.
Resources
=========
More information about Thrift can be obtained on the Thrift webpage at:
http://thrift.apache.org
Acknowledgments
===============
Thrift was inspired by pillar, a lightweight RPC tool written by Adam D'Angelo,
and also by Google's protocol buffers.
Installation
============
If you are building for the first time out of the source repository, you will
need to generate the configure scripts. (This is not necessary if you
downloaded a tarball.) From the top directory, do:
./bootstrap.sh
Once the configure scripts are generated, thrift can be configured.
From the top directory, do:
./configure
You may need to specify the location of the boost files explicitly.
If you installed boost in /usr/local, you would run configure as follows:
./configure --with-boost=/usr/local
Note that by default the thrift C++ library is typically built with debugging
symbols included. If you want to customize these options you should use the
CXXFLAGS option in configure, as such:
./configure CXXFLAGS='-g -O2'
./configure CFLAGS='-g -O2'
./configure CPPFLAGS='-DDEBUG_MY_FEATURE'
To enable gcov, which requires the options -fprofile-arcs and -ftest-coverage, run:
./configure --enable-coverage
Run ./configure --help to see other configuration options
Please be aware that the Python library will ignore the --prefix option
and just install wherever Python's distutils puts it (usually along
the lines of /usr/lib/pythonX.Y/site-packages/). If you need to control
where the Python modules are installed, set the PY_PREFIX variable.
(DESTDIR is respected for Python and C++.)
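For example (hypothetical prefix, shown only as a sketch), to place the Python
modules under /opt/thrift-python instead, run:
make install PY_PREFIX=/opt/thrift-python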
Make thrift:
make
From the top directory, become superuser and do:
make install
Note that some language packages must be installed manually using build tools
better suited to those languages (at the time of this writing, this applies
to Java, Ruby, PHP).
Look for the README.md file in the lib/<language>/ folder for more details on the
installation of each language library package.
Testing
=======
There are a large number of client library tests that can all be run
from the top-level directory.
make -k check
This will make all of the libraries (as necessary), and run through
the unit tests defined in each of the client libraries. If a single
language fails, the make check will continue on and provide a synopsis
at the end.
To run the cross-language test suite, please run:
make cross
This will run a set of tests that use different language clients and
servers.
License
=======
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.

@@ -1,18 +0,0 @@
Pod::Spec.new do |s|
s.name = "Thrift"
s.version = "0.10.0"
s.summary = "Apache Thrift is a lightweight, language-independent software stack with an associated code generation mechanism for RPC."
s.description = <<-DESC
The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.
DESC
s.homepage = "http://thrift.apache.org"
s.license = { :type => 'Apache License, Version 2.0', :url => 'https://raw.github.com/apache/thrift/thrift-0.9.0/LICENSE' }
s.author = { "The Apache Software Foundation" => "apache@apache.org" }
s.requires_arc = true
s.ios.deployment_target = '7.0'
s.osx.deployment_target = '10.8'
s.ios.framework = 'CFNetwork'
s.osx.framework = 'CoreServices'
s.source = { :git => "https://github.com/apache/thrift.git", :tag => "thrift-0.10.0" }
s.source_files = 'lib/cocoa/src/**/*.{h,m,swift}'
end

@@ -1,54 +0,0 @@
dnl
dnl Check Bison version
dnl AC_PROG_BISON([MIN_VERSION=2.4])
dnl
dnl Will define BISON_USE_PARSER_H_EXTENSION if Automake is < 1.11
dnl for use with .h includes.
dnl
AC_DEFUN([AC_PROG_BISON], [
if test "x$1" = "x" ; then
bison_required_version="2.4"
else
bison_required_version="$1"
fi
AC_CHECK_PROG(have_prog_bison, [bison], [yes],[no])
AC_DEFINE_UNQUOTED([BISON_VERSION], [0.0], [Bison version if bison is not available])
#Do not use *.h extension for parser header files, use newer *.hh
bison_use_parser_h_extension=false
if test "$have_prog_bison" = "yes" ; then
AC_MSG_CHECKING([for bison version >= $bison_required_version])
bison_version=`bison --version | head -n 1 | cut '-d ' -f 4`
AC_DEFINE_UNQUOTED([BISON_VERSION], [$bison_version], [Defines bison version])
if test "$bison_version" \< "$bison_required_version" ; then
BISON=:
AC_MSG_RESULT([no])
AC_MSG_ERROR([Bison version $bison_required_version or higher must be installed on the system!])
else
AC_MSG_RESULT([yes])
BISON=bison
AC_SUBST(BISON)
#Verify automake version: with automake < 1.12 the yy parser headers are *.h, 1.12 and later use *.hh
automake_version=`automake --version | head -n 1 | cut '-d ' -f 4`
AC_DEFINE_UNQUOTED([AUTOMAKE_VERSION], [$automake_version], [Defines automake version])
if test "$automake_version" \< "1.12" ; then
#Use *.h extension for parser header file
bison_use_parser_h_extension=true
echo "Automake version < 1.12"
AC_DEFINE([BISON_USE_PARSER_H_EXTENSION], [1], [Use *.h extension for parser header file])
fi
fi
else
BISON=:
AC_MSG_RESULT([NO])
fi
AM_CONDITIONAL([BISON_USE_PARSER_H_EXTENSION], [test x$bison_use_parser_h_extension = xtrue])
AC_SUBST(BISON)
])

@@ -1,272 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_boost_base.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_BOOST_BASE([MINIMUM-VERSION], [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
#
# DESCRIPTION
#
# Test for the Boost C++ libraries of a particular version (or newer)
#
# If no path to the installed boost library is given the macro searches
# under /usr, /usr/local, /opt and /opt/local and evaluates the
# $BOOST_ROOT environment variable. Further documentation is available at
# <http://randspringer.de/boost/index.html>.
#
# This macro calls:
#
# AC_SUBST(BOOST_CPPFLAGS) / AC_SUBST(BOOST_LDFLAGS)
#
# And sets:
#
# HAVE_BOOST
#
# LICENSE
#
# Copyright (c) 2008 Thomas Porschberg <thomas@randspringer.de>
# Copyright (c) 2009 Peter Adolphs
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 23
AC_DEFUN([AX_BOOST_BASE],
[
AC_ARG_WITH([boost],
[AS_HELP_STRING([--with-boost@<:@=ARG@:>@],
[use Boost library from a standard location (ARG=yes),
from the specified location (ARG=<path>),
or disable it (ARG=no)
@<:@ARG=yes@:>@ ])],
[
if test "$withval" = "no"; then
want_boost="no"
elif test "$withval" = "yes"; then
want_boost="yes"
ac_boost_path=""
else
want_boost="yes"
ac_boost_path="$withval"
fi
],
[want_boost="yes"])
AC_ARG_WITH([boost-libdir],
AS_HELP_STRING([--with-boost-libdir=LIB_DIR],
[Force given directory for boost libraries. Note that this will override library path detection, so use this parameter only if default library detection fails and you know exactly where your boost libraries are located.]),
[
if test -d "$withval"
then
ac_boost_lib_path="$withval"
else
AC_MSG_ERROR(--with-boost-libdir expected directory name)
fi
],
[ac_boost_lib_path=""]
)
if test "x$want_boost" = "xyes"; then
boost_lib_version_req=ifelse([$1], ,1.20.0,$1)
boost_lib_version_req_shorten=`expr $boost_lib_version_req : '\([[0-9]]*\.[[0-9]]*\)'`
boost_lib_version_req_major=`expr $boost_lib_version_req : '\([[0-9]]*\)'`
boost_lib_version_req_minor=`expr $boost_lib_version_req : '[[0-9]]*\.\([[0-9]]*\)'`
boost_lib_version_req_sub_minor=`expr $boost_lib_version_req : '[[0-9]]*\.[[0-9]]*\.\([[0-9]]*\)'`
if test "x$boost_lib_version_req_sub_minor" = "x" ; then
boost_lib_version_req_sub_minor="0"
fi
WANT_BOOST_VERSION=`expr $boost_lib_version_req_major \* 100000 \+ $boost_lib_version_req_minor \* 100 \+ $boost_lib_version_req_sub_minor`
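dnl Worked example (illustration, not part of the original macro): a requested
dnl version of 1.53.0 yields 1*100000 + 53*100 + 0 = 105300, matching the
dnl BOOST_VERSION value that <boost/version.hpp> defines for Boost 1.53.0.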
AC_MSG_CHECKING(for boostlib >= $boost_lib_version_req)
succeeded=no
dnl On 64-bit systems check for system libraries in both lib64 and lib.
dnl The former is specified by FHS, but e.g. Debian does not adhere to
dnl this (as it raises problems for generic multi-arch support).
dnl The last entry in the list is chosen by default when no libraries
dnl are found, e.g. when only header-only libraries are installed!
libsubdirs="lib"
ax_arch=`uname -m`
case $ax_arch in
x86_64|ppc64|s390x|sparc64|aarch64)
libsubdirs="lib64 lib lib64"
;;
esac
dnl allow for real multi-arch paths e.g. /usr/lib/x86_64-linux-gnu. Give
dnl them priority over the other paths since, if libs are found there, they
dnl are almost assuredly the ones desired.
AC_REQUIRE([AC_CANONICAL_HOST])
libsubdirs="lib/${host_cpu}-${host_os} $libsubdirs"
case ${host_cpu} in
i?86)
libsubdirs="lib/i386-${host_os} $libsubdirs"
;;
esac
dnl first we check the system location for boost libraries
dnl this location is chosen if boost libraries are installed with the --layout=system option
dnl or if you install boost with RPM
if test "$ac_boost_path" != ""; then
BOOST_CPPFLAGS="-I$ac_boost_path/include"
for ac_boost_path_tmp in $libsubdirs; do
if test -d "$ac_boost_path"/"$ac_boost_path_tmp" ; then
BOOST_LDFLAGS="-L$ac_boost_path/$ac_boost_path_tmp"
break
fi
done
elif test "$cross_compiling" != yes; then
for ac_boost_path_tmp in $lt_sysroot/usr $lt_sysroot/usr/local $lt_sysroot/opt $lt_sysroot/opt/local ; do
if test -d "$ac_boost_path_tmp/include/boost" && test -r "$ac_boost_path_tmp/include/boost"; then
for libsubdir in $libsubdirs ; do
if ls "$ac_boost_path_tmp/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi
done
BOOST_LDFLAGS="-L$ac_boost_path_tmp/$libsubdir"
BOOST_CPPFLAGS="-I$ac_boost_path_tmp/include"
break;
fi
done
fi
dnl overwrite ld flags if we have required special directory with
dnl --with-boost-libdir parameter
if test "$ac_boost_lib_path" != ""; then
BOOST_LDFLAGS="-L$ac_boost_lib_path"
fi
CPPFLAGS_SAVED="$CPPFLAGS"
CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
export CPPFLAGS
LDFLAGS_SAVED="$LDFLAGS"
LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
export LDFLAGS
AC_REQUIRE([AC_PROG_CXX])
AC_LANG_PUSH(C++)
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
@%:@include <boost/version.hpp>
]], [[
#if BOOST_VERSION >= $WANT_BOOST_VERSION
// Everything is okay
#else
# error Boost version is too old
#endif
]])],[
AC_MSG_RESULT(yes)
succeeded=yes
found_system=yes
],[
])
AC_LANG_POP([C++])
dnl if we found no boost with system layout we search for boost libraries
dnl built and installed without the --layout=system option or for a staged(not installed) version
if test "x$succeeded" != "xyes"; then
_version=0
if test "$ac_boost_path" != ""; then
if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then
for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do
_version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'`
V_CHECK=`expr $_version_tmp \> $_version`
if test "$V_CHECK" = "1" ; then
_version=$_version_tmp
fi
VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'`
BOOST_CPPFLAGS="-I$ac_boost_path/include/boost-$VERSION_UNDERSCORE"
done
fi
else
if test "$cross_compiling" != yes; then
for ac_boost_path in $lt_sysroot/usr $lt_sysroot/usr/local $lt_sysroot/opt $lt_sysroot/opt/local ; do
if test -d "$ac_boost_path" && test -r "$ac_boost_path"; then
for i in `ls -d $ac_boost_path/include/boost-* 2>/dev/null`; do
_version_tmp=`echo $i | sed "s#$ac_boost_path##" | sed 's/\/include\/boost-//' | sed 's/_/./'`
V_CHECK=`expr $_version_tmp \> $_version`
if test "$V_CHECK" = "1" ; then
_version=$_version_tmp
best_path=$ac_boost_path
fi
done
fi
done
VERSION_UNDERSCORE=`echo $_version | sed 's/\./_/'`
BOOST_CPPFLAGS="-I$best_path/include/boost-$VERSION_UNDERSCORE"
if test "$ac_boost_lib_path" = ""; then
for libsubdir in $libsubdirs ; do
if ls "$best_path/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi
done
BOOST_LDFLAGS="-L$best_path/$libsubdir"
fi
fi
if test "x$BOOST_ROOT" != "x"; then
for libsubdir in $libsubdirs ; do
if ls "$BOOST_ROOT/stage/$libsubdir/libboost_"* >/dev/null 2>&1 ; then break; fi
done
if test -d "$BOOST_ROOT" && test -r "$BOOST_ROOT" && test -d "$BOOST_ROOT/stage/$libsubdir" && test -r "$BOOST_ROOT/stage/$libsubdir"; then
version_dir=`expr //$BOOST_ROOT : '.*/\(.*\)'`
stage_version=`echo $version_dir | sed 's/boost_//' | sed 's/_/./g'`
stage_version_shorten=`expr $stage_version : '\([[0-9]]*\.[[0-9]]*\)'`
V_CHECK=`expr $stage_version_shorten \>\= $_version`
if test "$V_CHECK" = "1" -a "$ac_boost_lib_path" = "" ; then
AC_MSG_NOTICE(We will use a staged boost library from $BOOST_ROOT)
BOOST_CPPFLAGS="-I$BOOST_ROOT"
BOOST_LDFLAGS="-L$BOOST_ROOT/stage/$libsubdir"
fi
fi
fi
fi
CPPFLAGS="$CPPFLAGS $BOOST_CPPFLAGS"
export CPPFLAGS
LDFLAGS="$LDFLAGS $BOOST_LDFLAGS"
export LDFLAGS
AC_LANG_PUSH(C++)
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[
@%:@include <boost/version.hpp>
]], [[
#if BOOST_VERSION >= $WANT_BOOST_VERSION
// Everything is okay
#else
# error Boost version is too old
#endif
]])],[
AC_MSG_RESULT(yes)
succeeded=yes
found_system=yes
],[
])
AC_LANG_POP([C++])
fi
if test "$succeeded" != "yes" ; then
if test "$_version" = "0" ; then
AC_MSG_NOTICE([[We could not detect the boost libraries (version $boost_lib_version_req_shorten or higher). If you have a staged boost library (still not installed) please specify \$BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in <boost/version.hpp>. See http://randspringer.de/boost for more documentation.]])
else
AC_MSG_NOTICE([Your boost libraries seem to be too old (version $_version).])
fi
# execute ACTION-IF-NOT-FOUND (if present):
ifelse([$3], , :, [$3])
else
AC_SUBST(BOOST_CPPFLAGS)
AC_SUBST(BOOST_LDFLAGS)
AC_DEFINE(HAVE_BOOST,,[define if the Boost library is available])
# execute ACTION-IF-FOUND (if present):
ifelse([$2], , :, [$2])
fi
CPPFLAGS="$CPPFLAGS_SAVED"
LDFLAGS="$LDFLAGS_SAVED"
fi
])

@@ -1,124 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_check_openssl.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_CHECK_OPENSSL([action-if-found[, action-if-not-found]])
#
# DESCRIPTION
#
# Look for OpenSSL in a number of default spots, or in a user-selected
# spot (via --with-openssl). Sets
#
# OPENSSL_INCLUDES to the include directives required
# OPENSSL_LIBS to the -l directives required
# OPENSSL_LDFLAGS to the -L or -R flags required
#
# and calls ACTION-IF-FOUND or ACTION-IF-NOT-FOUND appropriately
#
# This macro sets OPENSSL_INCLUDES such that source files should use the
# openssl/ directory in include directives:
#
# #include <openssl/hmac.h>
#
# LICENSE
#
# Copyright (c) 2009,2010 Zmanda Inc. <http://www.zmanda.com/>
# Copyright (c) 2009,2010 Dustin J. Mitchell <dustin@zmanda.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 8
AU_ALIAS([CHECK_SSL], [AX_CHECK_OPENSSL])
AC_DEFUN([AX_CHECK_OPENSSL], [
found=false
AC_ARG_WITH([openssl],
[AS_HELP_STRING([--with-openssl=DIR],
[root of the OpenSSL directory])],
[
case "$withval" in
"" | y | ye | yes | n | no)
AC_MSG_ERROR([Invalid --with-openssl value])
;;
*) ssldirs="$withval"
;;
esac
], [
# if pkg-config is installed and openssl has installed a .pc file,
# then use that information and don't search ssldirs
AC_PATH_PROG([PKG_CONFIG], [pkg-config])
if test x"$PKG_CONFIG" != x""; then
OPENSSL_LDFLAGS=`$PKG_CONFIG openssl --libs-only-L 2>/dev/null`
if test $? = 0; then
OPENSSL_LIBS=`$PKG_CONFIG openssl --libs-only-l 2>/dev/null`
OPENSSL_INCLUDES=`$PKG_CONFIG openssl --cflags-only-I 2>/dev/null`
found=true
fi
fi
# no such luck; use some default ssldirs
if ! $found; then
ssldirs="/usr/local/ssl /usr/lib/ssl /usr/ssl /usr/pkg /usr/local /usr"
fi
]
)
# note that we #include <openssl/foo.h>, so the OpenSSL headers have to be in
# an 'openssl' subdirectory
if ! $found; then
OPENSSL_INCLUDES=
for ssldir in $ssldirs; do
AC_MSG_CHECKING([for openssl/ssl.h in $ssldir])
if test -f "$ssldir/include/openssl/ssl.h"; then
OPENSSL_INCLUDES="-I$ssldir/include"
OPENSSL_LDFLAGS="-L$ssldir/lib"
OPENSSL_LIBS="-lssl -lcrypto"
found=true
AC_MSG_RESULT([yes])
break
else
AC_MSG_RESULT([no])
fi
done
# if the file wasn't found, well, go ahead and try the link anyway -- maybe
# it will just work!
fi
# try the preprocessor and linker with our new flags,
# being careful not to pollute the global LIBS, LDFLAGS, and CPPFLAGS
AC_MSG_CHECKING([whether compiling and linking against OpenSSL works])
echo "Trying link with OPENSSL_LDFLAGS=$OPENSSL_LDFLAGS;" \
"OPENSSL_LIBS=$OPENSSL_LIBS; OPENSSL_INCLUDES=$OPENSSL_INCLUDES" >&AS_MESSAGE_LOG_FD
save_LIBS="$LIBS"
save_LDFLAGS="$LDFLAGS"
save_CPPFLAGS="$CPPFLAGS"
LDFLAGS="$LDFLAGS $OPENSSL_LDFLAGS"
LIBS="$OPENSSL_LIBS $LIBS"
CPPFLAGS="$OPENSSL_INCLUDES $CPPFLAGS"
AC_LINK_IFELSE(
[AC_LANG_PROGRAM([#include <openssl/ssl.h>], [SSL_new(NULL)])],
[
AC_MSG_RESULT([yes])
$1
], [
AC_MSG_RESULT([no])
$2
])
CPPFLAGS="$save_CPPFLAGS"
LDFLAGS="$save_LDFLAGS"
LIBS="$save_LIBS"
AC_SUBST([OPENSSL_INCLUDES])
AC_SUBST([OPENSSL_LIBS])
AC_SUBST([OPENSSL_LDFLAGS])
])
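dnl Hypothetical usage sketch (not part of the original macro): in configure.ac,
dnl   AX_CHECK_OPENSSL([have_openssl=yes], [AC_MSG_ERROR([OpenSSL is required])])
dnl then add $(OPENSSL_INCLUDES), $(OPENSSL_LDFLAGS), and $(OPENSSL_LIBS) to the
dnl relevant per-target flags in Makefile.am.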

@@ -1,165 +0,0 @@
# ============================================================================
# http://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx_11.html
# ============================================================================
#
# SYNOPSIS
#
# AX_CXX_COMPILE_STDCXX_11([ext|noext],[mandatory|optional])
#
# DESCRIPTION
#
# Check for baseline language coverage in the compiler for the C++11
# standard; if necessary, add switches to CXXFLAGS to enable support.
#
# The first argument, if specified, indicates whether you insist on an
# extended mode (e.g. -std=gnu++11) or a strict conformance mode (e.g.
# -std=c++11). If neither is specified, you get whatever works, with
# preference for an extended mode.
#
# The second argument, if specified 'mandatory' or if left unspecified,
# indicates that baseline C++11 support is required and that the macro
# should error out if no mode with that support is found. If specified
# 'optional', then configuration proceeds regardless, after defining
# HAVE_CXX11 if and only if a supporting mode is found.
#
# LICENSE
#
# Copyright (c) 2008 Benjamin Kosnik <bkoz@redhat.com>
# Copyright (c) 2012 Zack Weinberg <zackw@panix.com>
# Copyright (c) 2013 Roy Stogner <roystgnr@ices.utexas.edu>
# Copyright (c) 2014, 2015 Google Inc.; contributed by Alexey Sokolov <sokolov@google.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 10
m4_define([_AX_CXX_COMPILE_STDCXX_11_testbody], [[
template <typename T>
struct check
{
static_assert(sizeof(int) <= sizeof(T), "not big enough");
};
struct Base {
virtual void f() {}
};
struct Child : public Base {
virtual void f() override {}
};
typedef check<check<bool>> right_angle_brackets;
int a;
decltype(a) b;
typedef check<int> check_type;
check_type c;
check_type&& cr = static_cast<check_type&&>(c);
auto d = a;
auto l = [](){};
// Prevent Clang error: unused variable 'l' [-Werror,-Wunused-variable]
struct use_l { use_l() { l(); } };
// http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae
// Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function because of this
namespace test_template_alias_sfinae {
struct foo {};
template<typename T>
using member = typename T::member_type;
template<typename T>
void func(...) {}
template<typename T>
void func(member<T>*) {}
void test();
void test() {
func<foo>(0);
}
}
]])
AC_DEFUN([AX_CXX_COMPILE_STDCXX_11], [dnl
m4_if([$1], [], [],
[$1], [ext], [],
[$1], [noext], [],
[m4_fatal([invalid argument `$1' to AX_CXX_COMPILE_STDCXX_11])])dnl
m4_if([$2], [], [ax_cxx_compile_cxx11_required=true],
[$2], [mandatory], [ax_cxx_compile_cxx11_required=true],
[$2], [optional], [ax_cxx_compile_cxx11_required=false],
[m4_fatal([invalid second argument `$2' to AX_CXX_COMPILE_STDCXX_11])])
AC_LANG_PUSH([C++])dnl
ac_success=no
AC_CACHE_CHECK(whether $CXX supports C++11 features by default,
ax_cv_cxx_compile_cxx11,
[AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_11_testbody])],
[ax_cv_cxx_compile_cxx11=yes],
[ax_cv_cxx_compile_cxx11=no])])
if test x$ax_cv_cxx_compile_cxx11 = xyes; then
ac_success=yes
fi
m4_if([$1], [noext], [], [dnl
if test x$ac_success = xno; then
for switch in -std=gnu++11 -std=gnu++0x; do
cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx11_$switch])
AC_CACHE_CHECK(whether $CXX supports C++11 features with $switch,
$cachevar,
[ac_save_CXXFLAGS="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS $switch"
AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_11_testbody])],
[eval $cachevar=yes],
[eval $cachevar=no])
CXXFLAGS="$ac_save_CXXFLAGS"])
if eval test x\$$cachevar = xyes; then
CXXFLAGS="$CXXFLAGS $switch"
ac_success=yes
break
fi
done
fi])
m4_if([$1], [ext], [], [dnl
if test x$ac_success = xno; then
for switch in -std=c++11 -std=c++0x; do
cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx11_$switch])
AC_CACHE_CHECK(whether $CXX supports C++11 features with $switch,
$cachevar,
[ac_save_CXXFLAGS="$CXXFLAGS"
CXXFLAGS="$CXXFLAGS $switch"
AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_11_testbody])],
[eval $cachevar=yes],
[eval $cachevar=no])
CXXFLAGS="$ac_save_CXXFLAGS"])
if eval test x\$$cachevar = xyes; then
CXXFLAGS="$CXXFLAGS $switch"
ac_success=yes
break
fi
done
fi])
AC_LANG_POP([C++])
if test x$ax_cxx_compile_cxx11_required = xtrue; then
if test x$ac_success = xno; then
AC_MSG_ERROR([*** A compiler with support for C++11 language features is required.])
fi
else
if test x$ac_success = xno; then
HAVE_CXX11=0
AC_MSG_NOTICE([No compiler with C++11 support was found])
else
HAVE_CXX11=1
AC_DEFINE(HAVE_CXX11,1,
[define if the compiler supports basic C++11 syntax])
fi
AC_SUBST(HAVE_CXX11)
fi
])

@@ -1,107 +0,0 @@
dnl @synopsis AX_DMD
dnl
dnl Test for the presence of a DMD-compatible D2 compiler, and (optionally)
dnl specified modules on the import path.
dnl
dnl If "DMD" is defined in the environment, that will be the only
dnl dmd command tested. Otherwise, a hard-coded list will be used.
dnl
dnl After AX_DMD runs, the shell variables "success" and "ax_dmd" are set to
dnl "yes" or "no", and "DMD" is set to the appropriate command. Furthermore,
dnl "dmd_optlink" will be set to "yes" or "no" depending on whether OPTLINK is
dnl used as the linker (DMD/Windows), and "dmd_of_dirsep" will be set to the
dnl directory separator to use when passing -of to DMD (OPTLINK requires a
dnl backslash).
dnl
dnl AX_CHECK_D_MODULE must be run after AX_DMD. It tests for the presence of a
dnl module in the import path of the chosen compiler, and sets the shell
dnl variable "success" to "yes" or "no".
dnl
dnl @category D
dnl @version 2011-05-31
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copyright (C) 2011 David Nadlinger
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
AC_DEFUN([AX_DMD],
[
dnl Hard-coded default commands to test.
DMD_PROGS="dmd,gdmd,ldmd"
dnl Allow the user to specify an alternative.
if test -n "$DMD" ; then
DMD_PROGS="$DMD"
fi
AC_MSG_CHECKING(for DMD)
# std.algorithm as a quick way to check for D2/Phobos.
echo "import std.algorithm; void main() {}" > configtest_ax_dmd.d
success=no
oIFS="$IFS"
IFS=","
for DMD in $DMD_PROGS ; do
IFS="$oIFS"
echo "Running \"$DMD configtest_ax_dmd.d\"" >&AS_MESSAGE_LOG_FD
if $DMD configtest_ax_dmd.d >&AS_MESSAGE_LOG_FD 2>&1 ; then
success=yes
break
fi
done
if test "$success" != "yes" ; then
AC_MSG_RESULT(no)
DMD=""
else
AC_MSG_RESULT(yes)
fi
ax_dmd="$success"
# Test whether OPTLINK is used by trying if DMD accepts -L/? without
# erroring out.
if test "$success" == "yes" ; then
AC_MSG_CHECKING(whether DMD uses OPTLINK)
echo "Running \”$DMD -L/? configtest_ax_dmd.d\"" >&AS_MESSAGE_LOG_FD
if $DMD -L/? configtest_ax_dmd.d >&AS_MESSAGE_LOG_FD 2>&1 ; then
AC_MSG_RESULT(yes)
dmd_optlink="yes"
# This actually produces double slashes in the final configure
# output, but at least it works.
dmd_of_dirsep="\\\\"
else
AC_MSG_RESULT(no)
dmd_optlink="no"
dmd_of_dirsep="/"
fi
fi
rm -f configtest_ax_dmd*
])
AC_DEFUN([AX_CHECK_D_MODULE],
[
AC_MSG_CHECKING(for D module [$1])
echo "import $1; void main() {}" > configtest_ax_dmd.d
echo "Running \"$DMD configtest_ax_dmd.d\"" >&AS_MESSAGE_LOG_FD
if $DMD -c configtest_ax_dmd.d >&AS_MESSAGE_LOG_FD 2>&1 ; then
AC_MSG_RESULT(yes)
success=yes
else
AC_MSG_RESULT(no)
success=no
fi
rm -f configtest_ax_dmd*
])
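dnl Hypothetical usage sketch (not part of the original file):
dnl   AX_DMD
dnl   if test "$ax_dmd" = "yes" ; then
dnl     AX_CHECK_D_MODULE([std.socket])
dnl   fi
dnl After the module check, "$success" reports whether std.socket was found.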

@@ -1,129 +0,0 @@
dnl @synopsis AX_JAVAC_AND_JAVA
dnl @synopsis AX_CHECK_JAVA_CLASS(CLASSNAME)
dnl
dnl Test for the presence of a JDK, and (optionally) specific classes.
dnl
dnl If "JAVA" is defined in the environment, that will be the only
dnl java command tested. Otherwise, a hard-coded list will be used.
dnl Similarly for "JAVAC".
dnl
dnl AX_JAVAC_AND_JAVA does not currently support testing for a particular
dnl Java version, testing for only one of "java" and "javac", or
dnl compiling or running user-provided Java code.
dnl
dnl After AX_JAVAC_AND_JAVA runs, the shell variables "success" and
dnl "ax_javac_and_java" are set to "yes" or "no", and "JAVAC" and
dnl "JAVA" are set to the appropriate commands.
dnl
dnl AX_CHECK_JAVA_CLASS must be run after AX_JAVAC_AND_JAVA.
dnl It tests for the presence of a class based on a fully-qualified name.
dnl It sets the shell variable "success" to "yes" or "no".
dnl
dnl @category Java
dnl @version 2009-02-09
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
AC_DEFUN([AX_JAVAC_AND_JAVA],
[
dnl Hard-coded default commands to test.
JAVAC_PROGS="javac,jikes,gcj -C"
JAVA_PROGS="java,kaffe"
dnl Allow the user to specify an alternative.
if test -n "$JAVAC" ; then
JAVAC_PROGS="$JAVAC"
fi
if test -n "$JAVA" ; then
JAVA_PROGS="$JAVA"
fi
AC_MSG_CHECKING(for javac and java)
echo "public class configtest_ax_javac_and_java { public static void main(String args@<:@@:>@) { } }" > configtest_ax_javac_and_java.java
success=no
oIFS="$IFS"
IFS=","
for JAVAC in $JAVAC_PROGS ; do
IFS="$oIFS"
echo "Running \"$JAVAC configtest_ax_javac_and_java.java\"" >&AS_MESSAGE_LOG_FD
if $JAVAC configtest_ax_javac_and_java.java >&AS_MESSAGE_LOG_FD 2>&1 ; then
# prevent $JAVA VM issues with UTF-8 path names (THRIFT-3271)
oLC_ALL="$LC_ALL"
LC_ALL=""
IFS=","
for JAVA in $JAVA_PROGS ; do
IFS="$oIFS"
echo "Running \"$JAVA configtest_ax_javac_and_java\"" >&AS_MESSAGE_LOG_FD
if $JAVA configtest_ax_javac_and_java >&AS_MESSAGE_LOG_FD 2>&1 ; then
success=yes
break 2
fi
done
# restore LC_ALL
LC_ALL="$oLC_ALL"
oLC_ALL=""
fi
done
rm -f configtest_ax_javac_and_java.java configtest_ax_javac_and_java.class
if test "$success" != "yes" ; then
AC_MSG_RESULT(no)
JAVAC=""
JAVA=""
else
AC_MSG_RESULT(yes)
fi
ax_javac_and_java="$success"
])
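dnl Hypothetical usage sketch (not part of the original file):
dnl   AX_JAVAC_AND_JAVA
dnl   if test "$ax_javac_and_java" = "yes" ; then
dnl     AX_CHECK_JAVA_CLASS([javax.net.ssl.SSLContext])
dnl   fi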
AC_DEFUN([AX_CHECK_JAVA_CLASS],
[
AC_MSG_CHECKING(for Java class [$1])
echo "import $1; public class configtest_ax_javac_and_java { public static void main(String args@<:@@:>@) { } }" > configtest_ax_javac_and_java.java
echo "Running \"$JAVAC configtest_ax_javac_and_java.java\"" >&AS_MESSAGE_LOG_FD
if $JAVAC configtest_ax_javac_and_java.java >&AS_MESSAGE_LOG_FD 2>&1 ; then
AC_MSG_RESULT(yes)
success=yes
else
AC_MSG_RESULT(no)
success=no
fi
rm -f configtest_ax_javac_and_java.java configtest_ax_javac_and_java.class
])
AC_DEFUN([AX_CHECK_ANT_VERSION],
[
AC_MSG_CHECKING(for ant version >= $2)
ANT_VALID=`expr $($1 -version 2>/dev/null | sed -n 's/.*version \(@<:@0-9\.@:>@*\).*/\1/p') \>= $2`
if test "x$ANT_VALID" = "x1" ; then
AC_MSG_RESULT(yes)
else
AC_MSG_RESULT(no)
ANT=""
fi
])
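dnl Hypothetical usage sketch: AX_CHECK_ANT_VERSION([$ANT], [1.7]) extracts the
dnl version from "$ANT -version" and clears $ANT when it is older than 1.7.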

@@ -1,194 +0,0 @@
dnl @synopsis AX_LIB_EVENT([MINIMUM-VERSION])
dnl
dnl Test for the libevent library of a particular version (or newer).
dnl
dnl If no path to the installed libevent is given, the macro will first try
dnl using no -I or -L flags, then searches under /usr, /usr/local, /opt,
dnl and /opt/libevent.
dnl If these all fail, it will try the $LIBEVENT_ROOT environment variable.
dnl
dnl This macro requires that #include <sys/types.h> works and defines u_char.
dnl
dnl This macro calls:
dnl AC_SUBST(LIBEVENT_CPPFLAGS)
dnl AC_SUBST(LIBEVENT_LDFLAGS)
dnl AC_SUBST(LIBEVENT_LIBS)
dnl
dnl And (if libevent is found):
dnl AC_DEFINE(HAVE_LIBEVENT)
dnl
dnl It also leaves the shell variables "success" and "ax_have_libevent"
dnl set to "yes" or "no".
dnl
dnl NOTE: This macro does not currently work for cross-compiling,
dnl but it can be easily modified to allow it. (grep "cross").
dnl
dnl @category InstalledPackages
dnl @category C
dnl @version 2007-09-12
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
dnl Input: ax_libevent_path, WANT_LIBEVENT_VERSION
dnl Output: success=yes/no
AC_DEFUN([AX_LIB_EVENT_DO_CHECK],
[
# Save our flags.
CPPFLAGS_SAVED="$CPPFLAGS"
LDFLAGS_SAVED="$LDFLAGS"
LIBS_SAVED="$LIBS"
LD_LIBRARY_PATH_SAVED="$LD_LIBRARY_PATH"
# Set our flags if we are checking a specific directory.
if test -n "$ax_libevent_path" ; then
LIBEVENT_CPPFLAGS="-I$ax_libevent_path/include"
LIBEVENT_LDFLAGS="-L$ax_libevent_path/lib"
LD_LIBRARY_PATH="$ax_libevent_path/lib:$LD_LIBRARY_PATH"
else
LIBEVENT_CPPFLAGS=""
LIBEVENT_LDFLAGS=""
fi
# Required flag for libevent.
LIBEVENT_LIBS="-levent"
# Prepare the environment for compilation.
CPPFLAGS="$CPPFLAGS $LIBEVENT_CPPFLAGS"
LDFLAGS="$LDFLAGS $LIBEVENT_LDFLAGS"
LIBS="$LIBS $LIBEVENT_LIBS"
export CPPFLAGS
export LDFLAGS
export LIBS
export LD_LIBRARY_PATH
success=no
# Compile, link, and run the program. This checks:
# - event.h is available for including.
# - event_get_version() is available for linking.
# - The event version string is lexicographically greater
# than the required version.
AC_LANG_PUSH([C])
dnl This can be changed to AC_LINK_IFELSE if you are cross-compiling,
dnl but then the version cannot be checked.
AC_LINK_IFELSE([AC_LANG_PROGRAM([[
#include <sys/types.h>
#include <event.h>
]], [[
const char* lib_version = event_get_version();
const char* wnt_version = "$WANT_LIBEVENT_VERSION";
int lib_digits;
int wnt_digits;
for (;;) {
/* If we reached the end of the want version. We have it. */
if (*wnt_version == '\0' || *wnt_version == '-') {
return 0;
}
/* If the want version continues but the lib version does not, */
/* we are missing a letter. We don't have it. */
if (*lib_version == '\0' || *lib_version == '-') {
return 1;
}
/* In the 1.4 version numbering style, if there are more digits */
/* in one version than the other, that one is higher. */
for (lib_digits = 0;
lib_version[lib_digits] >= '0' &&
lib_version[lib_digits] <= '9';
lib_digits++)
;
for (wnt_digits = 0;
wnt_version[wnt_digits] >= '0' &&
wnt_version[wnt_digits] <= '9';
wnt_digits++)
;
if (lib_digits > wnt_digits) {
return 0;
}
if (lib_digits < wnt_digits) {
return 1;
}
/* If we have greater than what we want. We have it. */
if (*lib_version > *wnt_version) {
return 0;
}
/* If we have less, we don't. */
if (*lib_version < *wnt_version) {
return 1;
}
lib_version++;
wnt_version++;
}
return 0;
]])], [
success=yes
])
AC_LANG_POP([C])
# Restore flags.
CPPFLAGS="$CPPFLAGS_SAVED"
LDFLAGS="$LDFLAGS_SAVED"
LIBS="$LIBS_SAVED"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH_SAVED"
])
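dnl Worked example (illustration, not part of the original macro): with
dnl WANT_LIBEVENT_VERSION=1.4, an installed libevent reporting "1.4.13-stable"
dnl is accepted (the wanted string is exhausted after "1.4" matches), while
dnl "1.3e" is rejected when the character compare hits '3' < '4'.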
AC_DEFUN([AX_LIB_EVENT],
[
dnl Allow search path to be overridden on the command line.
AC_ARG_WITH([libevent],
AS_HELP_STRING([--with-libevent@<:@=DIR@:>@], [use libevent [default=yes]. Optionally specify the root prefix dir where libevent is installed]),
[
if test "x$withval" = "xno"; then
want_libevent="no"
elif test "x$withval" = "xyes"; then
want_libevent="yes"
ax_libevent_path=""
else
want_libevent="yes"
ax_libevent_path="$withval"
fi
],
[ want_libevent="yes" ; ax_libevent_path="" ])
if test "$want_libevent" = "yes"; then
WANT_LIBEVENT_VERSION=ifelse([$1], ,1.2,$1)
AC_MSG_CHECKING(for libevent >= $WANT_LIBEVENT_VERSION)
# Run tests.
if test -n "$ax_libevent_path"; then
AX_LIB_EVENT_DO_CHECK
else
for ax_libevent_path in "" $lt_sysroot/usr $lt_sysroot/usr/local $lt_sysroot/opt $lt_sysroot/opt/local $lt_sysroot/opt/libevent "$LIBEVENT_ROOT" ; do
AX_LIB_EVENT_DO_CHECK
if test "$success" = "yes"; then
break;
fi
done
fi
if test "$success" != "yes" ; then
AC_MSG_RESULT(no)
LIBEVENT_CPPFLAGS=""
LIBEVENT_LDFLAGS=""
LIBEVENT_LIBS=""
else
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_LIBEVENT,,[define if libevent is available])
ax_have_libevent_[]m4_translit([$1], [.], [_])="yes"
fi
ax_have_libevent="$success"
AC_SUBST(LIBEVENT_CPPFLAGS)
AC_SUBST(LIBEVENT_LDFLAGS)
AC_SUBST(LIBEVENT_LIBS)
fi
])

@@ -1,173 +0,0 @@
dnl @synopsis AX_LIB_ZLIB([MINIMUM-VERSION])
dnl
dnl Test for the libz library of a particular version (or newer).
dnl
dnl If no path to the installed zlib is given, the macro will first try
dnl using no -I or -L flags, then searches under /usr, /usr/local, /opt,
dnl and /opt/zlib.
dnl If these all fail, it will try the $ZLIB_ROOT environment variable.
dnl
dnl This macro calls:
dnl AC_SUBST(ZLIB_CPPFLAGS)
dnl AC_SUBST(ZLIB_LDFLAGS)
dnl AC_SUBST(ZLIB_LIBS)
dnl
dnl And (if zlib is found):
dnl AC_DEFINE(HAVE_ZLIB)
dnl
dnl It also leaves the shell variables "success" and "ax_have_zlib"
dnl set to "yes" or "no".
dnl
dnl NOTE: This macro does not currently work for cross-compiling,
dnl but it can be easily modified to allow it. (grep "cross").
dnl
dnl @category InstalledPackages
dnl @category C
dnl @version 2007-09-12
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
dnl Input: ax_zlib_path, WANT_ZLIB_VERSION
dnl Output: success=yes/no
AC_DEFUN([AX_LIB_ZLIB_DO_CHECK],
[
# Save our flags.
CPPFLAGS_SAVED="$CPPFLAGS"
LDFLAGS_SAVED="$LDFLAGS"
LIBS_SAVED="$LIBS"
LD_LIBRARY_PATH_SAVED="$LD_LIBRARY_PATH"
# Set our flags if we are checking a specific directory.
if test -n "$ax_zlib_path" ; then
ZLIB_CPPFLAGS="-I$ax_zlib_path/include"
ZLIB_LDFLAGS="-L$ax_zlib_path/lib"
LD_LIBRARY_PATH="$ax_zlib_path/lib:$LD_LIBRARY_PATH"
else
ZLIB_CPPFLAGS=""
ZLIB_LDFLAGS=""
fi
# Required flag for zlib.
ZLIB_LIBS="-lz"
# Prepare the environment for compilation.
CPPFLAGS="$CPPFLAGS $ZLIB_CPPFLAGS"
LDFLAGS="$LDFLAGS $ZLIB_LDFLAGS"
LIBS="$LIBS $ZLIB_LIBS"
export CPPFLAGS
export LDFLAGS
export LIBS
export LD_LIBRARY_PATH
success=no
# Compile, link, and run the program. This checks:
# - zlib.h is available for including.
# - zlibVersion() is available for linking.
# - ZLIB_VERNUM is greater than or equal to the desired version.
# - ZLIB_VERSION (defined in zlib.h) matches zlibVersion()
# (defined in the library).
AC_LANG_PUSH([C])
dnl This can be changed to AC_LINK_IFELSE if you are cross-compiling.
AC_LINK_IFELSE([AC_LANG_PROGRAM([[
#include <zlib.h>
#if ZLIB_VERNUM >= 0x$WANT_ZLIB_VERSION
#else
# error zlib is too old
#endif
]], [[
const char* lib_version = zlibVersion();
const char* hdr_version = ZLIB_VERSION;
for (;;) {
if (*lib_version != *hdr_version) {
/* If this happens, your zlib header doesn't match your zlib */
/* library. That is really bad. */
return 1;
}
if (*lib_version == '\0') {
break;
}
lib_version++;
hdr_version++;
}
return 0;
]])], [
success=yes
])
AC_LANG_POP([C])
# Restore flags.
CPPFLAGS="$CPPFLAGS_SAVED"
LDFLAGS="$LDFLAGS_SAVED"
LIBS="$LIBS_SAVED"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH_SAVED"
])
AC_DEFUN([AX_LIB_ZLIB],
[
dnl Allow search path to be overridden on the command line.
AC_ARG_WITH([zlib],
AS_HELP_STRING([--with-zlib@<:@=DIR@:>@], [use zlib (default is yes) - it is possible to specify an alternate root directory for zlib]),
[
if test "x$withval" = "xno"; then
want_zlib="no"
elif test "x$withval" = "xyes"; then
want_zlib="yes"
ax_zlib_path=""
else
want_zlib="yes"
ax_zlib_path="$withval"
fi
],
[want_zlib="yes" ; ax_zlib_path="" ])
if test "$want_zlib" = "yes"; then
# Parse out the version.
zlib_version_req=ifelse([$1], ,1.2.3,$1)
zlib_version_req_major=`expr $zlib_version_req : '\([[0-9]]*\)'`
zlib_version_req_minor=`expr $zlib_version_req : '[[0-9]]*\.\([[0-9]]*\)'`
zlib_version_req_patch=`expr $zlib_version_req : '[[0-9]]*\.[[0-9]]*\.\([[0-9]]*\)'`
if test -z "$zlib_version_req_patch" ; then
zlib_version_req_patch="0"
fi
WANT_ZLIB_VERSION=`expr $zlib_version_req_major \* 1000 \+ $zlib_version_req_minor \* 100 \+ $zlib_version_req_patch \* 10`
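dnl Worked example (illustration, not part of the original macro): the default
dnl 1.2.3 yields 1*1000 + 2*100 + 3*10 = 1230, and the DO_CHECK program above
dnl compares ZLIB_VERNUM against 0x1230, zlib's hex encoding of version 1.2.3.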
AC_MSG_CHECKING(for zlib >= $zlib_version_req)
# Run tests.
if test -n "$ax_zlib_path"; then
AX_LIB_ZLIB_DO_CHECK
else
for ax_zlib_path in "" /usr /usr/local /opt /opt/zlib "$ZLIB_ROOT" ; do
AX_LIB_ZLIB_DO_CHECK
if test "$success" = "yes"; then
break;
fi
done
fi
if test "$success" != "yes" ; then
AC_MSG_RESULT(no)
ZLIB_CPPFLAGS=""
ZLIB_LDFLAGS=""
ZLIB_LIBS=""
else
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_ZLIB,,[define if zlib is available])
fi
ax_have_zlib="$success"
AC_SUBST(ZLIB_CPPFLAGS)
AC_SUBST(ZLIB_LDFLAGS)
AC_SUBST(ZLIB_LIBS)
fi
])

@@ -1,664 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_lua.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_PROG_LUA[([MINIMUM-VERSION], [TOO-BIG-VERSION], [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])]
# AX_LUA_HEADERS[([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])]
# AX_LUA_LIBS[([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])]
# AX_LUA_READLINE[([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])]
#
# DESCRIPTION
#
# Detect a Lua interpreter, optionally specifying a minimum and maximum
# version number. Set up important Lua paths, such as the directories in
# which to install scripts and modules (shared libraries).
#
# Also detect Lua headers and libraries. The Lua version contained in the
# header is checked to match the Lua interpreter version exactly. When
# searching for Lua libraries, the version number is used as a suffix.
# This is done with the goal of supporting multiple Lua installs (5.1,
# 5.2, and 5.3 side-by-side).
#
# A note on compatibility with previous versions: This file has been
# mostly rewritten for serial 18. Most developers should be able to use
# these macros without needing to modify configure.ac. Care has been taken
# to preserve each macro's behavior, but there are some differences:
#
# 1) AX_WITH_LUA is deprecated; it now expands to the exact same thing as
# AX_PROG_LUA with no arguments.
#
# 2) AX_LUA_HEADERS now checks that the version number defined in lua.h
# matches the interpreter version. AX_LUA_HEADERS_VERSION is therefore
# unnecessary, so it is deprecated and does not expand to anything.
#
# 3) The configure flag --with-lua-suffix no longer exists; the user
# should instead specify the LUA precious variable on the command line.
# See the AX_PROG_LUA description for details.
#
# Please read the macro descriptions below for more information.
#
# This file was inspired by Andrew Dalke's and James Henstridge's
# python.m4 and Tom Payne's, Matthieu Moy's, and Reuben Thomas's ax_lua.m4
# (serial 17). Basically, this file is a mash-up of those two files. I
# like to think it combines the best of the two!
#
# AX_PROG_LUA: Search for the Lua interpreter, and set up important Lua
# paths. Adds precious variable LUA, which may contain the path of the Lua
# interpreter. If LUA is blank, the user's path is searched for an
# suitable interpreter.
#
# If MINIMUM-VERSION is supplied, then only Lua interpreters with a
# version number greater or equal to MINIMUM-VERSION will be accepted. If
# TOO-BIG-VERSION is also supplied, then only Lua interpreters with a
# version number greater or equal to MINIMUM-VERSION and less than
# TOO-BIG-VERSION will be accepted.
#
# The Lua version number, LUA_VERSION, is found from the interpreter, and
# substituted. LUA_PLATFORM is also found, but not currently supported (no
# standard representation).
#
# Finally, the macro finds four paths:
#
# luadir Directory to install Lua scripts.
# pkgluadir $luadir/$PACKAGE
# luaexecdir Directory to install Lua modules.
# pkgluaexecdir $luaexecdir/$PACKAGE
#
# These paths are found based on $prefix, $exec_prefix, Lua's
# package.path, and package.cpath. The first path of package.path
# beginning with $prefix is selected as luadir. The first path of
# package.cpath beginning with $exec_prefix is used as luaexecdir. This
# should work on all reasonable Lua installations. If a path cannot be
# determined, a default path is used. Of course, the user can override
# these later when invoking make.
#
# luadir Default: $prefix/share/lua/$LUA_VERSION
# luaexecdir Default: $exec_prefix/lib/lua/$LUA_VERSION
#
# These directories can be used by Automake as install destinations. The
# variable name minus 'dir' needs to be used as a prefix to the
# appropriate Automake primary, e.g. lua_SCRIPTS or luaexec_LIBRARIES.
#
# If an acceptable Lua interpreter is found, then ACTION-IF-FOUND is
# performed, otherwise ACTION-IF-NOT-FOUND is performed. If ACTION-IF-NOT-
# FOUND is blank, then it will default to printing an error. To prevent
# the default behavior, give ':' as an action.
#
# AX_LUA_HEADERS: Search for Lua headers. Requires that AX_PROG_LUA be
# expanded before this macro. Adds precious variable LUA_INCLUDE, which
# may contain Lua specific include flags, e.g. -I/usr/include/lua5.1. If
# LUA_INCLUDE is blank, then this macro will attempt to find suitable
# flags.
#
# LUA_INCLUDE can be used by Automake to compile Lua modules or
# executables with embedded interpreters. The *_CPPFLAGS variables should
# be used for this purpose, e.g. myprog_CPPFLAGS = $(LUA_INCLUDE).
#
# This macro searches for the header lua.h (and others). The search is
# performed with a combination of CPPFLAGS, CPATH, etc, and LUA_INCLUDE.
# If the search is unsuccessful, then some common directories are tried.
# If the headers are then found, then LUA_INCLUDE is set accordingly.
#
# The paths automatically searched are:
#
# * /usr/include/luaX.Y
# * /usr/include/lua/X.Y
# * /usr/include/luaXY
# * /usr/local/include/luaX.Y
# * /usr/local/include/lua-X.Y
# * /usr/local/include/lua/X.Y
# * /usr/local/include/luaXY
#
# (Where X.Y is the Lua version number, e.g. 5.1.)
#
# The Lua version number found in the headers is always checked to match
# the Lua interpreter's version number. Lua headers with mismatched
# version numbers are not accepted.
#
# If headers are found, then ACTION-IF-FOUND is performed, otherwise
# ACTION-IF-NOT-FOUND is performed. If ACTION-IF-NOT-FOUND is blank, then
# it will default to printing an error. To prevent the default behavior,
# set the action to ':'.
#
# AX_LUA_LIBS: Search for Lua libraries. Requires that AX_PROG_LUA be
# expanded before this macro. Adds precious variable LUA_LIB, which may
# contain Lua specific linker flags, e.g. -llua5.1. If LUA_LIB is blank,
# then this macro will attempt to find suitable flags.
#
# LUA_LIB can be used by Automake to link Lua modules or executables with
# embedded interpreters. The *_LIBADD and *_LDADD variables should be used
# for this purpose, e.g. mymod_LIBADD = $(LUA_LIB).
#
# This macro searches for the Lua library. More technically, it searches
# for a library containing the function lua_load. The search is performed
# with a combination of LIBS, LIBRARY_PATH, and LUA_LIB.
#
# If the search determines that some linker flags are missing, then those
# flags will be added to LUA_LIB.
#
# If libraries are found, then ACTION-IF-FOUND is performed, otherwise
# ACTION-IF-NOT-FOUND is performed. If ACTION-IF-NOT-FOUND is blank, then
# it will default to printing an error. To prevent the default behavior,
# set the action to ':'.
#
# AX_LUA_READLINE: Search for readline headers and libraries. Requires the
# AX_LIB_READLINE macro, which is provided by ax_lib_readline.m4 from the
# Autoconf Archive.
#
# If a readline compatible library is found, then ACTION-IF-FOUND is
# performed, otherwise ACTION-IF-NOT-FOUND is performed.
#
# LICENSE
#
# Copyright (c) 2015 Reuben Thomas <rrt@sc3d.org>
# Copyright (c) 2014 Tim Perkins <tprk77@gmail.com>
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 39
dnl =========================================================================
dnl AX_PROG_LUA([MINIMUM-VERSION], [TOO-BIG-VERSION],
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl =========================================================================
AC_DEFUN([AX_PROG_LUA],
[
dnl Check for required tools.
AC_REQUIRE([AC_PROG_GREP])
AC_REQUIRE([AC_PROG_SED])
dnl Make LUA a precious variable.
AC_ARG_VAR([LUA], [The Lua interpreter, e.g. /usr/bin/lua5.1])
dnl Find a Lua interpreter.
m4_define_default([_AX_LUA_INTERPRETER_LIST],
[lua lua5.3 lua53 lua5.2 lua52 lua5.1 lua51 lua50])
m4_if([$1], [],
[ dnl No version check is needed. Find any Lua interpreter.
AS_IF([test "x$LUA" = 'x'],
[AC_PATH_PROGS([LUA], [_AX_LUA_INTERPRETER_LIST], [:])])
ax_display_LUA='lua'
AS_IF([test "x$LUA" != 'x:'],
[ dnl At least check if this is a Lua interpreter.
AC_MSG_CHECKING([if $LUA is a Lua interpreter])
_AX_LUA_CHK_IS_INTRP([$LUA],
[AC_MSG_RESULT([yes])],
[ AC_MSG_RESULT([no])
AC_MSG_ERROR([not a Lua interpreter])
])
])
],
[ dnl A version check is needed.
AS_IF([test "x$LUA" != 'x'],
[ dnl Check if this is a Lua interpreter.
AC_MSG_CHECKING([if $LUA is a Lua interpreter])
_AX_LUA_CHK_IS_INTRP([$LUA],
[AC_MSG_RESULT([yes])],
[ AC_MSG_RESULT([no])
AC_MSG_ERROR([not a Lua interpreter])
])
dnl Check the version.
m4_if([$2], [],
[_ax_check_text="whether $LUA version >= $1"],
[_ax_check_text="whether $LUA version >= $1, < $2"])
AC_MSG_CHECKING([$_ax_check_text])
_AX_LUA_CHK_VER([$LUA], [$1], [$2],
[AC_MSG_RESULT([yes])],
[ AC_MSG_RESULT([no])
AC_MSG_ERROR([version is out of range for specified LUA])])
ax_display_LUA=$LUA
],
[ dnl Try each interpreter until we find one that satisfies VERSION.
m4_if([$2], [],
[_ax_check_text="for a Lua interpreter with version >= $1"],
[_ax_check_text="for a Lua interpreter with version >= $1, < $2"])
AC_CACHE_CHECK([$_ax_check_text],
[ax_cv_pathless_LUA],
[ for ax_cv_pathless_LUA in _AX_LUA_INTERPRETER_LIST none; do
test "x$ax_cv_pathless_LUA" = 'xnone' && break
_AX_LUA_CHK_IS_INTRP([$ax_cv_pathless_LUA], [], [continue])
_AX_LUA_CHK_VER([$ax_cv_pathless_LUA], [$1], [$2], [break])
done
])
dnl Set $LUA to the absolute path of $ax_cv_pathless_LUA.
AS_IF([test "x$ax_cv_pathless_LUA" = 'xnone'],
[LUA=':'],
[AC_PATH_PROG([LUA], [$ax_cv_pathless_LUA])])
ax_display_LUA=$ax_cv_pathless_LUA
])
])
AS_IF([test "x$LUA" = 'x:'],
[ dnl Run any user-specified action, or abort.
m4_default([$4], [AC_MSG_ERROR([cannot find suitable Lua interpreter])])
],
[ dnl Query Lua for its version number.
AC_CACHE_CHECK([for $ax_display_LUA version],
[ax_cv_lua_version],
[ dnl Get the interpreter version in X.Y format. This should work for
dnl interpreters version 5.0 and beyond.
ax_cv_lua_version=[`$LUA -e '
-- return a version number in X.Y format
local _, _, ver = string.find(_VERSION, "^Lua (%d+%.%d+)")
print(ver)'`]
])
AS_IF([test "x$ax_cv_lua_version" = 'x'],
[AC_MSG_ERROR([invalid Lua version number])])
AC_SUBST([LUA_VERSION], [$ax_cv_lua_version])
AC_SUBST([LUA_SHORT_VERSION], [`echo "$LUA_VERSION" | $SED 's|\.||'`])
dnl The following check is not supported:
dnl At times (like when building shared libraries) you may want to know
dnl which OS platform Lua thinks this is.
AC_CACHE_CHECK([for $ax_display_LUA platform],
[ax_cv_lua_platform],
[ax_cv_lua_platform=[`$LUA -e 'print("unknown")'`]])
AC_SUBST([LUA_PLATFORM], [$ax_cv_lua_platform])
dnl Use the values of $prefix and $exec_prefix for the corresponding
dnl values of LUA_PREFIX and LUA_EXEC_PREFIX. These are made distinct
dnl variables so they can be overridden if need be. However, the general
dnl consensus is that you shouldn't need this ability.
AC_SUBST([LUA_PREFIX], ['${prefix}'])
AC_SUBST([LUA_EXEC_PREFIX], ['${exec_prefix}'])
dnl Lua provides no way to query the script directory, and instead
dnl provides LUA_PATH. However, we should be able to make a safe educated
dnl guess. If the built-in search path contains a directory which is
dnl prefixed by $prefix, then we can store scripts there. The first
dnl matching path will be used.
AC_CACHE_CHECK([for $ax_display_LUA script directory],
[ax_cv_lua_luadir],
[ AS_IF([test "x$prefix" = 'xNONE'],
[ax_lua_prefix=$ac_default_prefix],
[ax_lua_prefix=$prefix])
dnl Initialize to the default path.
ax_cv_lua_luadir="$LUA_PREFIX/share/lua/$LUA_VERSION"
dnl Try to find a path with the prefix.
_AX_LUA_FND_PRFX_PTH([$LUA], [$ax_lua_prefix], [script])
AS_IF([test "x$ax_lua_prefixed_path" != 'x'],
[ dnl Fix the prefix.
_ax_strip_prefix=`echo "$ax_lua_prefix" | $SED 's|.|.|g'`
ax_cv_lua_luadir=`echo "$ax_lua_prefixed_path" | \
$SED "s|^$_ax_strip_prefix|$LUA_PREFIX|"`
])
])
AC_SUBST([luadir], [$ax_cv_lua_luadir])
AC_SUBST([pkgluadir], [\${luadir}/$PACKAGE])
dnl Lua provides no way to query the module directory, and instead
dnl provides LUA_PATH. However, we should be able to make a safe educated
dnl guess. If the built-in search path contains a directory which is
dnl prefixed by $exec_prefix, then we can store modules there. The first
dnl matching path will be used.
AC_CACHE_CHECK([for $ax_display_LUA module directory],
[ax_cv_lua_luaexecdir],
[ AS_IF([test "x$exec_prefix" = 'xNONE'],
[ax_lua_exec_prefix=$ax_lua_prefix],
[ax_lua_exec_prefix=$exec_prefix])
dnl Initialize to the default path.
ax_cv_lua_luaexecdir="$LUA_EXEC_PREFIX/lib/lua/$LUA_VERSION"
dnl Try to find a path with the prefix.
_AX_LUA_FND_PRFX_PTH([$LUA],
[$ax_lua_exec_prefix], [module])
AS_IF([test "x$ax_lua_prefixed_path" != 'x'],
[ dnl Fix the prefix.
_ax_strip_prefix=`echo "$ax_lua_exec_prefix" | $SED 's|.|.|g'`
ax_cv_lua_luaexecdir=`echo "$ax_lua_prefixed_path" | \
$SED "s|^$_ax_strip_prefix|$LUA_EXEC_PREFIX|"`
])
])
AC_SUBST([luaexecdir], [$ax_cv_lua_luaexecdir])
AC_SUBST([pkgluaexecdir], [\${luaexecdir}/$PACKAGE])
dnl Run any user specified action.
$3
])
])
dnl AX_WITH_LUA is now the same thing as AX_PROG_LUA.
AC_DEFUN([AX_WITH_LUA],
[
AC_MSG_WARN([[$0 is deprecated, please use AX_PROG_LUA instead]])
AX_PROG_LUA
])
dnl =========================================================================
dnl _AX_LUA_CHK_IS_INTRP(PROG, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
dnl =========================================================================
AC_DEFUN([_AX_LUA_CHK_IS_INTRP],
[
dnl A minimal Lua factorial to prove this is an interpreter. This should work
dnl for Lua interpreters version 5.0 and beyond.
_ax_lua_factorial=[`$1 2>/dev/null -e '
-- a simple factorial
function fact (n)
if n == 0 then
return 1
else
return n * fact(n-1)
end
end
print("fact(5) is " .. fact(5))'`]
AS_IF([test "$_ax_lua_factorial" = 'fact(5) is 120'],
[$2], [$3])
])
dnl =========================================================================
dnl _AX_LUA_CHK_VER(PROG, MINIMUM-VERSION, [TOO-BIG-VERSION],
dnl [ACTION-IF-TRUE], [ACTION-IF-FALSE])
dnl =========================================================================
AC_DEFUN([_AX_LUA_CHK_VER],
[
dnl Check that the Lua version is within the bounds. Only the major and minor
dnl version numbers are considered. This should work for Lua interpreters
dnl version 5.0 and beyond.
_ax_lua_good_version=[`$1 -e '
-- a script to compare versions
function verstr2num(verstr)
local _, _, majorver, minorver = string.find(verstr, "^(%d+)%.(%d+)")
if majorver and minorver then
return tonumber(majorver) * 100 + tonumber(minorver)
end
end
local minver = verstr2num("$2")
local _, _, trimver = string.find(_VERSION, "^Lua (.*)")
local ver = verstr2num(trimver)
local maxver = verstr2num("$3") or 1e9
if minver <= ver and ver < maxver then
print("yes")
else
print("no")
end'`]
AS_IF([test "x$_ax_lua_good_version" = "xyes"],
[$4], [$5])
])
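dnl Worked example (illustration, not part of the original macro): with
dnl MINIMUM-VERSION 5.1 and TOO-BIG-VERSION 5.4, verstr2num gives minver=501
dnl and maxver=504; a Lua 5.2 interpreter scores 502, which satisfies
dnl 501 <= 502 < 504, so the check prints "yes".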
dnl =========================================================================
dnl _AX_LUA_FND_PRFX_PTH(PROG, PREFIX, SCRIPT-OR-MODULE-DIR)
dnl =========================================================================
AC_DEFUN([_AX_LUA_FND_PRFX_PTH],
[
dnl Get the script or module directory by querying the Lua interpreter,
dnl filtering on the given prefix, and selecting the shallowest path. If no
dnl path is found matching the prefix, the result will be an empty string.
dnl The third argument determines the type of search, it can be 'script' or
dnl 'module'. Supplying 'script' will perform the search with package.path
dnl and LUA_PATH, and supplying 'module' will search with package.cpath and
dnl LUA_CPATH. This is done for compatibility with Lua 5.0.
ax_lua_prefixed_path=[`$1 -e '
  -- get the path based on search type
  local searchtype = "$3"
  local paths = ""
  if searchtype == "script" then
    paths = (package and package.path) or LUA_PATH
  elseif searchtype == "module" then
    paths = (package and package.cpath) or LUA_CPATH
  end
  -- search for the prefix
  local prefix = "'$2'"
  local minpath = ""
  local mindepth = 1e9
  string.gsub(paths, "(@<:@^;@:>@+)",
    function (path)
      path = string.gsub(path, "%?.*$", "")
      path = string.gsub(path, "/@<:@^/@:>@*$", "")
      if string.find(path, prefix) then
        local depth = string.len(string.gsub(path, "@<:@^/@:>@", ""))
        if depth < mindepth then
          minpath = path
          mindepth = depth
        end
      end
    end)
  print(minpath)'`]
])
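dnl Worked example (illustrative, not part of the original file): a
dnl package.cpath entry such as "/usr/local/lib/lua/5.1/?.so" is reduced by
dnl the two gsub calls to the directory "/usr/local/lib/lua/5.1"; among all
dnl entries matching the prefix, the one with the fewest "/" separators
dnl (the shallowest path) is printed.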
dnl =========================================================================
dnl AX_LUA_HEADERS([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl =========================================================================
AC_DEFUN([AX_LUA_HEADERS],
[
dnl Check for LUA_VERSION.
AC_MSG_CHECKING([if LUA_VERSION is defined])
AS_IF([test "x$LUA_VERSION" != 'x'],
[AC_MSG_RESULT([yes])],
[ AC_MSG_RESULT([no])
AC_MSG_ERROR([cannot check Lua headers without knowing LUA_VERSION])
])
dnl Make LUA_INCLUDE a precious variable.
AC_ARG_VAR([LUA_INCLUDE], [The Lua includes, e.g. -I/usr/include/lua5.1])
dnl Some default directories to search.
LUA_SHORT_VERSION=`echo "$LUA_VERSION" | $SED 's|\.||'`
m4_define_default([_AX_LUA_INCLUDE_LIST],
[ /usr/include/lua$LUA_VERSION \
/usr/include/lua-$LUA_VERSION \
/usr/include/lua/$LUA_VERSION \
/usr/include/lua$LUA_SHORT_VERSION \
/usr/local/include/lua$LUA_VERSION \
/usr/local/include/lua-$LUA_VERSION \
/usr/local/include/lua/$LUA_VERSION \
/usr/local/include/lua$LUA_SHORT_VERSION \
])
dnl Try to find the headers.
_ax_lua_saved_cppflags=$CPPFLAGS
CPPFLAGS="$CPPFLAGS $LUA_INCLUDE"
AC_CHECK_HEADERS([lua.h lualib.h lauxlib.h luaconf.h])
CPPFLAGS=$_ax_lua_saved_cppflags
dnl Try some other directories if LUA_INCLUDE was not set.
AS_IF([test "x$LUA_INCLUDE" = 'x' &&
test "x$ac_cv_header_lua_h" != 'xyes'],
[ dnl Try some common include paths.
for _ax_include_path in _AX_LUA_INCLUDE_LIST; do
test ! -d "$_ax_include_path" && continue
AC_MSG_CHECKING([for Lua headers in])
AC_MSG_RESULT([$_ax_include_path])
AS_UNSET([ac_cv_header_lua_h])
AS_UNSET([ac_cv_header_lualib_h])
AS_UNSET([ac_cv_header_lauxlib_h])
AS_UNSET([ac_cv_header_luaconf_h])
_ax_lua_saved_cppflags=$CPPFLAGS
CPPFLAGS="$CPPFLAGS -I$_ax_include_path"
AC_CHECK_HEADERS([lua.h lualib.h lauxlib.h luaconf.h])
CPPFLAGS=$_ax_lua_saved_cppflags
AS_IF([test "x$ac_cv_header_lua_h" = 'xyes'],
[ LUA_INCLUDE="-I$_ax_include_path"
break
])
done
])
AS_IF([test "x$ac_cv_header_lua_h" = 'xyes'],
[ dnl Make a program to print LUA_VERSION defined in the header.
dnl TODO It would be really nice if we could do this without compiling a
dnl program, then it would work when cross compiling. But I'm not sure how
dnl to do this reliably. For now, assume versions match when cross compiling.
AS_IF([test "x$cross_compiling" != 'xyes'],
[ AC_CACHE_CHECK([for Lua header version],
[ax_cv_lua_header_version],
[ _ax_lua_saved_cppflags=$CPPFLAGS
CPPFLAGS="$CPPFLAGS $LUA_INCLUDE"
AC_RUN_IFELSE(
[ AC_LANG_SOURCE([[
#include <lua.h>
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char ** argv)
{
if(argc > 1) printf("%s", LUA_VERSION);
exit(EXIT_SUCCESS);
}
]])
],
[ ax_cv_lua_header_version=`./conftest$EXEEXT p | \
$SED -n "s|^Lua \(@<:@0-9@:>@\{1,\}\.@<:@0-9@:>@\{1,\}\).\{0,\}|\1|p"`
],
[ax_cv_lua_header_version='unknown'])
CPPFLAGS=$_ax_lua_saved_cppflags
])
dnl Compare this to the previously found LUA_VERSION.
AC_MSG_CHECKING([if Lua header version matches $LUA_VERSION])
AS_IF([test "x$ax_cv_lua_header_version" = "x$LUA_VERSION"],
[ AC_MSG_RESULT([yes])
ax_header_version_match='yes'
],
[ AC_MSG_RESULT([no])
ax_header_version_match='no'
])
],
[ AC_MSG_WARN([cross compiling so assuming header version number matches])
ax_header_version_match='yes'
])
])
dnl Was LUA_INCLUDE specified?
AS_IF([test "x$ax_header_version_match" != 'xyes' &&
test "x$LUA_INCLUDE" != 'x'],
[AC_MSG_ERROR([cannot find headers for specified LUA_INCLUDE])])
dnl Test the final result and run user code.
AS_IF([test "x$ax_header_version_match" = 'xyes'], [$1],
[m4_default([$2], [AC_MSG_ERROR([cannot find Lua includes])])])
])
dnl AX_LUA_HEADERS_VERSION no longer exists, use AX_LUA_HEADERS.
AC_DEFUN([AX_LUA_HEADERS_VERSION],
[
AC_MSG_WARN([[$0 is deprecated, please use AX_LUA_HEADERS instead]])
])
dnl =========================================================================
dnl AX_LUA_LIBS([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl =========================================================================
AC_DEFUN([AX_LUA_LIBS],
[
dnl TODO Should this macro also check various -L flags?
dnl Check for LUA_VERSION.
AC_MSG_CHECKING([if LUA_VERSION is defined])
AS_IF([test "x$LUA_VERSION" != 'x'],
[AC_MSG_RESULT([yes])],
[ AC_MSG_RESULT([no])
AC_MSG_ERROR([cannot check Lua libs without knowing LUA_VERSION])
])
dnl Make LUA_LIB a precious variable.
AC_ARG_VAR([LUA_LIB], [The Lua library, e.g. -llua5.1])
AS_IF([test "x$LUA_LIB" != 'x'],
[ dnl Check that LUA_LIBS works.
_ax_lua_saved_libs=$LIBS
LIBS="$LIBS $LUA_LIB"
AC_SEARCH_LIBS([lua_load], [],
[_ax_found_lua_libs='yes'],
[_ax_found_lua_libs='no'])
LIBS=$_ax_lua_saved_libs
dnl Check the result.
AS_IF([test "x$_ax_found_lua_libs" != 'xyes'],
[AC_MSG_ERROR([cannot find libs for specified LUA_LIB])])
],
[ dnl First search for extra libs.
_ax_lua_extra_libs=''
_ax_lua_saved_libs=$LIBS
LIBS="$LIBS $LUA_LIB"
AC_SEARCH_LIBS([exp], [m])
AC_SEARCH_LIBS([dlopen], [dl])
LIBS=$_ax_lua_saved_libs
AS_IF([test "x$ac_cv_search_exp" != 'xno' &&
test "x$ac_cv_search_exp" != 'xnone required'],
[_ax_lua_extra_libs="$_ax_lua_extra_libs $ac_cv_search_exp"])
AS_IF([test "x$ac_cv_search_dlopen" != 'xno' &&
test "x$ac_cv_search_dlopen" != 'xnone required'],
[_ax_lua_extra_libs="$_ax_lua_extra_libs $ac_cv_search_dlopen"])
dnl Try to find the Lua libs.
_ax_lua_saved_libs=$LIBS
LIBS="$LIBS $LUA_LIB"
AC_SEARCH_LIBS([lua_load],
[ lua$LUA_VERSION \
lua$LUA_SHORT_VERSION \
lua-$LUA_VERSION \
lua-$LUA_SHORT_VERSION \
lua \
],
[_ax_found_lua_libs='yes'],
[_ax_found_lua_libs='no'],
[$_ax_lua_extra_libs])
LIBS=$_ax_lua_saved_libs
AS_IF([test "x$ac_cv_search_lua_load" != 'xno' &&
test "x$ac_cv_search_lua_load" != 'xnone required'],
[LUA_LIB="$ac_cv_search_lua_load $_ax_lua_extra_libs"])
])
dnl Test the result and run user code.
AS_IF([test "x$_ax_found_lua_libs" = 'xyes'], [$1],
[m4_default([$2], [AC_MSG_ERROR([cannot find Lua libs])])])
])
dnl =========================================================================
dnl AX_LUA_READLINE([ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl =========================================================================
AC_DEFUN([AX_LUA_READLINE],
[
AX_LIB_READLINE
AS_IF([test "x$ac_cv_header_readline_readline_h" = 'xyes' &&
test "x$ac_cv_header_readline_history_h" = 'xyes'],
[ LUA_LIBS_CFLAGS="-DLUA_USE_READLINE $LUA_LIBS_CFLAGS"
$1
],
[$2])
])
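dnl Hedged configure.ac sketch (illustrative, not part of this file): a
dnl project embedding Lua would typically combine these macros as
dnl   AX_PROG_LUA
dnl   AX_LUA_HEADERS
dnl   AX_LUA_LIBS
dnl after which LUA_INCLUDE and LUA_LIB can be used in Makefile.am rules.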
@ -1,60 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_prog_haxe_version.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_PROG_HAXE_VERSION([VERSION],[ACTION-IF-TRUE],[ACTION-IF-FALSE])
#
# DESCRIPTION
#
# Makes sure that haxe supports the version indicated. If true the shell
# commands in ACTION-IF-TRUE are executed. If not the shell commands in
# ACTION-IF-FALSE are run. The $HAXE_VERSION variable will be filled with
# the detected version.
#
# This macro uses the $HAXE variable to perform the check. If $HAXE is not
# set prior to calling this macro, the macro will fail.
#
# Example:
#
# AC_PATH_PROG([HAXE],[haxe])
# AX_PROG_HAXE_VERSION([3.1.3],[ ... ],[ ... ])
#
# Searches for Haxe, then checks if at least version 3.1.3 is present.
#
# LICENSE
#
# Copyright (c) 2015 Jens Geyer <jensg@apache.org>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 1
AC_DEFUN([AX_PROG_HAXE_VERSION],[
AC_REQUIRE([AC_PROG_SED])
AS_IF([test -n "$HAXE"],[
ax_haxe_version="$1"
AC_MSG_CHECKING([for haxe version])
haxe_version=`$HAXE -version 2>&1 | $SED -e 's/^.* \( @<:@0-9@:>@*\.@<:@0-9@:>@*\.@<:@0-9@:>@*\) .*/\1/'`
AC_MSG_RESULT($haxe_version)
AC_SUBST([HAXE_VERSION],[$haxe_version])
AX_COMPARE_VERSION([$ax_haxe_version],[le],[$haxe_version],[
:
$2
],[
:
$3
])
],[
AC_MSG_WARN([could not find Haxe])
$3
])
])
@ -1,77 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_prog_perl_modules.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_PROG_PERL_MODULES([MODULES], [ACTION-IF-TRUE], [ACTION-IF-FALSE])
#
# DESCRIPTION
#
# Checks to see if the given perl modules are available. If true the shell
# commands in ACTION-IF-TRUE are executed. If not the shell commands in
# ACTION-IF-FALSE are run. Note that if $PERL is not already set (for
# example, by an earlier call to AC_CHECK_PROG or AC_PATH_PROG), then
# AC_CHECK_PROG(PERL, perl, perl) will be run.
#
# MODULES is a space separated list of module names. To check for a
# minimum version of a module, append the version number to the module
# name, separated by an equals sign.
#
# Example:
#
# AX_PROG_PERL_MODULES( Text::Wrap Net::LDAP=1.0.3, ,
# AC_MSG_WARN(Need some Perl modules))
#
# LICENSE
#
# Copyright (c) 2009 Dean Povey <povey@wedgetail.com>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 7
AU_ALIAS([AC_PROG_PERL_MODULES], [AX_PROG_PERL_MODULES])
AC_DEFUN([AX_PROG_PERL_MODULES],[dnl
m4_define([ax_perl_modules])
m4_foreach([ax_perl_module], m4_split(m4_normalize([$1])),
[
m4_append([ax_perl_modules],
[']m4_bpatsubst(ax_perl_module,=,[ ])[' ])
])
# Make sure we have perl
if test -z "$PERL"; then
AC_CHECK_PROG(PERL,perl,perl)
fi
if test "x$PERL" != x; then
ax_perl_modules_failed=0
for ax_perl_module in ax_perl_modules; do
AC_MSG_CHECKING(for perl module $ax_perl_module)
# Would be nice to log result here, but can't rely on autoconf internals
$PERL -e "use $ax_perl_module; exit" > /dev/null 2>&1
if test $? -ne 0; then
AC_MSG_RESULT(no);
ax_perl_modules_failed=1
else
AC_MSG_RESULT(ok);
fi
done
# Run optional shell commands
if test "$ax_perl_modules_failed" = 0; then
:
$2
else
:
$3
fi
else
AC_MSG_WARN(could not find perl)
fi])dnl
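# Illustrative note (not part of the original macro): a MODULES entry such
# as 'Net::LDAP=1.0.3' has its '=' rewritten to a space by m4_bpatsubst, so
# the probe that actually runs is roughly
#   perl -e "use Net::LDAP 1.0.3; exit"
# which fails when the module is missing or older than 1.0.3.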
@ -1,127 +0,0 @@
dnl @synopsis AX_SIGNED_RIGHT_SHIFT
dnl
dnl Tests the behavior of a right shift on a negative signed int.
dnl
dnl This macro calls:
dnl AC_DEFINE(SIGNED_RIGHT_SHIFT_IS)
dnl AC_DEFINE(ARITHMETIC_RIGHT_SHIFT)
dnl AC_DEFINE(LOGICAL_RIGHT_SHIFT)
dnl AC_DEFINE(UNKNOWN_RIGHT_SHIFT)
dnl
dnl SIGNED_RIGHT_SHIFT_IS will be equal to one of the other macros.
dnl It also leaves the shell variables "ax_signed_right_shift"
dnl set to "arithmetic", "logical", or "unknown".
dnl
dnl NOTE: This macro does not work for cross-compiling.
dnl
dnl @category C
dnl @version 2009-03-25
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
AC_DEFUN([AX_SIGNED_RIGHT_SHIFT],
[
AC_MSG_CHECKING(the behavior of a signed right shift)
success_arithmetic=no
AC_RUN_IFELSE([AC_LANG_PROGRAM([[]], [[
return
/* 0xffffffff */
-1 >> 1 != -1 ||
-1 >> 2 != -1 ||
-1 >> 3 != -1 ||
-1 >> 4 != -1 ||
-1 >> 8 != -1 ||
-1 >> 16 != -1 ||
-1 >> 24 != -1 ||
-1 >> 31 != -1 ||
/* 0x80000000 */
(-2147483647 - 1) >> 1 != -1073741824 ||
(-2147483647 - 1) >> 2 != -536870912 ||
(-2147483647 - 1) >> 3 != -268435456 ||
(-2147483647 - 1) >> 4 != -134217728 ||
(-2147483647 - 1) >> 8 != -8388608 ||
(-2147483647 - 1) >> 16 != -32768 ||
(-2147483647 - 1) >> 24 != -128 ||
(-2147483647 - 1) >> 31 != -1 ||
/* 0x90800000 */
-1870659584 >> 1 != -935329792 ||
-1870659584 >> 2 != -467664896 ||
-1870659584 >> 3 != -233832448 ||
-1870659584 >> 4 != -116916224 ||
-1870659584 >> 8 != -7307264 ||
-1870659584 >> 16 != -28544 ||
-1870659584 >> 24 != -112 ||
-1870659584 >> 31 != -1 ||
0;
]])], [
success_arithmetic=yes
])
success_logical=no
AC_RUN_IFELSE([AC_LANG_PROGRAM([[]], [[
return
/* 0xffffffff */
-1 >> 1 != (signed)((unsigned)-1 >> 1) ||
-1 >> 2 != (signed)((unsigned)-1 >> 2) ||
-1 >> 3 != (signed)((unsigned)-1 >> 3) ||
-1 >> 4 != (signed)((unsigned)-1 >> 4) ||
-1 >> 8 != (signed)((unsigned)-1 >> 8) ||
-1 >> 16 != (signed)((unsigned)-1 >> 16) ||
-1 >> 24 != (signed)((unsigned)-1 >> 24) ||
-1 >> 31 != (signed)((unsigned)-1 >> 31) ||
/* 0x80000000 */
(-2147483647 - 1) >> 1 != (signed)((unsigned)(-2147483647 - 1) >> 1) ||
(-2147483647 - 1) >> 2 != (signed)((unsigned)(-2147483647 - 1) >> 2) ||
(-2147483647 - 1) >> 3 != (signed)((unsigned)(-2147483647 - 1) >> 3) ||
(-2147483647 - 1) >> 4 != (signed)((unsigned)(-2147483647 - 1) >> 4) ||
(-2147483647 - 1) >> 8 != (signed)((unsigned)(-2147483647 - 1) >> 8) ||
(-2147483647 - 1) >> 16 != (signed)((unsigned)(-2147483647 - 1) >> 16) ||
(-2147483647 - 1) >> 24 != (signed)((unsigned)(-2147483647 - 1) >> 24) ||
(-2147483647 - 1) >> 31 != (signed)((unsigned)(-2147483647 - 1) >> 31) ||
/* 0x90800000 */
-1870659584 >> 1 != (signed)((unsigned)-1870659584 >> 1) ||
-1870659584 >> 2 != (signed)((unsigned)-1870659584 >> 2) ||
-1870659584 >> 3 != (signed)((unsigned)-1870659584 >> 3) ||
-1870659584 >> 4 != (signed)((unsigned)-1870659584 >> 4) ||
-1870659584 >> 8 != (signed)((unsigned)-1870659584 >> 8) ||
-1870659584 >> 16 != (signed)((unsigned)-1870659584 >> 16) ||
-1870659584 >> 24 != (signed)((unsigned)-1870659584 >> 24) ||
-1870659584 >> 31 != (signed)((unsigned)-1870659584 >> 31) ||
0;
]])], [
success_logical=yes
])
AC_DEFINE([ARITHMETIC_RIGHT_SHIFT], 1, [Possible value for SIGNED_RIGHT_SHIFT_IS])
AC_DEFINE([LOGICAL_RIGHT_SHIFT], 2, [Possible value for SIGNED_RIGHT_SHIFT_IS])
AC_DEFINE([UNKNOWN_RIGHT_SHIFT], 3, [Possible value for SIGNED_RIGHT_SHIFT_IS])
if test "$success_arithmetic" = "yes" && test "$success_logical" = "yes" ; then
AC_MSG_ERROR("Right shift appears to be both arithmetic and logical!")
elif test "$success_arithmetic" = "yes" ; then
ax_signed_right_shift=arithmetic
AC_DEFINE([SIGNED_RIGHT_SHIFT_IS], 1,
[Indicates the effect of the right shift operator
on negative signed integers])
elif test "$success_logical" = "yes" ; then
ax_signed_right_shift=logical
AC_DEFINE([SIGNED_RIGHT_SHIFT_IS], 2,
[Indicates the effect of the right shift operator
on negative signed integers])
else
ax_signed_right_shift=unknown
AC_DEFINE([SIGNED_RIGHT_SHIFT_IS], 3,
[Indicates the effect of the right shift operator
on negative signed integers])
fi
AC_MSG_RESULT($ax_signed_right_shift)
])
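dnl Hedged usage sketch (illustrative, not part of the original file): C
dnl code can branch on the detected behavior using the macros defined
dnl above, e.g.
dnl   #if SIGNED_RIGHT_SHIFT_IS == LOGICAL_RIGHT_SHIFT
dnl   /* emulate sign extension by hand */
dnl   #endif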
@ -1,28 +0,0 @@
dnl @synopsis AX_THRIFT_GEN(SHORT_LANGUAGE, LONG_LANGUAGE, DEFAULT)
dnl @synopsis AX_THRIFT_LIB(SHORT_LANGUAGE, LONG_LANGUAGE, DEFAULT)
dnl
dnl Allow a particular language generator to be disabled.
dnl Allow a particular language library to be disabled.
dnl
dnl These macros have poor error handling and are poorly documented.
dnl They are intended only for internal use by the Thrift compiler.
dnl
dnl @version 2008-02-20
dnl @license AllPermissive
dnl
dnl Copyright (C) 2009 David Reiss
dnl Copying and distribution of this file, with or without modification,
dnl are permitted in any medium without royalty provided the copyright
dnl notice and this notice are preserved.
AC_DEFUN([AX_THRIFT_LIB],
[
AC_ARG_WITH($1,
AC_HELP_STRING([--with-$1], [build the $2 library @<:@default=$3@:>@]),
[with_$1="$withval"],
[with_$1=$3]
)
have_$1=no
dnl What we do here is going to vary from library to library,
dnl so we can't really generalize (yet!).
])
@ -1,177 +0,0 @@
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_compare_version.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_COMPARE_VERSION(VERSION_A, OP, VERSION_B, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
#
# DESCRIPTION
#
# This macro compares two version strings. Due to the varying number of
# minor-version numbers that can exist, and the fact that string
# comparisons are not compatible with numeric comparisons, this is not
# necessarily trivial to do in an autoconf script. This macro makes doing
# these comparisons easy.
#
# The six basic comparisons are available, as well as checking equality
# limited to a certain number of minor-version levels.
#
# The operator OP determines what type of comparison to do, and can be one
# of:
#
# eq - equal (test A == B)
# ne - not equal (test A != B)
# le - less than or equal (test A <= B)
# ge - greater than or equal (test A >= B)
# lt - less than (test A < B)
# gt - greater than (test A > B)
#
# Additionally, the eq and ne operator can have a number after it to limit
# the test to that number of minor versions.
#
# eq0 - equal up to the length of the shorter version
# ne0 - not equal up to the length of the shorter version
# eqN - equal up to N sub-version levels
# neN - not equal up to N sub-version levels
#
# When the condition is true, shell commands ACTION-IF-TRUE are run,
# otherwise shell commands ACTION-IF-FALSE are run. The environment
# variable 'ax_compare_version' is always set to either 'true' or 'false'
# as well.
#
# Examples:
#
# AX_COMPARE_VERSION([3.15.7],[lt],[3.15.8])
# AX_COMPARE_VERSION([3.15],[lt],[3.15.8])
#
# would both be true.
#
# AX_COMPARE_VERSION([3.15.7],[eq],[3.15.8])
# AX_COMPARE_VERSION([3.15],[gt],[3.15.8])
#
# would both be false.
#
# AX_COMPARE_VERSION([3.15.7],[eq2],[3.15.8])
#
# would be true because it is only comparing two minor versions.
#
# AX_COMPARE_VERSION([3.15.7],[eq0],[3.15])
#
# would be true because it is only comparing the lesser number of minor
# versions of the two values.
#
# Note: The characters that separate the version numbers do not matter. An
# empty string is the same as version 0. OP is evaluated by autoconf, not
# configure, so must be a string, not a variable.
#
# The author would like to acknowledge Guido Draheim whose advice about
# the m4_case and m4_ifvaln functions makes this macro include only the
# portions necessary to perform the specific comparison specified by the
# OP argument in the final configure script.
#
# LICENSE
#
# Copyright (c) 2008 Tim Toolan <toolan@ele.uri.edu>
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 11
dnl #########################################################################
AC_DEFUN([AX_COMPARE_VERSION], [
AC_REQUIRE([AC_PROG_AWK])
# Used to indicate true or false condition
ax_compare_version=false
# Convert the two version strings to be compared into a format that
# allows a simple string comparison. The end result is that a version
# string of the form 1.12.5-r617 will be converted to the form
# 0001001200050617. In other words, each number is zero padded to four
# digits, and non digits are removed.
AS_VAR_PUSHDEF([A],[ax_compare_version_A])
A=`echo "$1" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
-e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
-e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
-e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
-e 's/[[^0-9]]//g'`
AS_VAR_PUSHDEF([B],[ax_compare_version_B])
B=`echo "$3" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
-e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
-e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
-e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
-e 's/[[^0-9]]//g'`
dnl # In the case of le, ge, lt, and gt, the strings are sorted as necessary
dnl # then the first line is used to determine if the condition is true.
dnl # The sed right after the echo is to remove any indented white space.
m4_case(m4_tolower($2),
[lt],[
ax_compare_version=`echo "x$A
x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/false/;s/x${B}/true/;1q"`
],
[gt],[
ax_compare_version=`echo "x$A
x$B" | sed 's/^ *//' | sort | sed "s/x${A}/false/;s/x${B}/true/;1q"`
],
[le],[
ax_compare_version=`echo "x$A
x$B" | sed 's/^ *//' | sort | sed "s/x${A}/true/;s/x${B}/false/;1q"`
],
[ge],[
ax_compare_version=`echo "x$A
x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/true/;s/x${B}/false/;1q"`
],[
dnl Split the operator from the subversion count if present.
m4_bmatch(m4_substr($2,2),
[0],[
# A count of zero means use the length of the shorter version.
# Determine the number of characters in A and B.
ax_compare_version_len_A=`echo "$A" | $AWK '{print(length)}'`
ax_compare_version_len_B=`echo "$B" | $AWK '{print(length)}'`
# Set A to no more than B's length and B to no more than A's length.
A=`echo "$A" | sed "s/\(.\{$ax_compare_version_len_B\}\).*/\1/"`
B=`echo "$B" | sed "s/\(.\{$ax_compare_version_len_A\}\).*/\1/"`
],
[[0-9]+],[
# A count greater than zero means use only that many subversions
A=`echo "$A" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
B=`echo "$B" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
],
[.+],[
AC_WARNING(
[illegal OP numeric parameter: $2])
],[])
# Pad zeros at end of numbers to make same length.
ax_compare_version_tmp_A="$A`echo $B | sed 's/./0/g'`"
B="$B`echo $A | sed 's/./0/g'`"
A="$ax_compare_version_tmp_A"
# Check for equality or inequality as necessary.
m4_case(m4_tolower(m4_substr($2,0,2)),
[eq],[
test "x$A" = "x$B" && ax_compare_version=true
],
[ne],[
test "x$A" != "x$B" && ax_compare_version=true
],[
AC_WARNING([illegal OP parameter: $2])
])
])
AS_VAR_POPDEF([A])dnl
AS_VAR_POPDEF([B])dnl
dnl # Execute ACTION-IF-TRUE / ACTION-IF-FALSE.
if test "$ax_compare_version" = "true" ; then
m4_ifvaln([$4],[$4],[:])dnl
m4_ifvaln([$5],[else $5])dnl
fi
]) dnl AX_COMPARE_VERSION
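# Worked example (illustrative, not part of the original file): under the
# padding scheme described above,
#   "3.15.7" -> "000300150007"
#   "3.15.8" -> "000300150008"
# so AX_COMPARE_VERSION([3.15.7],[lt],[3.15.8]) reduces to a lexicographic
# comparison in which "000300150007" sorts first, making the condition true.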
@ -1,93 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# build Apache Thrift on AppVeyor - https://ci.appveyor.com
shallow_clone: true
clone_depth: 10
version: '{build}'
os:
# - Windows Server 2012 R2
- Visual Studio 2015
environment:
  BOOST_ROOT: C:\Libraries\boost_1_59_0
  BOOST_LIBRARYDIR: C:\Libraries\boost_1_59_0\lib64-msvc-14.0
  # Unfortunately, this version needs manual update because old versions are quickly deleted.
  ANT_VERSION: 1.9.7
install:
- '"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x64'
- cd \
# Zlib
- appveyor DownloadFile https://github.com/madler/zlib/archive/v1.2.8.tar.gz
- 7z x v1.2.8.tar.gz -so | 7z x -si -ttar > nul
- cd zlib-1.2.8
- cmake -G "Visual Studio 14 2015 Win64" .
- cmake --build . --config release
- cd ..
# OpenSSL
- C:\Python35-x64\python %APPVEYOR_BUILD_FOLDER%\build\appveyor\download_openssl.py
- ps: Start-Process "Win64OpenSSL.exe" -ArgumentList "/silent /verysilent /sp- /suppressmsgboxes" -Wait
# Libevent
- appveyor DownloadFile https://github.com/libevent/libevent/releases/download/release-2.0.22-stable/libevent-2.0.22-stable.tar.gz
- 7z x libevent-2.0.22-stable.tar.gz -so | 7z x -si -ttar > nul
- cd libevent-2.0.22-stable
- nmake -f Makefile.nmake
- mkdir lib
- move *.lib lib\
- move WIN32-Code\event2\* include\event2\
- move *.h include\
- cd ..
- appveyor-retry cinst -y winflexbison
- appveyor DownloadFile http://www.us.apache.org/dist/ant/binaries/apache-ant-%ANT_VERSION%-bin.zip
- 7z x apache-ant-%ANT_VERSION%-bin.zip > nul
- cd %APPVEYOR_BUILD_FOLDER%
# TODO: Enable Haskell build
# - cinst HaskellPlatform -version 2014.2.0.0
build_script:
- set PATH=C:\ProgramData\chocolatey\bin;C:\apache-ant-%ANT_VERSION%\bin;%PATH%
- set JAVA_HOME=C:\Program Files\Java\jdk1.7.0
- set PATH=%JAVA_HOME%\bin;%PATH%
# - set PATH=%PATH%;C:\Program Files (x86)\Haskell Platform\2014.2.0.0\bin
# - set PATH=%PATH%;C:\Program Files (x86)\Haskell Platform\2014.2.0.0\lib\extralibs\bin
- set PATH=C:\Python27-x64\scripts;C:\Python27-x64;%PATH%
- pip install ipaddress backports.ssl_match_hostname tornado twisted
- mkdir cmake-build
- cd cmake-build
- cmake -G "Visual Studio 14 2015 Win64" -DWITH_SHARED_LIB=OFF -DLIBEVENT_ROOT=C:\libevent-2.0.22-stable -DZLIB_INCLUDE_DIR=C:\zlib-1.2.8 -DZLIB_LIBRARY=C:\zlib-1.2.8\release\zlibstatic.lib -DBOOST_ROOT="%BOOST_ROOT%" -DBOOST_LIBRARYDIR="%BOOST_LIBRARYDIR%" ..
- findstr /b /e BUILD_COMPILER:BOOL=ON CMakeCache.txt
- findstr /b /e BUILD_CPP:BOOL=ON CMakeCache.txt
- findstr /b /e BUILD_JAVA:BOOL=ON CMakeCache.txt
- findstr /b /e BUILD_PYTHON:BOOL=ON CMakeCache.txt
# - findstr /b /e BUILD_C_GLIB:BOOL=ON CMakeCache.txt
# - findstr /b /e BUILD_HASKELL:BOOL=ON CMakeCache.txt
- findstr /b /e BUILD_TESTING:BOOL=ON CMakeCache.txt
# - cmake --build .
- cmake --build . --config Release
# TODO: Fix cpack
# - cpack
# TODO: Run more tests
# CTest fails to invoke ant seemingly due to "ant.bat" v.s. "ant" (shell script) conflict.
# Currently, everything that involves OpenSSL seems to hang forever on our Appveyor setup.
# Also a few C++ tests hang (on Appveyor or on Windows in general).
- ctest -C Release --timeout 600 -VV -E "(StressTestNonBlocking|PythonTestSSLSocket|python_test$|^Java)"
# TODO make it perfect ;-r
@ -1,54 +0,0 @@
#!/bin/sh
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
./cleanup.sh
if test -d lib/php/src/ext/thrift_protocol ; then
if phpize -v >/dev/null 2>/dev/null ; then
(cd lib/php/src/ext/thrift_protocol && phpize)
fi
fi
set -e
# libtoolize is called "glibtoolize" on OSX.
if libtoolize --version 1 >/dev/null 2>/dev/null; then
LIBTOOLIZE=libtoolize
elif glibtoolize --version 1 >/dev/null 2>/dev/null; then
LIBTOOLIZE=glibtoolize
else
echo >&2 "Couldn't find libtoolize!"
exit 1
fi
# we require automake 1.13 or later
# check must happen externally due to use of newer macro
AUTOMAKE_VERSION=`automake --version | grep automake | egrep -o '([0-9]{1,}\.)+[0-9]{1,}'`
if [ "$AUTOMAKE_VERSION" \< "1.13" ]; then
echo >&2 "automake version $AUTOMAKE_VERSION is too old (need 1.13 or later)"
exit 1
fi
autoscan
$LIBTOOLIZE --copy --automake
aclocal -I ./aclocal
autoheader
autoconf
automake --copy --add-missing --foreign
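# Hedged usage note (not part of the original script): from a fresh checkout
# this is typically run once to generate ./configure, e.g.
#   ./bootstrap.sh && ./configure && make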
@ -1,16 +0,0 @@
{
"name": "thrift",
"version": "0.10.0",
"homepage": "https://git-wip-us.apache.org/repos/asf/thrift.git",
"authors": [
"Apache Thrift <dev@thrift.apache.org>"
],
"description": "Apache Thrift",
"main": "lib/js/src/thrift.js",
"keywords": [
"thrift"
],
"license": "Apache v2",
"ignore": [
]
}
@ -1,41 +0,0 @@
import urllib.request
import urllib.error  # needed for the HTTPError handler below
import sys

OUT = 'Win64OpenSSL.exe'
URL_STR = 'https://slproweb.com/download/Win64OpenSSL-%s.exe'
VERSION_MAJOR = 1
VERSION_MINOR = 0
VERSION_PATCH = 2
VERSION_SUFFIX = 'j'
VERSION_STR = '%d_%d_%d%s'
TRY_COUNT = 4


def main():
    # Probe successive patch levels and letter suffixes until a download
    # succeeds, since old installers are quickly removed from the server.
    for patch in range(VERSION_PATCH, TRY_COUNT):
        for suffix in range(TRY_COUNT):
            if patch == VERSION_PATCH:
                s = VERSION_SUFFIX
            else:
                s = 'a'
            s = chr(ord(s) + suffix)
            ver = VERSION_STR % (VERSION_MAJOR, VERSION_MINOR, patch, s)
            url = URL_STR % ver
            try:
                with urllib.request.urlopen(url) as res:
                    if res.getcode() == 200:
                        with open(OUT, 'wb') as out:
                            out.write(res.read())
                        print('successfully downloaded from ' + url)
                        return 0
            except urllib.error.HTTPError:
                pass
            print('failed to download from ' + url, file=sys.stderr)
    print('could not download openssl', file=sys.stderr)
    return 1


if __name__ == '__main__':
    sys.exit(main())
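# Illustrative usage (as invoked by the AppVeyor config earlier in this diff):
#   C:\Python35-x64\python %APPVEYOR_BUILD_FOLDER%\build\appveyor\download_openssl.py
# after which the downloaded Win64OpenSSL.exe installer is run silently.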
@ -1,68 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#TODO: Should we bundle system libraries for DLLs?
#include(InstallRequiredSystemLibraries)
# For help take a look at:
# http://www.cmake.org/Wiki/CMake:CPackConfiguration
### general settings
set(CPACK_PACKAGE_NAME "thrift")
set(CPACK_PACKAGE_VERSION "${PACKAGE_VERSION}")
set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "Apache Thrift")
set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.md")
set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/LICENSE")
set(CPACK_PACKAGE_VENDOR "Apache Software Foundation")
set(CPACK_PACKAGE_CONTACT "dev@thrift.apache.org")
set(CPACK_PACKAGE_INSTALL_DIRECTORY "${CPACK_PACKAGE_NAME}")
set(CPACK_SYSTEM_NAME "${CMAKE_SYSTEM_NAME}")
### versions
set(CPACK_PACKAGE_VERSION_MAJOR ${thrift_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${thrift_VERSION_MINOR})
set(CPACK_PACKAGE_VERSION_PATCH ${thrift_VERSION_PATCH})
### source generator
set(CPACK_SOURCE_GENERATOR "TGZ")
set(CPACK_SOURCE_IGNORE_FILES "~$;[.]swp$;/[.]svn/;/[.]git/;.gitignore;/build/;tags;cscope.*")
set(CPACK_SOURCE_PACKAGE_FILE_NAME "${CPACK_PACKAGE_NAME}-${CPACK_PACKAGE_VERSION}")
### zip generator
set(CPACK_GENERATOR "ZIP")
set(CPACK_PACKAGE_INSTALL_DIRECTORY "thrift")
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
set(CPACK_GENERATOR "NSIS")
set(CPACK_NSIS_HELP_LINK "http://thrift.apache.org")
set(CPACK_NSIS_MENU_LINKS
"http://thrift.apache.org" "Apache Thrift - Web Site"
"https://issues.apache.org/jira/browse/THRIFT" "Apache Thrift - Issues")
set(CPACK_NSIS_CONTACT ${CPACK_PACKAGE_CONTACT})
set(CPACK_NSIS_MODIFY_PATH "ON")
set(CPACK_PACKAGE_INSTALL_DIRECTORY "${CPACK_PACKAGE_NAME}")
else()
set(CPACK_GENERATOR "DEB" )
set(CPACK_DEBIAN_PACKAGE_MAINTAINER ${CPACK_PACKAGE_CONTACT})
endif()
include(CPack)
@ -1,76 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
include(CheckSymbolExists)
include(CheckIncludeFile)
include(CheckIncludeFiles)
include(CheckFunctionExists)
# If AI_ADDRCONFIG is not defined we define it as 1
check_symbol_exists(AI_ADDRCONFIG "sys/types.h;sys/socket.h;netdb.h" HAVE_AI_ADDRCONFIG)
if(NOT HAVE_AI_ADDRCONFIG)
set(AI_ADDRCONFIG 1)
endif(NOT HAVE_AI_ADDRCONFIG)
check_include_file(arpa/inet.h HAVE_ARPA_INET_H)
check_include_file(fcntl.h HAVE_FCNTL_H)
check_include_file(getopt.h HAVE_GETOPT_H)
check_include_file(inttypes.h HAVE_INTTYPES_H)
check_include_file(netdb.h HAVE_NETDB_H)
check_include_file(netinet/in.h HAVE_NETINET_IN_H)
check_include_file(stdint.h HAVE_STDINT_H)
check_include_file(unistd.h HAVE_UNISTD_H)
check_include_file(pthread.h HAVE_PTHREAD_H)
check_include_file(sys/time.h HAVE_SYS_TIME_H)
check_include_file(sys/param.h HAVE_SYS_PARAM_H)
check_include_file(sys/resource.h HAVE_SYS_RESOURCE_H)
check_include_file(sys/socket.h HAVE_SYS_SOCKET_H)
check_include_file(sys/stat.h HAVE_SYS_STAT_H)
check_include_file(sys/un.h HAVE_SYS_UN_H)
check_include_file(sys/poll.h HAVE_SYS_POLL_H)
check_include_file(sys/select.h HAVE_SYS_SELECT_H)
check_include_file(sched.h HAVE_SCHED_H)
check_include_file(strings.h HAVE_STRINGS_H)
check_function_exists(gethostbyname HAVE_GETHOSTBYNAME)
check_function_exists(gethostbyname_r HAVE_GETHOSTBYNAME_R)
check_function_exists(strerror_r HAVE_STRERROR_R)
check_function_exists(sched_get_priority_max HAVE_SCHED_GET_PRIORITY_MAX)
check_function_exists(sched_get_priority_min HAVE_SCHED_GET_PRIORITY_MIN)
include(CheckCSourceCompiles)
include(CheckCXXSourceCompiles)
check_cxx_source_compiles(
"
#include <string.h>
int main(){char b;char *a = strerror_r(0, &b, 0); return(0);}
"
STRERROR_R_CHAR_P)
set(PACKAGE ${PACKAGE_NAME})
set(PACKAGE_STRING "${PACKAGE_NAME} ${PACKAGE_VERSION}")
set(VERSION ${thrift_VERSION})
# generate a config.h file
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/build/cmake/config.h.in" "${CMAKE_CURRENT_BINARY_DIR}/thrift/config.h")
# HACK: Some files include thrift/config.h and some config.h so we include both. This should be cleaned up.
include_directories("${CMAKE_CURRENT_BINARY_DIR}/thrift" "${CMAKE_CURRENT_BINARY_DIR}")
@ -1,70 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Always include srcdir and builddir in include path
# This saves typing ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY} in
# about every subdir
# since cmake 2.4.0
set(CMAKE_INCLUDE_CURRENT_DIR ON)
# Put the include dirs which are in the source or build tree
# before all other include dirs, so the headers in the sources
# are preferred over the already installed ones
# since cmake 2.4.1
set(CMAKE_INCLUDE_DIRECTORIES_PROJECT_BEFORE ON)
# Use colored output
# since cmake 2.4.0
set(CMAKE_COLOR_MAKEFILE ON)
# Define the generic version of the libraries here
set(GENERIC_LIB_VERSION "0.10.0")
set(GENERIC_LIB_SOVERSION "0")
# Set the default build type to release with debug info
if (NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE RelWithDebInfo
CACHE STRING
"Choose the type of build, options are: None Debug Release RelWithDebInfo MinSizeRel."
)
endif (NOT CMAKE_BUILD_TYPE)
# Create the compile command database for clang by default
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
# Put the libraries and binaries that get built into directories at the
# top of the build tree rather than in hard-to-find leaf
# directories. This simplifies manual testing and the use of the build
# tree rather than installed thrift libraries.
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
#
# "rpath" support.
# See http://www.itk.org/Wiki/index.php?title=CMake_RPATH_handling
#
# On MacOSX, for shared libraries, enable rpath support.
set(CMAKE_MACOSX_RPATH TRUE)
#
# On any OS, for executables, allow linking with shared libraries in non-system
# locations and running the executables without LD_PRELOAD or similar.
# This requires the library to be built with rpath support.
set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
@ -1,26 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Define the default install paths
set(BIN_INSTALL_DIR "bin" CACHE PATH "The binary install dir (default: bin)")
set(LIB_INSTALL_DIR "lib${LIB_SUFFIX}" CACHE PATH "The library install dir (default: lib${LIB_SUFFIX})")
set(INCLUDE_INSTALL_DIR "include" CACHE PATH "The include install dir (default: include)")
set(CMAKE_INSTALL_DIR "cmake" CACHE PATH "The subdirectory to install cmake config files (default: cmake)")
set(DOC_INSTALL_DIR "share/doc" CACHE PATH "The subdirectory to install documentation files (default: share/doc)")
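# Hedged usage sketch (illustrative; the target name "thrift" is assumed):
# install rules elsewhere in the tree consume these cache variables, e.g.
#   install(TARGETS thrift
#           LIBRARY DESTINATION "${LIB_INSTALL_DIR}"
#           ARCHIVE DESTINATION "${LIB_INSTALL_DIR}")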
@ -1,210 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
include(CMakeDependentOption)
set(THRIFT_COMPILER "" CACHE FILEPATH "External Thrift compiler to use during build")
# Additional components
option(BUILD_COMPILER "Build Thrift compiler" ON)
if(BUILD_COMPILER OR EXISTS ${THRIFT_COMPILER})
set(HAVE_COMPILER ON)
endif()
CMAKE_DEPENDENT_OPTION(BUILD_TESTING "Build with unit tests" ON "HAVE_COMPILER" OFF)
CMAKE_DEPENDENT_OPTION(BUILD_EXAMPLES "Build examples" ON "HAVE_COMPILER" OFF)
CMAKE_DEPENDENT_OPTION(BUILD_TUTORIALS "Build Thrift tutorials" ON "HAVE_COMPILER" OFF)
option(BUILD_LIBRARIES "Build Thrift libraries" ON)
# Libraries to build
# Each language library can be enabled or disabled using the WITH_<LANG> flag.
# By default CMake checks if the required dependencies for a language are present
# and enables the library if all are found. This means the default is to build as
# much as possible but leaving out libraries if their dependencies are not met.
CMAKE_DEPENDENT_OPTION(WITH_BOOST_STATIC "Build with Boost static link library" OFF "NOT MSVC" ON)
set(Boost_USE_STATIC_LIBS ${WITH_BOOST_STATIC})
if (NOT WITH_BOOST_STATIC)
add_definitions(-DBOOST_ALL_DYN_LINK)
add_definitions(-DBOOST_TEST_DYN_LINK)
endif()
# C++
option(WITH_CPP "Build C++ Thrift library" ON)
if(WITH_CPP)
find_package(Boost 1.53 QUIET)
# NOTE: Currently the following options are C++ specific,
# but in future other libraries might reuse them.
# So they are not dependent on WITH_CPP but setting them without WITH_CPP currently
# has no effect.
if(ZLIB_LIBRARY)
# FindZLIB.cmake does not normalize path so we need to do it ourselves.
file(TO_CMAKE_PATH ${ZLIB_LIBRARY} ZLIB_LIBRARY)
endif()
find_package(ZLIB QUIET)
CMAKE_DEPENDENT_OPTION(WITH_ZLIB "Build with ZLIB support" ON
"ZLIB_FOUND" OFF)
find_package(Libevent QUIET)
CMAKE_DEPENDENT_OPTION(WITH_LIBEVENT "Build with libevent support" ON
"Libevent_FOUND" OFF)
find_package(Qt4 QUIET COMPONENTS QtCore QtNetwork)
CMAKE_DEPENDENT_OPTION(WITH_QT4 "Build with Qt4 support" ON
"QT4_FOUND" OFF)
find_package(Qt5 QUIET COMPONENTS Core Network)
CMAKE_DEPENDENT_OPTION(WITH_QT5 "Build with Qt5 support" ON
"Qt5_FOUND" OFF)
if(${WITH_QT4} AND ${WITH_QT5} AND ${CMAKE_MAJOR_VERSION} LESS 3)
# cmake < 3.0.0 causes conflict when building both Qt4 and Qt5
set(WITH_QT4 OFF)
endif()
find_package(OpenSSL QUIET)
CMAKE_DEPENDENT_OPTION(WITH_OPENSSL "Build with OpenSSL support" ON
"OPENSSL_FOUND" OFF)
option(WITH_STDTHREADS "Build with C++ std::thread support" OFF)
CMAKE_DEPENDENT_OPTION(WITH_BOOSTTHREADS "Build with Boost threads support" OFF
"NOT WITH_STDTHREADS;Boost_FOUND" OFF)
endif()
CMAKE_DEPENDENT_OPTION(BUILD_CPP "Build C++ library" ON
"BUILD_LIBRARIES;WITH_CPP;Boost_FOUND" OFF)
CMAKE_DEPENDENT_OPTION(WITH_PLUGIN "Build compiler plugin support" ON
"BUILD_COMPILER;BUILD_CPP" OFF)
# C GLib
option(WITH_C_GLIB "Build C (GLib) Thrift library" ON)
if(WITH_C_GLIB)
find_package(GLIB QUIET COMPONENTS gobject)
endif()
CMAKE_DEPENDENT_OPTION(BUILD_C_GLIB "Build C (GLib) library" ON
"BUILD_LIBRARIES;WITH_C_GLIB;GLIB_FOUND" OFF)
if(BUILD_CPP)
set(boost_components)
if(WITH_BOOSTTHREADS OR BUILD_TESTING)
list(APPEND boost_components system thread)
endif()
if(BUILD_TESTING)
list(APPEND boost_components unit_test_framework filesystem chrono program_options)
endif()
if(boost_components)
find_package(Boost 1.53 REQUIRED COMPONENTS ${boost_components})
endif()
elseif(BUILD_C_GLIB AND BUILD_TESTING)
find_package(Boost 1.53 REQUIRED)
endif()
# Java
option(WITH_JAVA "Build Java Thrift library" ON)
if(ANDROID)
find_package(Gradle QUIET)
CMAKE_DEPENDENT_OPTION(BUILD_JAVA "Build Java library" ON
"BUILD_LIBRARIES;WITH_JAVA;GRADLE_FOUND" OFF)
else()
find_package(Java QUIET)
find_package(Ant QUIET)
CMAKE_DEPENDENT_OPTION(BUILD_JAVA "Build Java library" ON
"BUILD_LIBRARIES;WITH_JAVA;JAVA_FOUND;ANT_FOUND" OFF)
endif()
# Python
option(WITH_PYTHON "Build Python Thrift library" ON)
find_package(PythonInterp QUIET) # for Python executable
find_package(PythonLibs QUIET) # for Python.h
CMAKE_DEPENDENT_OPTION(BUILD_PYTHON "Build Python library" ON
"BUILD_LIBRARIES;WITH_PYTHON;PYTHONLIBS_FOUND" OFF)
# Haskell
option(WITH_HASKELL "Build Haskell Thrift library" ON)
find_package(GHC QUIET)
find_package(Cabal QUIET)
CMAKE_DEPENDENT_OPTION(BUILD_HASKELL "Build GHC library" ON
"BUILD_LIBRARIES;WITH_HASKELL;GHC_FOUND;CABAL_FOUND" OFF)
# Common library options
option(WITH_SHARED_LIB "Build shared libraries" ON)
option(WITH_STATIC_LIB "Build static libraries" ON)
if (NOT WITH_SHARED_LIB AND NOT WITH_STATIC_LIB)
message(FATAL_ERROR "Cannot build with both shared and static outputs disabled!")
endif()
#NOTE: C++ compiler options are defined in the lib/cpp/CMakeLists.txt
# Visual Studio only options
if(MSVC)
option(WITH_MT "Build using MT instead of MD (MSVC only)" OFF)
endif(MSVC)
macro(MESSAGE_DEP flag summary)
if(NOT ${flag})
message(STATUS " - ${summary}")
endif()
endmacro(MESSAGE_DEP flag summary)
macro(PRINT_CONFIG_SUMMARY)
message(STATUS "----------------------------------------------------------")
message(STATUS "Thrift version: ${thrift_VERSION} (${thrift_VERSION_MAJOR}.${thrift_VERSION_MINOR}.${thrift_VERSION_PATCH})")
message(STATUS "Thrift package version: ${PACKAGE_VERSION}")
message(STATUS "Build configuration Summary")
message(STATUS " Build Thrift compiler: ${BUILD_COMPILER}")
message(STATUS " Build compiler plugin support: ${WITH_PLUGIN}")
MESSAGE_DEP(PLUGIN_COMPILER_NOT_TOO_OLD "Disabled due to older compiler")
message(STATUS " Build with unit tests: ${BUILD_TESTING}")
MESSAGE_DEP(HAVE_COMPILER "Disabled because BUILD_COMPILER=OFF and no valid THRIFT_COMPILER is given")
message(STATUS " Build examples: ${BUILD_EXAMPLES}")
MESSAGE_DEP(HAVE_COMPILER "Disabled because BUILD_COMPILER=OFF and no valid THRIFT_COMPILER is given")
message(STATUS " Build Thrift libraries: ${BUILD_LIBRARIES}")
message(STATUS " Language libraries:")
message(STATUS " Build C++ library: ${BUILD_CPP}")
MESSAGE_DEP(WITH_CPP "Disabled by WITH_CPP=OFF")
MESSAGE_DEP(Boost_FOUND "Boost headers missing")
message(STATUS " Build C (GLib) library: ${BUILD_C_GLIB}")
MESSAGE_DEP(WITH_C_GLIB "Disabled by WITH_C_GLIB=OFF")
MESSAGE_DEP(GLIB_FOUND "GLib missing")
message(STATUS " Build Java library: ${BUILD_JAVA}")
MESSAGE_DEP(WITH_JAVA "Disabled by WITH_JAVA=OFF")
if(ANDROID)
MESSAGE_DEP(GRADLE_FOUND "Gradle missing")
else()
MESSAGE_DEP(JAVA_FOUND "Java Runtime missing")
MESSAGE_DEP(ANT_FOUND "Ant missing")
endif()
message(STATUS " Build Python library: ${BUILD_PYTHON}")
MESSAGE_DEP(WITH_PYTHON "Disabled by WITH_PYTHON=OFF")
MESSAGE_DEP(PYTHONLIBS_FOUND "Python libraries missing")
message(STATUS " Build Haskell library: ${BUILD_HASKELL}")
MESSAGE_DEP(WITH_HASKELL "Disabled by WITH_HASKELL=OFF")
MESSAGE_DEP(GHC_FOUND "GHC missing")
MESSAGE_DEP(CABAL_FOUND "Cabal missing")
message(STATUS " Library features:")
message(STATUS " Build shared libraries: ${WITH_SHARED_LIB}")
message(STATUS " Build static libraries: ${WITH_STATIC_LIB}")
message(STATUS " Build with ZLIB support: ${WITH_ZLIB}")
message(STATUS " Build with libevent support: ${WITH_LIBEVENT}")
message(STATUS " Build with Qt4 support: ${WITH_QT4}")
message(STATUS " Build with Qt5 support: ${WITH_QT5}")
message(STATUS " Build with OpenSSL support: ${WITH_OPENSSL}")
message(STATUS " Build with Boost thread support: ${WITH_BOOSTTHREADS}")
message(STATUS " Build with C++ std::thread support: ${WITH_STDTHREADS}")
message(STATUS " Build with Boost static link library: ${WITH_BOOST_STATIC}")
if(MSVC)
message(STATUS " - Enabled for Visual C++")
endif()
message(STATUS "----------------------------------------------------------")
endmacro(PRINT_CONFIG_SUMMARY)
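# Hedged example (illustrative, not part of the original file): a configure
# line exercising these options might look like
#   cmake -DBUILD_COMPILER=ON -DWITH_PYTHON=OFF -DWITH_BOOST_STATIC=ON ..
# after which PRINT_CONFIG_SUMMARY reports each option and, via MESSAGE_DEP,
# why any disabled feature was turned off.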
@ -1,106 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Visual Studio specific options
if(MSVC)
# For Visual Studio, the library naming is as follows:
# Dynamic libraries:
# - thrift.dll for release library
# - thriftd.dll for debug library
#
# Static libraries:
# - thriftmd.lib for /MD release build
# - thriftmt.lib for /MT release build
#
# - thriftmdd.lib for /MD debug build
# - thriftmtd.lib for /MT debug build
#
# the same holds for other libraries like libthriftz etc.
# For Debug build types, append a "d" to the library names.
set(CMAKE_DEBUG_POSTFIX "d" CACHE STRING "Set debug library postfix" FORCE)
set(CMAKE_RELEASE_POSTFIX "" CACHE STRING "Set release library postfix" FORCE)
set(CMAKE_RELWITHDEBINFO_POSTFIX "" CACHE STRING "Set release library postfix" FORCE)
# Build using /MT option instead of /MD if the WITH_MT options is set
if(WITH_MT)
set(CompilerFlags
CMAKE_CXX_FLAGS
CMAKE_CXX_FLAGS_DEBUG
CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_RELWITHDEBINFO
CMAKE_C_FLAGS
CMAKE_C_FLAGS_DEBUG
CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_RELWITHDEBINFO
)
foreach(CompilerFlag ${CompilerFlags})
string(REPLACE "/MD" "/MT" ${CompilerFlag} "${${CompilerFlag}}")
endforeach()
set(STATIC_POSTFIX "mt" CACHE STRING "Set static library postfix" FORCE)
else(WITH_MT)
set(STATIC_POSTFIX "md" CACHE STRING "Set static library postfix" FORCE)
endif(WITH_MT)
# Disable Windows.h definition of macros for min and max
add_definitions("-DNOMINMAX")
# Disable boost auto linking pragmas - cmake includes the right files
add_definitions("-DBOOST_ALL_NO_LIB")
# Windows build does not know how to make a shared library yet
# as there are no __declspec(dllexport) or exports files in the project.
if (WITH_SHARED_LIB)
message (FATAL_ERROR "Windows build does not support shared library output yet, please set -DWITH_SHARED_LIB=off")
endif()
elseif(UNIX)
find_program( MEMORYCHECK_COMMAND valgrind )
set( MEMORYCHECK_COMMAND_OPTIONS "--gen-suppressions=all --leak-check=full" )
set( MEMORYCHECK_SUPPRESSIONS_FILE "${PROJECT_SOURCE_DIR}/test/valgrind.suppress" )
endif()
# WITH_*THREADS selects which threading library to use
if(WITH_BOOSTTHREADS)
add_definitions("-DUSE_BOOST_THREAD=1")
elseif(WITH_STDTHREADS)
add_definitions("-DUSE_STD_THREAD=1")
endif()
# GCC and Clang.
if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
# FIXME -pedantic can not be used at the moment because of: https://issues.apache.org/jira/browse/THRIFT-2784
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -O2 -Wall -Wextra -pedantic")
# FIXME enabling c++11 breaks some Linux builds on Travis by triggering a g++ bug, see
# https://travis-ci.org/apache/thrift/jobs/58017022
# on the other hand, both MacOSX and FreeBSD need c++11
if(${CMAKE_SYSTEM_NAME} MATCHES "Darwin" OR ${CMAKE_SYSTEM_NAME} MATCHES "FreeBSD")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -O2 -Wall -Wextra")
endif()
endif()
# If gcc older than 4.8 is detected, disable new compiler plug-in support (see THRIFT-3937)
set(PLUGIN_COMPILER_NOT_TOO_OLD ON) # simplifies messaging in DefineOptions summary
if (CMAKE_CXX_COMPILER_ID MATCHES "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "4.8" AND WITH_PLUGIN)
message(STATUS "Disabling compiler plug-in support to work with older gcc compiler")
set(WITH_PLUGIN OFF)
set(PLUGIN_COMPILER_NOT_TOO_OLD OFF)
endif()
@ -1,30 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# ANT_FOUND - system has Ant
# Ant_EXECUTABLE - the Ant executable
#
# It will search the environment variable ANT_HOME if it is set
include(FindPackageHandleStandardArgs)
find_program(Ant_EXECUTABLE NAMES ant PATHS $ENV{ANT_HOME}/bin)
find_package_handle_standard_args(Ant DEFAULT_MSG Ant_EXECUTABLE)
mark_as_advanced(Ant_EXECUTABLE)
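# Illustrative usage (not part of the original module):
#   find_package(Ant QUIET)
#   if(ANT_FOUND)
#     message(STATUS "Ant found at ${Ant_EXECUTABLE}")
#   endif()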
@ -1,30 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# CABAL_FOUND - system has Cabal
# CABAL - the Cabal executable
#
# It will search the environment variable CABAL_HOME if it is set
include(FindPackageHandleStandardArgs)
find_program(CABAL NAMES cabal PATHS $ENV{HOME}/.cabal/bin $ENV{CABAL_HOME}/bin)
find_package_handle_standard_args(CABAL DEFAULT_MSG CABAL)

@ -1,36 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# GHC_FOUND - system has GHC
# GHC - the GHC executable
# RUN_HASKELL_FOUND - system has runhaskell
# RUN_HASKELL - the runhaskell executable
#
# It will search the environment variable GHC_HOME if it is set
include(FindPackageHandleStandardArgs)
find_program(GHC NAMES ghc PATHS $ENV{GHC_HOME}/bin)
find_package_handle_standard_args(GHC DEFAULT_MSG GHC)
mark_as_advanced(GHC)
find_program(RUN_HASKELL NAMES runhaskell PATHS $ENV{GHC_HOME}/bin)
find_package_handle_standard_args(RUN_HASKELL DEFAULT_MSG RUN_HASKELL)
mark_as_advanced(RUN_HASKELL)
@ -1,122 +0,0 @@
# - Try to find Glib and its components (gio, gobject etc)
# Once done, this will define
#
# GLIB_FOUND - system has Glib
# GLIB_INCLUDE_DIRS - the Glib include directories
# GLIB_LIBRARIES - link these to use Glib
#
# Optionally, the COMPONENTS keyword can be passed to find_package()
# and Glib components can be looked for. Currently, the following
# components can be used, and they define the following variables if
# found:
#
# gio: GLIB_GIO_LIBRARIES
# gobject: GLIB_GOBJECT_LIBRARIES
# gmodule: GLIB_GMODULE_LIBRARIES
# gthread: GLIB_GTHREAD_LIBRARIES
#
# Note that the respective _INCLUDE_DIR variables are not set, since
# all headers are in the same directory as GLIB_INCLUDE_DIRS.
#
# Copyright (C) 2012 Raphael Kubo da Costa <rakuco@webkit.org>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER AND ITS CONTRIBUTORS ``AS
# IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR ITS
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
find_package(PkgConfig)
pkg_check_modules(PC_GLIB QUIET glib-2.0)
find_library(GLIB_LIBRARIES
NAMES glib-2.0
HINTS ${PC_GLIB_LIBDIR}
${PC_GLIB_LIBRARY_DIRS}
)
# Files in glib's main include path may include glibconfig.h, which,
# for some odd reason, is normally in $LIBDIR/glib-2.0/include.
get_filename_component(_GLIB_LIBRARY_DIR ${GLIB_LIBRARIES} PATH)
find_path(GLIBCONFIG_INCLUDE_DIR
NAMES glibconfig.h
HINTS ${PC_LIBDIR} ${PC_LIBRARY_DIRS} ${_GLIB_LIBRARY_DIR}
${PC_GLIB_INCLUDEDIR} ${PC_GLIB_INCLUDE_DIRS}
PATH_SUFFIXES glib-2.0/include
)
find_path(GLIB_INCLUDE_DIR
    NAMES glib.h
    HINTS ${PC_GLIB_INCLUDEDIR}
          ${PC_GLIB_INCLUDE_DIRS}
    PATH_SUFFIXES glib-2.0
)
set(GLIB_INCLUDE_DIRS ${GLIB_INCLUDE_DIR} ${GLIBCONFIG_INCLUDE_DIR})
if(GLIBCONFIG_INCLUDE_DIR)
    # Version detection
    file(READ "${GLIBCONFIG_INCLUDE_DIR}/glibconfig.h" GLIBCONFIG_H_CONTENTS)
    string(REGEX MATCH "#define GLIB_MAJOR_VERSION ([0-9]+)" _dummy "${GLIBCONFIG_H_CONTENTS}")
    set(GLIB_VERSION_MAJOR "${CMAKE_MATCH_1}")
    string(REGEX MATCH "#define GLIB_MINOR_VERSION ([0-9]+)" _dummy "${GLIBCONFIG_H_CONTENTS}")
    set(GLIB_VERSION_MINOR "${CMAKE_MATCH_1}")
    string(REGEX MATCH "#define GLIB_MICRO_VERSION ([0-9]+)" _dummy "${GLIBCONFIG_H_CONTENTS}")
    set(GLIB_VERSION_MICRO "${CMAKE_MATCH_1}")
    set(GLIB_VERSION "${GLIB_VERSION_MAJOR}.${GLIB_VERSION_MINOR}.${GLIB_VERSION_MICRO}")
endif()
# Additional Glib components. We only look for libraries, as not all of them
# have corresponding headers and all headers are installed alongside the main
# glib ones.
foreach (_component ${GLIB_FIND_COMPONENTS})
    if (${_component} STREQUAL "gio")
        find_library(GLIB_GIO_LIBRARIES NAMES gio-2.0 HINTS ${_GLIB_LIBRARY_DIR})
        set(ADDITIONAL_REQUIRED_VARS ${ADDITIONAL_REQUIRED_VARS} GLIB_GIO_LIBRARIES)
    elseif (${_component} STREQUAL "gobject")
        find_library(GLIB_GOBJECT_LIBRARIES NAMES gobject-2.0 HINTS ${_GLIB_LIBRARY_DIR})
        set(ADDITIONAL_REQUIRED_VARS ${ADDITIONAL_REQUIRED_VARS} GLIB_GOBJECT_LIBRARIES)
    elseif (${_component} STREQUAL "gmodule")
        find_library(GLIB_GMODULE_LIBRARIES NAMES gmodule-2.0 HINTS ${_GLIB_LIBRARY_DIR})
        set(ADDITIONAL_REQUIRED_VARS ${ADDITIONAL_REQUIRED_VARS} GLIB_GMODULE_LIBRARIES)
    elseif (${_component} STREQUAL "gthread")
        find_library(GLIB_GTHREAD_LIBRARIES NAMES gthread-2.0 HINTS ${_GLIB_LIBRARY_DIR})
        set(ADDITIONAL_REQUIRED_VARS ${ADDITIONAL_REQUIRED_VARS} GLIB_GTHREAD_LIBRARIES)
    elseif (${_component} STREQUAL "gio-unix")
        # gio-unix is compiled as part of the gio library, but the include paths
        # are separate from the shared glib ones. Since this is currently only used
        # by WebKitGTK+ we don't go to extraordinary measures beyond pkg-config.
        pkg_check_modules(GIO_UNIX QUIET gio-unix-2.0)
    endif ()
endforeach ()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(GLIB REQUIRED_VARS GLIB_INCLUDE_DIRS GLIB_LIBRARIES ${ADDITIONAL_REQUIRED_VARS}
                                  VERSION_VAR GLIB_VERSION)
mark_as_advanced(
    GLIBCONFIG_INCLUDE_DIR
    GLIB_GIO_LIBRARIES
    GLIB_GIO_UNIX_LIBRARIES
    GLIB_GMODULE_LIBRARIES
    GLIB_GOBJECT_LIBRARIES
    GLIB_GTHREAD_LIBRARIES
    GLIB_INCLUDE_DIR
    GLIB_INCLUDE_DIRS
    GLIB_LIBRARIES
)
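
A consumer of this Glib module would typically request the optional components through find_package() and link the resulting variables, along the lines of the sketch below. The glib_demo target and main.c source are illustrative assumptions, as is the cmake/ module directory.

# Hypothetical consumer of FindGLIB.cmake (illustrative names only).
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
find_package(GLIB REQUIRED COMPONENTS gio gobject)

add_executable(glib_demo main.c)
target_include_directories(glib_demo PRIVATE ${GLIB_INCLUDE_DIRS})
target_link_libraries(glib_demo PRIVATE
    ${GLIB_LIBRARIES}
    ${GLIB_GIO_LIBRARIES}
    ${GLIB_GOBJECT_LIBRARIES})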


@@ -1,30 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# GRADLE_FOUND - system has Gradle
# GRADLE_EXECUTABLE - the Gradle executable
#
# It will search the environment variable GRADLE_HOME if it is set
include(FindPackageHandleStandardArgs)
find_program(GRADLE_EXECUTABLE NAMES gradle PATHS $ENV{GRADLE_HOME}/bin NO_CMAKE_FIND_ROOT_PATH)
find_package_handle_standard_args(Gradle DEFAULT_MSG GRADLE_EXECUTABLE)
mark_as_advanced(GRADLE_EXECUTABLE)
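
Since this module only locates the executable, a typical consumer drives Gradle through a custom target, roughly as follows. The target name, Gradle task, and working directory are illustrative assumptions.

# Hypothetical consumer of FindGradle.cmake (illustrative names only).
find_package(Gradle REQUIRED)
add_custom_target(java_library ALL
    COMMAND ${GRADLE_EXECUTABLE} build
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/lib/java"
    COMMENT "Building the Java library with Gradle")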


@@ -1,41 +0,0 @@
# find LibEvent
# an event notification library (http://libevent.org/)
#
# Usage:
# LIBEVENT_INCLUDE_DIRS, where to find LibEvent headers
# LIBEVENT_LIBRARIES, LibEvent libraries
# Libevent_FOUND, If false, do not try to use libevent
set(LIBEVENT_ROOT CACHE PATH "Root directory of libevent installation")
set(LibEvent_EXTRA_PREFIXES /usr/local /opt/local "$ENV{HOME}" ${LIBEVENT_ROOT})
foreach(prefix ${LibEvent_EXTRA_PREFIXES})
    list(APPEND LibEvent_INCLUDE_PATHS "${prefix}/include")
    list(APPEND LibEvent_LIBRARIES_PATHS "${prefix}/lib")
endforeach()
find_path(LIBEVENT_INCLUDE_DIRS event.h PATHS ${LibEvent_INCLUDE_PATHS})
# "lib" prefix is needed on Windows
find_library(LIBEVENT_LIBRARIES NAMES event libevent PATHS ${LibEvent_LIBRARIES_PATHS})
if (LIBEVENT_LIBRARIES AND LIBEVENT_INCLUDE_DIRS)
    set(Libevent_FOUND TRUE)
else ()
    set(Libevent_FOUND FALSE)
endif ()
if (Libevent_FOUND)
    if (NOT LibEvent_FIND_QUIETLY)
        message(STATUS "Found libevent: ${LIBEVENT_LIBRARIES}")
    endif ()
else ()
    if (LibEvent_FIND_REQUIRED)
        message(FATAL_ERROR "Could NOT find libevent.")
    endif ()
    message(STATUS "libevent NOT found.")
endif ()
mark_as_advanced(
    LIBEVENT_LIBRARIES
    LIBEVENT_INCLUDE_DIRS
)
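
Usage mirrors the other modules; the LIBEVENT_ROOT cache variable can be set on the command line to point at a non-standard installation (e.g. cmake -DLIBEVENT_ROOT=/opt/libevent ..). A minimal consumer sketch, with illustrative target and source names:

# Hypothetical consumer of FindLibEvent.cmake (illustrative names only).
find_package(LibEvent REQUIRED)
add_executable(event_demo main.c)
target_include_directories(event_demo PRIVATE ${LIBEVENT_INCLUDE_DIRS})
target_link_libraries(event_demo PRIVATE ${LIBEVENT_LIBRARIES})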

Some files were not shown because too many files have changed in this diff.