mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 21:26:14 +08:00

Compare commits

..

266 Commits

Author SHA1 Message Date
Ken Hibino
32d3f329b9 v0.17.1 2021-04-04 12:51:00 -07:00
Ken Hibino
544c301a8b Fix bug in RDB.memoryUsage 2021-04-04 12:49:19 -07:00
Ken Hibino
8b997d2fab v0.17.0 2021-03-24 16:51:59 -07:00
Ken Hibino
901105a8d7 Add dial, read, write timeout options to RedisConnOpt 2021-03-24 16:49:04 -07:00
Ken Hibino
aaa3f1d4fd v0.16.1 2021-03-20 06:27:03 -07:00
disc
4722ca2d3d Replaced blocking KEYS XXX:* command to non-blocking SCAN XXX:*
More details: https://redis.io/commands/KEYS
2021-03-20 06:24:08 -07:00
Ken Hibino
6a9d9fd717 v0.16.0 2021-03-10 20:39:46 -08:00
Ken Hibino
de28c1ea19 Add Unregister method to Scheduler 2021-03-10 20:38:44 -08:00
Ken Hibino
f618f5b1f5 Add benchmark tests for rdb package 2021-03-07 16:27:14 -08:00
Ken Hibino
aa936466b3 Minor fix 2021-03-07 16:27:14 -08:00
Ken Hibino
5d1ec70544 Run CI build with go1.16 2021-02-25 09:31:17 -08:00
Ken Hibino
d1d3be9b00 Add Web UI section in README 2021-02-01 17:01:04 -08:00
Ken Hibino
bc77f6fe14 v0.15.0 2021-01-31 06:11:17 -08:00
Ken Hibino
efe197a47b Use db13 for inspeq package testing 2021-01-31 06:09:40 -08:00
Ken Hibino
97b5516183 Update RedisConnOpt interface 2021-01-31 06:09:40 -08:00
Ken Hibino
8eafa03ca7 Fix doc indentation 2021-01-31 06:09:40 -08:00
Ken Hibino
430b01c9aa Fix CLI build 2021-01-31 06:09:40 -08:00
Ken Hibino
14c381dc40 Update documentation for inspeq package 2021-01-31 06:09:40 -08:00
Ken Hibino
e13122723a Move all inspector related code to subpackage inspeq 2021-01-31 06:09:40 -08:00
Ken Hibino
eba7c4e085 Record deadline within WorkerInfo 2021-01-31 06:09:40 -08:00
Ken Hibino
bfde0b6283 Add Retry and LastError fields to inspector tasks 2021-01-31 06:09:40 -08:00
Ken Hibino
afde6a7266 Add MemoryUsage field to QueueStats 2021-01-31 06:09:40 -08:00
Ken Hibino
6529a1e0b1 Fix scheduler
* Delete scheduler history data when scheduler stops

* Fix history trimming bug
2021-01-31 06:09:40 -08:00
Ken Hibino
c9a6ab8ae1 Support delete and archive actions on PendingTask
* Add `DeleteAllPendingTasks`, `ArchiveAllPendingTasks` to `Inspector`

* `DeleteTaskByKey` and `ArchiveTaskByKey` now supports deleting/archiving PendingTask

* Updated `asynq task` command with support for deleting/archiving pending tasks
2021-01-31 06:09:40 -08:00
Ken Hibino
557c1a5044 Remove Travis CI files 2021-01-29 23:01:20 -08:00
Ken Hibino
0236eb9a1c Add benchstat workflow 2021-01-29 22:59:28 -08:00
Ken Hibino
3c2b2cf4a3 Update build status badge 2021-01-29 15:03:27 -08:00
Ken Hibino
04df71198d Create go github action 2021-01-29 14:52:55 -08:00
Ken Hibino
2884044e75 v0.14.1 2021-01-19 06:22:54 -08:00
Ken Hibino
3719fad396 Update asynq version in go.mod for toolings 2021-01-19 06:20:39 -08:00
Ken Hibino
42c7ac0746 v0.14.0 2021-01-14 06:49:36 -08:00
Ken Hibino
d331ff055d Minor doc fixes 2021-01-14 06:43:44 -08:00
Ken Hibino
ccb682853e Export DefaultRetryDelayFunc 2021-01-14 06:43:44 -08:00
Ken Hibino
7c3ad9e45c Update CHANGELOG 2021-01-14 06:43:44 -08:00
Ken Hibino
ea23db4f6b Update migrate command to move all dead tasks to the new archived zset 2021-01-14 06:43:44 -08:00
Ken Hibino
00a25ca570 Rename DeadTask to ArchivedTask and action "kill" to "archive" 2021-01-14 06:43:44 -08:00
Ken Hibino
7235041128 Add SkipRetry error to be used as a return value from Handler 2021-01-14 06:43:44 -08:00
Ken Hibino
a150d18ed7 Include file and line number info in the error generated from a panic 2021-01-14 06:43:44 -08:00
Ken Hibino
0712e90f23 Print stack track when recovering from a panic in processor 2021-01-14 06:43:44 -08:00
Ken Hibino
c5100a9c23 Add a method to list running servers to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
196d66f221 Fix ListSchedulerEnqueueEvents to list recent events first 2021-01-14 06:43:44 -08:00
Ken Hibino
38509e309f Update cron history command to accept pagination options 2021-01-14 06:43:44 -08:00
Ken Hibino
f4dd8fe962 Add ListScheduelerEnqueueEvents to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
c06e9de97d Add CancelActiveTask method to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
52d536a8f5 Update changelog 2021-01-14 06:43:44 -08:00
Ken Hibino
f9c0673116 Add SchedulerEntries method to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
b604d25937 Add helper function to parse Option string 2021-01-14 06:43:44 -08:00
Ken Hibino
dfdf530a24 Fix cron history command usage string 2021-01-14 06:43:44 -08:00
Ken Hibino
e9239260ae Add DeleteQueue method to Inspector
- Added ErrQueueNotFound and ErrQueueNotEmpty type to indicate the kind
  of an error returned from the method.
2021-01-14 06:43:44 -08:00
Bojan Zivanovic
8f9d5a3352 When a scheduler enqueues a task, log to DEBUG, not INFO. Fixes #223. 2021-01-13 15:49:56 -08:00
MinJae Kwon
c4dc993241 fix: resolve go vet lint 2020-12-20 06:09:51 -08:00
MinJae Kwon
37dfd746d4 fix: syntax error in readme example 2020-12-17 06:05:16 -08:00
Ken Hibino
8d6e4167ab Fix a typo in readme 2020-11-25 06:11:55 -08:00
Ken Hibino
476862dd7b v0.13.1 2020-11-22 12:26:52 -08:00
Ken Hibino
dcd873fa2a fix: Wait for specified time duration before shutdown 2020-11-22 12:25:27 -08:00
strobus
2604bb2192 add tls support to command line tool 2020-10-14 15:13:05 -07:00
Ken Hibino
942345ee80 v0.13.0 2020-10-13 06:33:47 -07:00
Ken Hibino
1f059eeee1 Update docs for periodic tasks feature 2020-10-13 06:31:47 -07:00
Ken Hibino
4ae73abdaa Minor update to asynq cron command 2020-10-13 06:31:47 -07:00
Ken Hibino
96b2318300 Add EnqueueErrorHandler option to SchedulerOpts 2020-10-13 06:31:47 -07:00
Ken Hibino
8312515e64 Update Option interface
- Added `String()`, `Type()`, and `Value()` methods to the interface to
  aid with debugging and error handling.
2020-10-13 06:31:47 -07:00
Ken Hibino
50e7f38365 Add Scheduler
- Renamed previously called scheduler to forwarder to resolve name
  conflicts
2020-10-13 06:31:47 -07:00
Ken Hibino
fadcae76d6 Add String and MarshalJSON methods to Payload type 2020-09-20 07:33:23 -07:00
Ken Hibino
a2d4ead989 Fix comments in Config 2020-09-14 21:48:05 -07:00
Ken Hibino
82b6828f43 Replace benchcmp with benchstat 2020-09-14 06:59:55 -07:00
Ken Hibino
3114987428 v0.12.0 2020-09-12 13:34:27 -07:00
Ken Hibino
1ee3b10104 Update changelog 2020-09-12 12:59:03 -07:00
Ken Hibino
6d720d6a05 Update demo.gif for CLI demo 2020-09-12 12:59:03 -07:00
Ken Hibino
3e6981170d Use color package to bold fonts in CLI output 2020-09-12 12:59:03 -07:00
Ken Hibino
a9aa480551 Update migrate command 2020-09-12 12:59:03 -07:00
Ken Hibino
9d41de795a Mention about testing using redis cluster in CONTRIBUTING.md 2020-09-12 12:59:03 -07:00
Ken Hibino
c43fb21a0a Minor test updates 2020-09-12 12:59:03 -07:00
Ken Hibino
a293efcdab Add Close to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
69d7ec725a Close redis client after each test run 2020-09-12 12:59:03 -07:00
Ken Hibino
450a9aa1e2 Add MaxRedirects field in RedisClusterClientOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
6e294a7013 Add Username field to RedisConnOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
c26b7469bd Display cluster info in stats command when --cluster flag is passed 2020-09-12 12:59:03 -07:00
Ken Hibino
818c2d6f35 Add GetQueueName helper to extract queue name from context 2020-09-12 12:59:03 -07:00
Ken Hibino
e09870a08a Update package documentation 2020-09-12 12:59:03 -07:00
Ken Hibino
ac3d5b126a Update README 2020-09-12 12:59:03 -07:00
Ken Hibino
29e542e591 Rename Enqueue methods in Inspector to Run 2020-09-12 12:59:03 -07:00
Ken Hibino
a891ce5568 Rename InProgress to Active 2020-09-12 12:59:03 -07:00
Ken Hibino
ebe3c4083f Rename NextEnqueueAt to NextProcessAt 2020-09-12 12:59:03 -07:00
Ken Hibino
c8c47fcbf0 Rename Enqueued to Pending 2020-09-12 12:59:03 -07:00
Ken Hibino
cca680a7fd Change Client.Enqueue to take ProcessAt and ProcessIn as Option 2020-09-12 12:59:03 -07:00
Ken Hibino
8076b5ae50 Use different redis db number for rdb package tests 2020-09-12 12:59:03 -07:00
Ken Hibino
a42c174dae Display cluster keyslot and nodes in queueList command 2020-09-12 12:59:03 -07:00
Ken Hibino
a88325cb96 Add ClusterNodes and ClusterKeySlot in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
eb739a0258 Fix flaky test 2020-09-12 12:59:03 -07:00
Ken Hibino
a9c31553b8 Add redis-cluster support in asynq CLI 2020-09-12 12:59:03 -07:00
Ken Hibino
dab8295883 Validate queue name in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
131ac823fd Return error if queue name is empty when enqueueing 2020-09-12 12:59:03 -07:00
Ken Hibino
4897dba397 Upgrade redis client lib to v7.4.0 2020-09-12 12:59:03 -07:00
Ken Hibino
6b96459881 Add test flags to run tests using redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
572eb338d5 Fix flaky ProcessorRetry test 2020-09-12 12:59:03 -07:00
Ken Hibino
27f4027447 Add RedisClusterClientOpt to connect to redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
ee1afd12f5 Fix done lua script
If UniqueKey is an empty string, do not provide the key to Lua script
because that will cause CROSSSLOT error in redis cluster (since it
doesn't have any hash tag).
2020-09-12 12:59:03 -07:00
Ken Hibino
3ac548e97c Fix dequeue Lua script to use a single hash tag 2020-09-12 12:59:03 -07:00
Ken Hibino
f38f94b947 Restructure CLI commands with subcommands 2020-09-12 12:59:03 -07:00
Ken Hibino
d6f389e63f Add Queues method to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
118ef27bf2 Update RemoveQueue in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
fad0696828 Fix errors in inspector tests 2020-09-12 12:59:03 -07:00
Ken Hibino
4037b41479 Fix client tests 2020-09-12 12:59:03 -07:00
Ken Hibino
96f23d88cd Add more processor tests 2020-09-12 12:59:03 -07:00
Ken Hibino
83bdca5220 Fix test build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2f226dfb84 Update ListServers and ListWorkers methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
3f26122ac0 Fix more build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2a18181501 Fix inspector build error 2020-09-12 12:59:03 -07:00
Ken Hibino
aa2676bb57 Update Broker interface 2020-09-12 12:59:03 -07:00
Ken Hibino
9348a62691 Update Inspector API 2020-09-12 12:59:03 -07:00
Ken Hibino
f59de9ac56 Update all delete methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
996a6c0ead Update all kill methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
47e9ba4eba Update enqueue methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
dbf140a767 Update all list methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
5f82b4b365 Update HistoricalStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
44a3d177f0 Update Pause and Unpause methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
24b13bd865 Update CurrentStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
d25090c669 Add AllQueues method to RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
b5caefd663 Remove stale benchmark test 2020-09-12 12:59:03 -07:00
Ken Hibino
becd26479b Update WriteServerState and ClearServerState in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
4b81b91d3e Minor fix 2020-09-12 12:59:03 -07:00
Ken Hibino
8e23b865e9 Update recoverer 2020-09-12 12:59:03 -07:00
Ken Hibino
a873d488ee Update ListDeadlineExceeded in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
e0a8f1252a Update scheduler to check and enqueue for only the specified queues. 2020-09-12 12:59:03 -07:00
Ken Hibino
650d7fdbe9 Update CheckAndEnqueue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f6d504939e Update Requeue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
74f08795f8 Update Kill method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
35b2b1782e Update Retry method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f63dcce0c0 Update Done method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
565f86ee4f Update Dequeue command in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
94aa878060 Update Enqueue and Schedule commands in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
50b6034bf9 Move unique key generator function to base 2020-09-12 12:59:03 -07:00
Ken Hibino
154113d0d0 Update base package to generate redis keys with hashtag 2020-09-12 12:59:03 -07:00
Ken Hibino
669c7995c4 Run CI builds using go v1.15.x 2020-09-02 06:34:58 -07:00
Ken Hibino
6d6a301379 v0.11.0 2020-07-28 22:46:41 -07:00
Ken Hibino
53f9475582 Update changelog 2020-07-28 22:45:57 -07:00
Ken Hibino
e8fdbc5a72 Fix history command 2020-07-28 22:45:57 -07:00
Ken Hibino
5f06c308f0 Add Pause and Unpause queue methods to Inspector 2020-07-28 22:45:57 -07:00
Ken Hibino
a913e6d73f Add healthchecker to check broker connection 2020-07-28 22:45:57 -07:00
Ken Hibino
6978e93080 Fix flaky test 2020-07-28 22:45:57 -07:00
Ken Hibino
92d77bbc6e Minor comment fix 2020-07-28 22:45:57 -07:00
Ken Hibino
a28f61f313 Add Inspector type 2020-07-28 22:45:57 -07:00
Ken Hibino
9bd3d8e19e v0.10.0 2020-07-06 05:53:56 -07:00
Ken Hibino
7382e2aeb8 Do not start worker goroutine for task already exceeded its deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
007fac8055 Invoke error handler when ctx.Done channel is closed 2020-07-06 05:48:31 -07:00
Ken Hibino
8d43fe407a Change ErrorHandler function signature 2020-07-06 05:48:31 -07:00
Ken Hibino
34b90ecc8a Return Result struct to caller of Enqueue 2020-07-06 05:48:31 -07:00
Ken Hibino
8b60e6a268 Replace github.com/rs/xid with github.com/google/uuid 2020-07-06 05:48:31 -07:00
Ken Hibino
486dcd799b Add version command to CLI 2020-07-06 05:48:31 -07:00
Ken Hibino
195f4603bb Add migrate command to CLI
The command converts all messages in redis to be compatible for asynq
v0.10.0
2020-07-06 05:48:31 -07:00
Ken Hibino
2e2c9b9f6b Update docs 2020-07-06 05:48:31 -07:00
Ken Hibino
199bf4d66a Minor code cleanup 2020-07-06 05:48:31 -07:00
Ken Hibino
7e942ec241 Use int64 type for Timeout and Deadline in TaskMessage 2020-07-06 05:48:31 -07:00
Ken Hibino
379da8f7a2 Clean up processor test 2020-07-06 05:48:31 -07:00
Ken Hibino
feee87adda Add recoverer 2020-07-06 05:48:31 -07:00
Ken Hibino
7657f560ec Add RDB.ListDeadlineExceeded 2020-07-06 05:48:31 -07:00
Ken Hibino
7c7de0d8e0 Fix processor 2020-07-06 05:48:31 -07:00
Ken Hibino
83f1e20d74 Add deadline to syncRequest
- syncer will drop a request if its deadline has been exceeded
2020-07-06 05:48:31 -07:00
Ken Hibino
4e8ac151ae Update processor to adapt for deadlines set change
- Processor dequeues tasks only when it's available to process
- Processor retries a task when its context's Done channel is closed
2020-07-06 05:48:31 -07:00
Ken Hibino
08b71672aa Update RDB.Requeue to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
92af00f9fd Update RDB.Dequeue to return deadline as time.Time 2020-07-06 05:48:31 -07:00
Ken Hibino
113451ce6a Update RDB.Kill to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9cd9f3d6b4 Update RDB.Retry to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
7b9119c703 Update RDB.Done to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9b05dea394 Update RDB.Dequeue to return message and deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
6cc5bafaba Add task message to deadlines set on dequeue
Updated dequeueCmd to decode the message and compute its deadline and add
the message to the Deadline set.
2020-07-06 05:48:31 -07:00
Ken Hibino
716d3d987e Use default timeout of 30mins if both timeout and deadline are not
provided
2020-07-06 05:48:31 -07:00
Ken Hibino
0527b93432 Change TaskMessage Timeout and Deadline to int
* This change breaks existing tasks in Redis
2020-07-06 05:48:31 -07:00
Ken Hibino
5dddc35d7c Add redis key for deadlines in base package 2020-07-06 05:48:31 -07:00
Ken Hibino
4e5f596910 Fix Client.Enqueue to always call enqueue
Closes https://github.com/hibiken/asynq/issues/158
2020-06-14 05:54:18 -07:00
Ken Hibino
8bf5917cd9 v0.9.4 2020-06-13 06:27:28 -07:00
Ken Hibino
7f30fa2bb6 Fix requeue logic in processor 2020-06-13 06:22:32 -07:00
Ken Hibino
ade6e61f51 v0.9.3 2020-06-12 06:31:42 -07:00
Ken Hibino
a2abeedaa0 Fix JSON number ovewflow issue 2020-06-12 06:29:36 -07:00
lion.zhao
81bb52b08c processor: log detail err in markAsDone func 2020-06-10 05:57:31 -07:00
Ken Hibino
bc2a7635a0 v0.9.2 2020-06-08 06:23:02 -07:00
Ken Hibino
f65d408bf9 Update docs for pause feature 2020-06-08 06:22:14 -07:00
Ken Hibino
4749b4bbfc Add benchmark test to verify client enqueue performance while server is
running
2020-06-08 06:06:18 -07:00
Ken Hibino
06c4a1c7f8 Limit the number of tasks moved by CheckAndEnqueue to prevent a long
running script
2020-06-08 06:06:18 -07:00
Ken Hibino
8af4cbad51 Fix data race in test 2020-06-08 06:06:18 -07:00
Ken Hibino
4e800a7f68 Update stats command to show queue paused status 2020-06-08 06:06:18 -07:00
Ken Hibino
d6a5c84dc6 Add pause and unpause command to CLI 2020-06-08 06:06:18 -07:00
Ken Hibino
363cfedb49 Update Dequeue operation to skip paused queues 2020-06-08 06:06:18 -07:00
Ken Hibino
4595bd41c3 Add Pause and Unpause methods to rdb 2020-06-08 06:06:18 -07:00
Ken Hibino
e236d55477 Fix cli build 2020-06-04 06:35:50 -07:00
Ken Hibino
a38f628f3b Refactor server state management 2020-05-31 06:41:19 -07:00
Ken Hibino
69ad583278 v0.9.1 2020-05-29 05:42:40 -07:00
Ken Hibino
23f46dde52 Add helper functions to extract task metadata from context 2020-05-29 05:40:42 -07:00
lihe
39188fe930 remove typo and redundant code 2020-05-22 05:11:54 -07:00
Ken Hibino
4492ed9255 Change internal constructor signatures.
Created "params" type to avoid positional arguments.
Personally it feels more explicit and reads better.
2020-05-17 13:25:24 -07:00
Ken Hibino
4e3e053989 Update readme 2020-05-16 11:00:44 -07:00
Ken Hibino
aef0775c05 v0.9.0 2020-05-16 08:02:57 -07:00
Ken Hibino
de146993d2 Add log messages around Server.Quiet 2020-05-16 08:01:39 -07:00
Ken Hibino
60cbf8dc5a Minor code cleanup 2020-05-16 08:00:35 -07:00
Ken Hibino
fb38086590 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-16 08:00:35 -07:00
Ken Hibino
cfcd19a222 Change default log level to info 2020-05-16 08:00:35 -07:00
Ken Hibino
24ee4b9693 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-16 08:00:35 -07:00
Ken Hibino
7849b395bd Update changelog 2020-05-16 08:00:35 -07:00
Ken Hibino
fa3082e5bb Change LogLevel to satisfy flag.Value interface 2020-05-16 08:00:35 -07:00
Ken Hibino
d13f7e900f Allow setting minimum log level for logger 2020-05-16 08:00:35 -07:00
Ken Hibino
b63476ddc8 Simplify Logger interface 2020-05-16 08:00:35 -07:00
Ken Hibino
210b026b01 Add log messages around Server.Quiet 2020-05-16 07:12:08 -07:00
Ken Hibino
556b2103fe Minor code cleanup 2020-05-15 08:19:35 -07:00
Ken Hibino
0289bc7a10 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-11 20:28:49 -07:00
Ken Hibino
ae942c93e5 Change default log level to info 2020-05-11 20:28:49 -07:00
Ken Hibino
0faf97f146 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-11 06:22:43 -07:00
Ken Hibino
711bfa371f Update changelog 2020-05-11 06:22:43 -07:00
Ken Hibino
73d62844e6 Change LogLevel to satisfy flag.Value interface 2020-05-11 06:22:43 -07:00
Ken Hibino
00b82904c6 Allow setting minimum log level for logger 2020-05-11 06:22:43 -07:00
Ken Hibino
a866369866 Simplify Logger interface 2020-05-11 06:22:43 -07:00
Ken Hibino
26b78136ba v0.8.3 2020-05-08 06:21:01 -07:00
t-asaka
44aad7f037 Add redis conn close func to client 2020-05-08 06:15:14 -07:00
Ken Hibino
9884d5f2fa v0.8.2 2020-05-03 16:55:34 -07:00
Ken Hibino
826f1ecff4 Update docs 2020-05-03 16:54:39 -07:00
Ken Hibino
24f2b64c6c Make sure to invoke CancelFunc in all cases 2020-05-03 15:58:23 -07:00
Ken Hibino
1c1474c55c Add tests to simulate cases where server cannot talk to redis 2020-05-02 07:05:26 -07:00
Ken Hibino
5161b9368a Clean up tests 2020-05-02 07:05:26 -07:00
Ken Hibino
0c998a8e17 Add test for signal handling 2020-04-28 06:56:05 -07:00
Ken Hibino
49160f2536 v0.8.1 2020-04-27 06:49:12 -07:00
Ken Hibino
e33d297d8e Add SetDefaultOptions method to Client 2020-04-27 06:45:13 -07:00
Ken Hibino
eb8ced6bdd Add ParseRedisURI helper function 2020-04-25 13:06:20 -07:00
Ken Hibino
789a9fd711 Update readme 2020-04-20 07:52:26 -07:00
Ken Hibino
5924cdac33 Add example tests 2020-04-19 11:36:43 -07:00
Ken Hibino
442c9275a0 v0.8.0 2020-04-19 09:08:20 -07:00
Ken Hibino
a0865df33c Change default concurrency to the number of CPUs 2020-04-19 08:51:17 -07:00
Ken Hibino
431a96a1f7 Update changelog 2020-04-19 08:51:17 -07:00
Ken Hibino
74e5582cfc Update readme 2020-04-19 08:51:17 -07:00
Ken Hibino
bf542a781c Add failure test for heartbeater 2020-04-19 08:51:17 -07:00
Ken Hibino
7c7f8e5f30 Move Broker interface to base package 2020-04-19 08:51:17 -07:00
Ken Hibino
46ab4417dd Add test to simulate situation where redis is down 2020-04-19 08:51:17 -07:00
Ken Hibino
f8a94fb839 Define broker interface 2020-04-19 08:51:17 -07:00
Ken Hibino
42453280f4 Fix subscriber to not panic when it cannot establish pubsub channel on
startup
2020-04-19 08:51:17 -07:00
Ken Hibino
4ec2dc9e47 Minor reorganization in tests 2020-04-19 08:51:17 -07:00
Ken Hibino
45933eb6b0 Reword doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
4df372b369 Allow user to configure shutdown timeout 2020-04-19 08:51:17 -07:00
Ken Hibino
c688b8f4f9 Fix test for base package 2020-04-19 08:51:17 -07:00
Ken Hibino
eef2f5f3cb Add test cases for server error 2020-04-19 08:51:17 -07:00
Ken Hibino
239ef27a6e Update doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
24da281aa7 Update docs with new APIs 2020-04-19 08:51:17 -07:00
Ken Hibino
b086e88a47 Rename ps command to servers 2020-04-19 08:51:17 -07:00
Ken Hibino
cf61911a49 Update all reference to asynqmon to Asynq CLI 2020-04-19 08:51:17 -07:00
Ken Hibino
aafd8a5b74 Rename internal ProcessState to ServerState 2020-04-19 08:51:17 -07:00
Ken Hibino
4f11e52558 Rename CLI to asynq 2020-04-19 08:51:17 -07:00
Ken Hibino
b14c73809e Refactor server state 2020-04-19 08:51:17 -07:00
Ken Hibino
779065c269 Export Start, Stop and Quiet method on Server type 2020-04-19 08:51:17 -07:00
Ken Hibino
f9842ba914 Rename Background to Server 2020-04-19 08:51:17 -07:00
Ken Hibino
022dc29701 Add overview section in readme 2020-04-11 17:08:31 -07:00
Ken Hibino
40d1889ba0 Highlight stability and compatibility section in readme 2020-04-11 09:30:00 -07:00
Ken Hibino
7e96e893fe (fix): Change log messages depending on signals being handled 2020-04-10 08:56:01 -07:00
Ken Hibino
84b0c76c8b v0.7.1 2020-04-05 14:56:06 -07:00
Ken Hibino
60b887b8e3 Fix singnal handling for different systems 2020-04-05 14:37:23 -07:00
Ken Hibino
7864bea55c Update readme
Add features section
2020-03-28 08:44:06 -07:00
Apos Spanos
47220554ca Correct typo 2020-03-23 13:47:05 -07:00
Ken Hibino
f91c05b92c v0.7.0 2020-03-22 12:04:37 -07:00
Ken Hibino
9b4438347e Fix comment 2020-03-21 11:44:26 -07:00
Ken Hibino
c33dd447ac Allow client to enqueue a task with unique option
Changes:

- Added Unique option for clients
- Require go v.13 or above (to use new errors wrapping functions)
- Fixed adding queue key to all-queues set (asynq:queues) when scheduling.
2020-03-21 11:40:40 -07:00
Ken Hibino
6df2c3ae2b v0.6.2 2020-03-15 21:02:28 -07:00
Ken Hibino
37554fd23c Update readme example code 2020-03-15 14:56:00 -07:00
Ken Hibino
77f5a38453 Refactor payload_test to reduce cyclomatic complexities 2020-03-14 12:30:42 -07:00
Ken Hibino
8d2b9d6be7 Add comments to exported types and functions from internal/log package 2020-03-13 21:04:45 -07:00
Bo-Yi Wu
1b7d557c66 fix typo 2020-03-13 20:02:26 -07:00
Bo-Yi Wu
30b68728d4 chore(lint): fix from gofmt -s 2020-03-13 20:01:39 -07:00
Ken Hibino
310d38620d Minor tweak to readme example code 2020-03-13 17:27:20 -07:00
Ken Hibino
1a53bbf21b Update changelog 2020-03-13 17:27:20 -07:00
Ken Hibino
9c79a7d507 Simplify code with gofmt -s 2020-03-13 14:24:24 -07:00
Ken Hibino
516f95edff Add Use method to better support middlewares with ServeMux 2020-03-13 14:13:17 -07:00
91 changed files with 17708 additions and 5573 deletions

.github/workflows/benchstat.yml (new file, 82 lines)

@@ -0,0 +1,82 @@
# This workflow runs benchmarks against the current branch,
# compares them to benchmarks against master,
# and uploads the results as an artifact.
name: benchstat
on: [pull_request]
jobs:
  incoming:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis
        ports:
          - 6379:6379
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.16.x
      - name: Benchmark
        run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a new.txt
      - name: Upload Benchmark
        uses: actions/upload-artifact@v2
        with:
          name: bench-incoming
          path: new.txt
  current:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis
        ports:
          - 6379:6379
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: master
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.15.x
      - name: Benchmark
        run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a old.txt
      - name: Upload Benchmark
        uses: actions/upload-artifact@v2
        with:
          name: bench-current
          path: old.txt
  benchstat:
    needs: [incoming, current]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.15.x
      - name: Install benchstat
        run: go get -u golang.org/x/perf/cmd/benchstat
      - name: Download Incoming
        uses: actions/download-artifact@v2
        with:
          name: bench-incoming
      - name: Download Current
        uses: actions/download-artifact@v2
        with:
          name: bench-current
      - name: Benchstat Results
        run: benchstat old.txt new.txt | tee -a benchstat.txt
      - name: Upload benchstat results
        uses: actions/upload-artifact@v2
        with:
          name: benchstat
          path: benchstat.txt

.github/workflows/build.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: build
on: [push, pull_request]
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest]
        go-version: [1.13.x, 1.14.x, 1.15.x, 1.16.x]
    runs-on: ${{ matrix.os }}
    services:
      redis:
        image: redis
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go-version }}
      - name: Build
        run: go build -v ./...
      - name: Test
        run: go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
      - name: Benchmark Test
        run: go test -run=^$ -bench=. -loglevel=debug ./...
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1

.gitignore (diff: 6 lines)

@@ -15,7 +15,7 @@
/examples
# Ignore command binary
-/tools/asynqmon/asynqmon
+/tools/asynq/asynq
-# Ignore asynqmon config file
+# Ignore asynq config file
-.asynqmon.*
+.asynq.*

.travis.yml (deleted)

@@ -1,14 +0,0 @@
language: go
go_import_path: github.com/hibiken/asynq
git:
  depth: 1
env:
  - GO111MODULE=on # go modules are the default
go: [1.12.x, 1.13.x, 1.14.x]
script:
  - go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
services:
  - redis-server
after_success:
  - bash ./.travis/benchcmp.sh
  - bash <(curl -s https://codecov.io/bash)

.travis/benchcmp.sh (deleted)

@@ -1,15 +0,0 @@
if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
REMOTE_URL="$(git config --get remote.origin.url)";
cd ${TRAVIS_BUILD_DIR}/.. && \
git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
cd "${TRAVIS_REPO_SLUG}-bench" && \
# Benchmark master
git checkout master && \
go test -run=XXX -bench=. ./... > master.txt && \
# Benchmark feature branch
git checkout ${TRAVIS_COMMIT} && \
go test -run=XXX -bench=. ./... > feature.txt && \
go get -u golang.org/x/tools/cmd/benchcmp && \
# compare two benchmarks
benchcmp master.txt feature.txt;
fi

CHANGELOG.md

@@ -7,6 +7,280 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.17.1] - 2021-04-04
### Fixed
- Fix bug in internal `RDB.memoryUsage` method.
## [0.17.0] - 2021-03-24
### Added
- `DialTimeout`, `ReadTimeout`, and `WriteTimeout` options are added to `RedisConnOpt`.
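For illustration, the new options slot into `RedisClientOpt` like this (a sketch with arbitrary timeout values; when the fields are left at zero, the defaults described in the `asynq.go` diff further below apply):

```go
package main

import (
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	redisOpt := asynq.RedisClientOpt{
		Addr:         "127.0.0.1:6379",
		DialTimeout:  5 * time.Second, // timeout for establishing new connections
		ReadTimeout:  3 * time.Second, // use -1 for no read timeout
		WriteTimeout: 3 * time.Second, // defaults to ReadTimeout when zero
	}

	client := asynq.NewClient(redisOpt)
	defer client.Close()
}
```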
## [0.16.1] - 2021-03-20
### Fixed
- Replace `KEYS` command with `SCAN` as recommended by [redis doc](https://redis.io/commands/KEYS).
## [0.16.0] - 2021-03-10
### Added
- `Unregister` method is added to `Scheduler` to remove a registered entry.
## [0.15.0] - 2021-01-31
**IMPORTANT**: All `Inspector` related code is moved to subpackage "github.com/hibiken/asynq/inspeq"
### Changed
- `Inspector` related code is moved to subpackage "github.com/hibiken/asynq/inspeq".
- `RedisConnOpt` interface has changed slightly. If you have been passing `RedisClientOpt`, `RedisFailoverClientOpt`, or `RedisClusterClientOpt` as a pointer,
update your code to pass as a value.
- `ErrorMsg` field in `RetryTask` and `ArchivedTask` was renamed to `LastError`.
### Added
- `MaxRetry`, `Retried`, `LastError` fields were added to all task types returned from `Inspector`.
- `MemoryUsage` field was added to `QueueStats`.
- `DeleteAllPendingTasks` and `ArchiveAllPendingTasks` were added to `Inspector`.
- `DeleteTaskByKey` and `ArchiveTaskByKey` now support deleting/archiving `PendingTask`.
- asynq CLI now supports deleting/archiving pending tasks.
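A minimal sketch of the new import path, assuming the `inspeq.New` constructor and a `CurrentStats` method as described in the package docs (the queue name is hypothetical):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
	"github.com/hibiken/asynq/inspeq"
)

func main() {
	// Note: RedisConnOpt implementations are now passed by value, not as pointers.
	insp := inspeq.New(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer insp.Close()

	stats, err := insp.CurrentStats("default")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pending=%d memory_usage=%d bytes\n", stats.Pending, stats.MemoryUsage)
}
```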
## [0.14.1] - 2021-01-19
### Fixed
- `go.mod` file for CLI
## [0.14.0] - 2021-01-14
**IMPORTANT**: Please run the `asynq migrate` command to migrate from previous versions.
### Changed
- Renamed `DeadTask` to `ArchivedTask`.
- Renamed the operation `Kill` to `Archive` in `Inspector`.
- Print stack trace when Handler panics.
- Include a file name and a line number in the error message when recovering from a panic.
### Added
- `DefaultRetryDelayFunc` is now a public API, which can be used in the custom `RetryDelayFunc`.
- `SkipRetry` error is added to be used as a return value from `Handler` (a short sketch follows this list).
- `Servers` method is added to `Inspector`
- `CancelActiveTask` method is added to `Inspector`.
- `ListSchedulerEnqueueEvents` method is added to `Inspector`.
- `SchedulerEntries` method is added to `Inspector`.
- `DeleteQueue` method is added to `Inspector`.
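As a sketch of the `SkipRetry` item above (the task type and payload key are hypothetical), wrapping `asynq.SkipRetry` in the returned error tells the server to archive the task instead of retrying it:

```go
package tasks

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)

// HandleCleanupTask is a hypothetical handler.
func HandleCleanupTask(ctx context.Context, t *asynq.Task) error {
	id, err := t.Payload.GetInt("resource_id")
	if err != nil {
		// A malformed payload will never succeed, so skip the retry and archive the task.
		return fmt.Errorf("invalid payload: %v: %w", err, asynq.SkipRetry)
	}
	fmt.Printf("cleaning up resource %d\n", id)
	return nil
}
```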
## [0.13.1] - 2020-11-22
### Fixed
- Fixed processor to wait for the specified time duration before forcefully shutting down workers.
## [0.13.0] - 2020-10-13
### Added
- `Scheduler` type is added to enable periodic tasks. See the godoc for its APIs and [wiki](https://github.com/hibiken/asynq/wiki/Periodic-Tasks) for the getting-started guide.
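A rough sketch of registering a periodic entry (the cron spec and task type are hypothetical; see the godoc and wiki linked above for the exact API):

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "127.0.0.1:6379"},
		&asynq.SchedulerOpts{},
	)

	// Enqueue the (hypothetical) report task every night at 2am.
	entryID, err := scheduler.Register("0 2 * * *", asynq.NewTask("report:generate", nil))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("registered scheduler entry: %s", entryID)

	// Run blocks and enqueues registered tasks on schedule.
	if err := scheduler.Run(); err != nil {
		log.Fatal(err)
	}
}
```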
### Changed
- interface `Option` has changed. See the godoc for the new interface.
This change would have no impact as long as you are using exported functions (e.g. `MaxRetry`, `Queue`, etc)
to create `Option`s.
### Added
- `Payload.String() string` method is added
- `Payload.MarshalJSON() ([]byte, error)` method is added
## [0.12.0] - 2020-09-12
**IMPORTANT**: If you are upgrading from a previous version, please install the latest version of the CLI `go get -u github.com/hibiken/asynq/tools/asynq` and run `asynq migrate` command. No process should be writing to Redis while you run the migration command.
## The semantics of queue have changed
Previously, we called tasks that are ready to be processed _"Enqueued tasks"_, and other tasks that are scheduled to be processed in the future _"Scheduled tasks"_, etc.
We changed the semantics of _"Enqueue"_ slightly: all tasks that the client pushes to Redis are _Enqueued_ to a queue. Within a queue, tasks transition from one state to another.
Possible task states are:
- `Pending`: task is ready to be processed (previously called "Enqueued")
- `Active`: task is currently being processed (previously called "InProgress")
- `Scheduled`: task is scheduled to be processed in the future
- `Retry`: task failed to be processed and will be retried again in the future
- `Dead`: task has exhausted all of its retries and is stored for manual inspection purposes
**These semantic changes are reflected in the new `Inspector` API and CLI commands.**
---
### Changed
#### `Client`
Use `ProcessIn` or `ProcessAt` option to schedule a task instead of `EnqueueIn` or `EnqueueAt`.
| Previously | v0.12.0 |
| --------------------------- | ------------------------------------------ |
| `client.EnqueueAt(t, task)` | `client.Enqueue(task, asynq.ProcessAt(t))` |
| `client.EnqueueIn(d, task)` | `client.Enqueue(task, asynq.ProcessIn(d))` |
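In code, the table above translates to something like this sketch (the task type and payload are illustrative):

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	task := asynq.NewTask("email:reminder", map[string]interface{}{"user_id": 42})

	// Previously client.EnqueueIn(24*time.Hour, task):
	if _, err := client.Enqueue(task, asynq.ProcessIn(24*time.Hour)); err != nil {
		log.Fatal(err)
	}

	// Previously client.EnqueueAt(t, task):
	t := time.Date(2021, time.January, 1, 9, 0, 0, 0, time.UTC)
	if _, err := client.Enqueue(task, asynq.ProcessAt(t)); err != nil {
		log.Fatal(err)
	}
}
```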
#### `Inspector`
All Inspector methods are scoped to a queue, and the methods take `qname (string)` as the first argument.
`EnqueuedTask` is renamed to `PendingTask`, along with its corresponding methods.
`InProgressTask` is renamed to `ActiveTask`, along with its corresponding methods.
The verb "Enqueue" in method names is replaced by "Run" (e.g. `EnqueueAllScheduledTasks` --> `RunAllScheduledTasks`).
#### `CLI`
CLI commands are restructured to use subcommands. Commands are organized into a few management commands (run `asynq help <command> <subcommand>` to view details on any command):
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete kill run delete-all kill-all run-all]`
- `asynq server [ls]`
### Added
#### `RedisConnOpt`
- `RedisClusterClientOpt` is added to connect to Redis Cluster.
- `Username` field is added to all `RedisConnOpt` types in order to authenticate connection when Redis ACLs are used.
#### `Client`
- `ProcessIn(d time.Duration) Option` and `ProcessAt(t time.Time) Option` are added to replace `EnqueueIn` and `EnqueueAt` functionality.
#### `Inspector`
- `Queues() ([]string, error)` method is added to get all queue names.
- `ClusterKeySlot(qname string) (int64, error)` method is added to get queue's hash slot in Redis cluster.
- `ClusterNodes(qname string) ([]ClusterNode, error)` method is added to get a list of Redis cluster nodes for the given queue.
- `Close() error` method is added to close connection with redis.
### `Handler`
- `GetQueueName(ctx context.Context) (string, bool)` helper is added to extract queue name from a context.
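A sketch of using the helper inside a handler (the task type is hypothetical):

```go
package tasks

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)

func HandleAuditTask(ctx context.Context, t *asynq.Task) error {
	// GetQueueName reports which queue the task was pulled from.
	qname, ok := asynq.GetQueueName(ctx)
	if !ok {
		return fmt.Errorf("queue name not found in context")
	}
	fmt.Printf("processing %q from queue %q\n", t.Type, qname)
	return nil
}
```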
## [0.11.0] - 2020-07-28
### Added
- `Inspector` type was added to monitor and mutate state of queues and tasks.
- `HealthCheckFunc` and `HealthCheckInterval` fields were added to `Config` to allow user to specify a callback
function to check for broker connection.
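A sketch of wiring up the health check callback, assuming `HealthCheckFunc` receives the error from the most recent broker connection check (the logging is illustrative):

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, asynq.Config{
		Concurrency: 10,
		// Called periodically with the result of the broker connection check;
		// err is non-nil when the connection to Redis is unhealthy.
		HealthCheckFunc: func(err error) {
			if err != nil {
				log.Printf("asynq: broker health check failed: %v", err)
			}
		},
	})
	_ = srv // call srv.Run(mux) with a handler to start processing
}
```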
## [0.10.0] - 2020-07-06
### Changed
- All tasks now require a timeout or deadline. By default, the timeout is set to 30 mins.
- Tasks that exceed their deadline are automatically retried.
- Encoding schema for task message has changed. Please install the latest CLI and run `migrate` command if
you have tasks enqueued with the previous version of asynq.
- API of `(*Client).Enqueue`, `(*Client).EnqueueIn`, and `(*Client).EnqueueAt` has changed to return a `*Result`.
- API of `ErrorHandler` has changed. It now takes context as the first argument, and `retried` and `maxRetry` were removed from the argument list.
Use `GetRetryCount` and/or `GetMaxRetry` to get the count values.
## [0.9.4] - 2020-06-13
### Fixed
- Fixes issue of same tasks processed by more than one worker (https://github.com/hibiken/asynq/issues/90).
## [0.9.3] - 2020-06-12
### Fixed
- Fixes the JSON number overflow issue (https://github.com/hibiken/asynq/issues/166).
## [0.9.2] - 2020-06-08
### Added
- The `pause` and `unpause` commands were added to the CLI. See README for the CLI for details.
## [0.9.1] - 2020-05-29
### Added
- `GetTaskID`, `GetRetryCount`, and `GetMaxRetry` functions were added to extract task metadata from context.
## [0.9.0] - 2020-05-16
### Changed
- `Logger` interface has changed. Please see the godoc for the new interface.
### Added
- `LogLevel` type is added. Server's log level can be specified through `LogLevel` field in `Config`.
## [0.8.3] - 2020-05-08
### Added
- `Close` method is added to `Client`.
## [0.8.2] - 2020-05-03
### Fixed
- [Fixed cancelfunc leak](https://github.com/hibiken/asynq/pull/145)
## [0.8.1] - 2020-04-27
### Added
- `ParseRedisURI` helper function is added to create a `RedisConnOpt` from a URI string.
- `SetDefaultOptions` method is added to `Client`.
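A rough sketch combining the two additions (the URI, task type, and option values are hypothetical):

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	// Build a RedisConnOpt from a URI string instead of filling in struct fields.
	redisOpt, err := asynq.ParseRedisURI("redis://:mypassword@localhost:6379/0")
	if err != nil {
		log.Fatalf("could not parse redis uri: %v", err)
	}

	client := asynq.NewClient(redisOpt)

	// Register default options for a task type; they apply whenever a task of
	// this type is enqueued without overriding options.
	client.SetDefaultOptions("email:welcome", asynq.MaxRetry(10), asynq.Timeout(time.Minute))

	task := asynq.NewTask("email:welcome", map[string]interface{}{"user_id": 42})
	if err := client.Enqueue(task); err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
}
```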
## [0.8.0] - 2020-04-19
### Changed
- `Background` type is renamed to `Server`.
- To upgrade from the previous version, update `NewBackground` to `NewServer` and pass `Config` by value.
- CLI is renamed to `asynq`.
- To upgrade the CLI to the latest version, run `go get -u github.com/hibiken/asynq/tools/asynq`
- The `ps` command in CLI is renamed to `servers`
- `Concurrency` defaults to the number of CPUs when unset or set to a negative value.
### Added
- `ShutdownTimeout` field is added to `Config` to specify the timeout duration used during graceful shutdown.
- New `Server` type exposes `Start`, `Stop`, and `Quiet` as well as `Run`.
## [0.7.1] - 2020-04-05
### Fixed
- Fixed signal handling for windows.
## [0.7.0] - 2020-03-22
### Changed
- Support Go v1.13+, dropped support for go v1.12
### Added
- `Unique` option was added to allow client to enqueue a task only if it's unique within a certain time period.
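For illustration, using the Enqueue signature of this release (the TTL is arbitrary), enqueueing an identical task again within the uniqueness window returns an error instead of creating a duplicate:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})

	task := asynq.NewTask("email:welcome", map[string]interface{}{"user_id": 42})

	// The task is treated as unique for the next hour; a second Enqueue of an
	// identical task within that window fails instead of creating a duplicate.
	if err := client.Enqueue(task, asynq.Unique(time.Hour)); err != nil {
		log.Printf("enqueue failed (possibly a duplicate): %v", err)
	}
}
```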
## [0.6.2] - 2020-03-15
### Added
- `Use` method was added to `ServeMux` to apply middlewares to all handlers.
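A sketch of attaching middleware with `Use` (the logging middleware is illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// loggingMiddleware wraps any asynq.Handler and logs each task it processes.
func loggingMiddleware(next asynq.Handler) asynq.Handler {
	return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
		log.Printf("start processing %q", t.Type)
		err := next.ProcessTask(ctx, t)
		log.Printf("finished processing %q (err=%v)", t.Type, err)
		return err
	})
}

func main() {
	mux := asynq.NewServeMux()
	mux.Use(loggingMiddleware) // applies to every handler registered on mux
	mux.HandleFunc("email:welcome", func(ctx context.Context, t *asynq.Task) error {
		// ... handle the task ...
		return nil
	})
	_ = mux // register mux with a server elsewhere
}
```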
## [0.6.1] - 2020-03-12
### Added

CONTRIBUTING.md

@@ -45,6 +45,7 @@ Thank you! We'll try to respond as quickly as possible.
6. Create a new pull request
Please try to keep your pull request focused in scope and avoid including unrelated commits.
Please run tests against redis cluster locally with `--redis_cluster` flag to ensure that code works for Redis cluster. TODO: Run tests using Redis cluster on CI.
After you have submitted your pull request, we'll try to get back to you as soon as possible. We may suggest some changes or improvements.

README.md (diff: 282 lines)

@@ -1,18 +1,51 @@
# Asynq

-[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)
![Build Status](https://github.com/hibiken/asynq/workflows/build/badge.svg)
[![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)
[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)

-Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
-It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
-**Important Note**: Current major version is zero (v0.x.x) to accomodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before v1.0.0 release.
-![Task Queue Diagram](/docs/assets/task-queue.png)

## Overview

Asynq is a Go library for queueing tasks and processing them asynchronously with workers. It's backed by Redis and is designed to be scalable yet easy to get started.

High-level overview of how Asynq works:
- Client puts task on a queue
- Server pulls task off queues and starts a worker goroutine for each task
- Tasks are processed concurrently by multiple workers
Task queues are used as a mechanism to distribute work across multiple machines.
A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
![Task Queue Diagram](/docs/assets/overview.png)
## Stability and Compatibility
**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users (feedback on APIs is appreciated!). The public API could change without a major version update before the v1.0.0 release.
**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
## Features
- Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
- Scheduling of tasks
- Durability since tasks are written to Redis
- [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
- Automatic recovery of tasks in the event of a worker crash
- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
- Low latency to add a task since writes are fast in Redis
- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
- [Periodic Tasks](https://github.com/hibiken/asynq/wiki/Periodic-Tasks)
- [Support Redis Cluster](https://github.com/hibiken/asynq/wiki/Redis-Cluster) for automatic sharding and high availability
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for high availability
- [Web UI](#web-ui) to inspect and remote-control queues and tasks
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Quickstart ## Quickstart
@@ -22,63 +55,176 @@ First, make sure you are running a Redis server locally.
$ redis-server
```

-To create and schedule tasks, use `Client` and provide a task and when to enqueue the task.
-A task will be processed by a background worker as soon as the task gets enqueued.
-Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
Next, write a package that encapsulates task creation and task handling.
-```go
-func main() {
-	r := &asynq.RedisClientOpt{
-		Addr: "127.0.0.1:6379",
-	}
-	client := asynq.NewClient(r)
-
-	// Create a task with task type and payload.
-	t1 := asynq.NewTask("email:signup", map[string]interface{}{"user_id": 42})
-	t2 := asynq.NewTask("email:reminder", map[string]interface{}{"user_id": 42})
-
-	// Enqueue immediately.
-	err := client.Enqueue(t1)
-
-	// Enqueue 24 hrs later.
-	err = client.EnqueueIn(24*time.Hour, t2)
-
-	// Enqueue at specific time.
-	err = client.EnqueueAt(time.Date(2020, time.March, 6, 10, 0, 0, 0, time.UTC), t2)
-
-	// Pass vararg options to specify processing behavior for the given task.
-	//
-	// MaxRetry specifies the max number of retry if the task fails (Default is 25).
-	// Queue specifies which queue to enqueue this task to (Default is "default" queue).
-	// Timeout specifies the the task timeout (Default is no timeout).
-	err = client.Enqueue(t1, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute))
-}
-```
-
-To start the background workers, use `Background` and provide your `Handler` to process the tasks.
-
-`Handler` is an interface with one method `ProcessTask` with the following signature.
-
-```go
-// ProcessTask should return nil if the processing of a task is successful.
-//
-// If ProcessTask return a non-nil error or panics, the task will be retried after delay.
-type Handler interface {
-	ProcessTask(context.Context, *asynq.Task) error
-}
-```

```go
package tasks

import (
	"fmt"

	"github.com/hibiken/asynq"
)

// A list of task types.
const (
	TypeEmailDelivery = "email:deliver"
	TypeImageResize   = "image:resize"
)

//----------------------------------------------
// Write a function NewXXXTask to create a task.
// A task consists of a type and a payload.
//----------------------------------------------

func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
	payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
	return asynq.NewTask(TypeEmailDelivery, payload)
}

func NewImageResizeTask(src string) *asynq.Task {
	payload := map[string]interface{}{"src": src}
	return asynq.NewTask(TypeImageResize, payload)
}

//---------------------------------------------------------------
// Write a function HandleXXXTask to handle the input task.
// Note that it satisfies the asynq.HandlerFunc interface.
//
// Handler doesn't need to be a function. You can define a type
// that satisfies asynq.Handler interface. See examples below.
//---------------------------------------------------------------

func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
	userID, err := t.Payload.GetInt("user_id")
	if err != nil {
		return err
	}
	tmplID, err := t.Payload.GetString("template_id")
	if err != nil {
		return err
	}
	fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
	// Email delivery code ...
	return nil
}

// ImageProcessor implements asynq.Handler interface.
type ImageProcessor struct {
	// ... fields for struct
}

func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
	src, err := t.Payload.GetString("src")
	if err != nil {
		return err
	}
	fmt.Printf("Resize image: src = %s\n", src)
	// Image resizing code ...
	return nil
}

func NewImageProcessor() *ImageProcessor {
	// ... return an instance
}
```
-You can optionally use `ServeMux` to create a handler, just as you would with `"net/http"` Handler.
In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hibiken/asynq"
	"your/app/package/tasks"
)

const redisAddr = "127.0.0.1:6379"

func main() {
	r := asynq.RedisClientOpt{Addr: redisAddr}
	c := asynq.NewClient(r)
	defer c.Close()

	// ------------------------------------------------------
	// Example 1: Enqueue task to be processed immediately.
	//            Use (*Client).Enqueue method.
	// ------------------------------------------------------

	t := tasks.NewEmailDeliveryTask(42, "some:template:id")
	res, err := c.Enqueue(t)
	if err != nil {
		log.Fatal("could not enqueue task: %v", err)
	}
	fmt.Printf("Enqueued Result: %+v\n", res)

	// ------------------------------------------------------------
	// Example 2: Schedule task to be processed in the future.
	//            Use ProcessIn or ProcessAt option.
	// ------------------------------------------------------------

	t = tasks.NewEmailDeliveryTask(42, "other:template:id")
	res, err = c.Enqueue(t, asynq.ProcessIn(24*time.Hour))
	if err != nil {
		log.Fatal("could not schedule task: %v", err)
	}
	fmt.Printf("Enqueued Result: %+v\n", res)

	// ----------------------------------------------------------------------------
	// Example 3: Set other options to tune task processing behavior.
	//            Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
	// ----------------------------------------------------------------------------

	c.SetDefaultOptions(tasks.TypeImageResize, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))

	t = tasks.NewImageResizeTask("some/blobstore/path")
	res, err = c.Enqueue(t)
	if err != nil {
		log.Fatal("could not enqueue task: %v", err)
	}
	fmt.Printf("Enqueued Result: %+v\n", res)

	// ---------------------------------------------------------------------------
	// Example 4: Pass options to tune task processing behavior at enqueue time.
	//            Options passed at enqueue time override default ones, if any.
	// ---------------------------------------------------------------------------

	t = tasks.NewImageResizeTask("some/blobstore/path")
	res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
	if err != nil {
		log.Fatal("could not enqueue task: %v", err)
	}
	fmt.Printf("Enqueued Result: %+v\n", res)
}
```

Next, start a worker server to process these tasks in the background.
To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.

You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
	"your/app/package/tasks"
)

const redisAddr = "127.0.0.1:6379"

-func main() {
-	r := &asynq.RedisClientOpt{
-		Addr: "127.0.0.1:6379",
-	}
-	bg := asynq.NewBackground(r, &asynq.Config{
func main() {
	r := asynq.RedisClientOpt{Addr: redisAddr}
	srv := asynq.NewServer(r, asynq.Config{
		// Specify how many concurrent workers to use
		Concurrency: 10,
		// Optionally specify multiple queues with different priority.
@@ -92,22 +238,13 @@ func main() {
	// mux maps a type to a handler
	mux := asynq.NewServeMux()
-	mux.HandleFunc("email:signup", signupEmailHandler)
-	mux.HandleFunc("email:reminder", reminderEmailHandler)
	mux.HandleFunc(tasks.TypeEmailDelivery, tasks.HandleEmailDeliveryTask)
	mux.Handle(tasks.TypeImageResize, tasks.NewImageProcessor())
	// ...register other handlers...

-	bg.Run(mux)
-}
-
-// function with the same signature as the ProcessTask method for the Handler interface.
-func signupEmailHandler(ctx context.Context, t *asynq.Task) error {
-	id, err := t.Payload.GetInt("user_id")
-	if err != nil {
-		return err
-	}
-	fmt.Printf("Send welcome email to user %d\n", id)
-	// ...your email sending logic...
-	return nil
-}
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```
@@ -115,6 +252,19 @@ For a more detailed walk-through of the library, see our [Getting Started Guide]
To Learn more about `asynq` features and APIs, see our [Wiki](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
## Web UI
[Asynqmon](https://github.com/hibiken/asynqmon) is a web based tool for monitoring and administrating Asynq queues and tasks.
Please see the tool's [README](https://github.com/hibiken/asynqmon) for details.
Here's a few screenshots of the web UI.
**Queues view**
![Web UI QueuesView](/docs/assets/asynqmon-queues-view.png)
**Tasks view**
![Web UI TasksView](/docs/assets/asynqmon-task-view.png)
## Command Line Tool

Asynq ships with a command line tool to inspect the state of queues and tasks.
@@ -123,7 +273,7 @@ Here's an example of running the `stats` command.
![Gif](/docs/assets/demo.gif)

-For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).

## Installation
@@ -136,15 +286,15 @@ go get -u github.com/hibiken/asynq
To install the CLI tool, run the following command:

```sh
-go get -u github.com/hibiken/asynq/tools/asynqmon
go get -u github.com/hibiken/asynq/tools/asynq
```

## Requirements

| Dependency                 | Version |
| -------------------------- | ------- |
-| [Redis](https://redis.io/) | v2.8+   |
-| [Go](https://golang.org/)  | v1.12+  |
| [Redis](https://redis.io/) | v3.0+   |
| [Go](https://golang.org/)  | v1.13+  |

## Contributing
@@ -155,7 +305,7 @@ Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
-- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra

## License

asynq.go (diff: 267 lines)

@@ -7,6 +7,10 @@ package asynq
import (
"crypto/tls"
"fmt"
"net/url"
"strconv"
"strings"
"time"
"github.com/go-redis/redis/v7"
)
@@ -34,8 +38,14 @@ func NewTask(typename string, payload map[string]interface{}) *Task {
//
// RedisConnOpt represents a sum of following types:
//
-// RedisClientOpt | *RedisClientOpt | RedisFailoverClientOpt | *RedisFailoverClientOpt
-type RedisConnOpt interface{}
// - RedisClientOpt
// - RedisFailoverClientOpt
// - RedisClusterClientOpt
type RedisConnOpt interface {
// MakeRedisClient returns a new redis client instance.
// Return value is intentionally opaque to hide the implementation detail of redis client.
MakeRedisClient() interface{}
}
// RedisClientOpt is used to create a redis client that connects
// to a redis server directly.
@@ -47,13 +57,38 @@ type RedisClientOpt struct {
// Redis server address in "host:port" format.
Addr string
-// Redis server password.
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimout.
WriteTimeout time.Duration
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
@@ -63,6 +98,21 @@ type RedisClientOpt struct {
TLSConfig *tls.Config
}
func (opt RedisClientOpt) MakeRedisClient() interface{} {
return redis.NewClient(&redis.Options{
Network: opt.Network,
Addr: opt.Addr,
Username: opt.Username,
Password: opt.Password,
DB: opt.DB,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
PoolSize: opt.PoolSize,
TLSConfig: opt.TLSConfig,
})
}
// RedisFailoverClientOpt is used to creates a redis client that talks
// to redis sentinels for service discovery and has an automatic failover
// capability.
@@ -78,13 +128,38 @@ type RedisFailoverClientOpt struct {
// Redis sentinel password.
SentinelPassword string
-// Redis server password.
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
@@ -94,50 +169,148 @@ type RedisFailoverClientOpt struct {
TLSConfig *tls.Config
}
func (opt RedisFailoverClientOpt) MakeRedisClient() interface{} {
return redis.NewFailoverClient(&redis.FailoverOptions{
MasterName: opt.MasterName,
SentinelAddrs: opt.SentinelAddrs,
SentinelPassword: opt.SentinelPassword,
Username: opt.Username,
Password: opt.Password,
DB: opt.DB,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
PoolSize: opt.PoolSize,
TLSConfig: opt.TLSConfig,
})
}
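Similarly, a hypothetical sentinel setup passed to a server; the master name, sentinel addresses, and concurrency value are placeholders.

package main

import (
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    opt := asynq.RedisFailoverClientOpt{
        MasterName:    "mymaster",
        SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
        ReadTimeout:   3 * time.Second,
    }
    srv := asynq.NewServer(opt, asynq.Config{Concurrency: 10})
    _ = srv // started elsewhere with srv.Start(handler), as shown in the benchmarks below
}
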
// RedisClusterClientOpt is used to create a redis client that connects to
// a redis cluster.
type RedisClusterClientOpt struct {
// A seed list of host:port addresses of cluster nodes.
Addrs []string
// The maximum number of retries before giving up.
// Command is retried on network errors and MOVED/ASK redirects.
// Default is 8 retries.
MaxRedirects int
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
func (opt RedisClusterClientOpt) MakeRedisClient() interface{} {
return redis.NewClusterClient(&redis.ClusterOptions{
Addrs: opt.Addrs,
MaxRedirects: opt.MaxRedirects,
Username: opt.Username,
Password: opt.Password,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
TLSConfig: opt.TLSConfig,
})
}
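And a sketch of pointing a client at a cluster; the seed addresses are placeholders.

package main

import "github.com/hibiken/asynq"

func main() {
    opt := asynq.RedisClusterClientOpt{
        Addrs:        []string{"localhost:7000", "localhost:7001", "localhost:7002"},
        MaxRedirects: 8,
    }
    client := asynq.NewClient(opt)
    defer client.Close()
}
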
// createRedisClient returns a redis client given a redis connection configuration.
//
// Passing an unexpected type as a RedisConnOpt argument will cause panic.
func createRedisClient(r RedisConnOpt) *redis.Client {
	switch r := r.(type) {
	case *RedisClientOpt:
		return redis.NewClient(&redis.Options{
			Network:   r.Network,
			Addr:      r.Addr,
			Password:  r.Password,
			DB:        r.DB,
			PoolSize:  r.PoolSize,
			TLSConfig: r.TLSConfig,
		})
	case RedisClientOpt:
		return redis.NewClient(&redis.Options{
			Network:   r.Network,
			Addr:      r.Addr,
			Password:  r.Password,
			DB:        r.DB,
			PoolSize:  r.PoolSize,
			TLSConfig: r.TLSConfig,
		})
	case *RedisFailoverClientOpt:
		return redis.NewFailoverClient(&redis.FailoverOptions{
			MasterName:       r.MasterName,
			SentinelAddrs:    r.SentinelAddrs,
			SentinelPassword: r.SentinelPassword,
			Password:         r.Password,
			DB:               r.DB,
			PoolSize:         r.PoolSize,
			TLSConfig:        r.TLSConfig,
		})
	case RedisFailoverClientOpt:
		return redis.NewFailoverClient(&redis.FailoverOptions{
			MasterName:       r.MasterName,
			SentinelAddrs:    r.SentinelAddrs,
			SentinelPassword: r.SentinelPassword,
			Password:         r.Password,
			DB:               r.DB,
			PoolSize:         r.PoolSize,
			TLSConfig:        r.TLSConfig,
		})
	default:
		panic(fmt.Sprintf("asynq: unexpected type %T for RedisConnOpt", r))
	}
}
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed.
//
// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
// Supported formats are:
//     redis://[:password@]host[:port][/dbnumber]
//     redis-socket://[:password@]path[?db=dbnumber]
//     redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
	u, err := url.Parse(uri)
	if err != nil {
		return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
	}
	switch u.Scheme {
	case "redis":
		return parseRedisURI(u)
	case "redis-socket":
		return parseRedisSocketURI(u)
	case "redis-sentinel":
		return parseRedisSentinelURI(u)
	default:
		return nil, fmt.Errorf("asynq: unsupported uri scheme: %q", u.Scheme)
	}
}
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
var db int
var err error
if len(u.Path) > 0 {
xs := strings.Split(strings.Trim(u.Path, "/"), "/")
db, err = strconv.Atoi(xs[0])
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: database number should be the first segment of the path")
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Addr: u.Host, DB: db, Password: password}, nil
}
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
const errPrefix = "asynq: could not parse redis socket uri"
if len(u.Path) == 0 {
return nil, fmt.Errorf("%s: path does not exist", errPrefix)
}
q := u.Query()
var db int
var err error
if n := q.Get("db"); n != "" {
db, err = strconv.Atoi(n)
if err != nil {
return nil, fmt.Errorf("%s: query param `db` should be a number", errPrefix)
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Network: "unix", Addr: u.Path, DB: db, Password: password}, nil
}
func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
addrs := strings.Split(u.Host, ",")
master := u.Query().Get("master")
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, Password: password}, nil
}
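For illustration, a sketch of using ParseRedisURI to build a connection option from a URI string (for example one read from an environment variable); the URI below is a placeholder.

package main

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    connOpt, err := asynq.ParseRedisURI("redis://:mypassword@localhost:6379/2")
    if err != nil {
        log.Fatalf("invalid redis uri: %v", err)
    }
    client := asynq.NewClient(connOpt)
    defer client.Close()
}
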


@@ -5,8 +5,9 @@
package asynq package asynq
import ( import (
"os" "flag"
"sort" "sort"
"strings"
"testing" "testing"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
@@ -15,28 +16,72 @@ import (
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
) )
//============================================================================
// This file defines helper functions and variables used in other test files.
//============================================================================
// redis used for package testing. // variables used for package testing.
const ( var (
redisAddr = "localhost:6379" redisAddr string
redisDB = 14 redisDB int
useRedisCluster bool
redisClusterAddrs string // comma-separated list of host:port
testLogLevel = FatalLevel
) )
var testLogger = log.NewLogger(os.Stderr) var testLogger *log.Logger
func setup(tb testing.TB) *redis.Client { func init() {
flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing")
flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing")
flag.BoolVar(&useRedisCluster, "redis_cluster", false, "use redis cluster as a broker in testing")
flag.StringVar(&redisClusterAddrs, "redis_cluster_addrs", "localhost:7000,localhost:7001,localhost:7002", "comma separated list of redis server addresses")
flag.Var(&testLogLevel, "loglevel", "log level to use in testing")
testLogger = log.NewLogger(nil)
testLogger.SetLevel(toInternalLogLevel(testLogLevel))
}
func setup(tb testing.TB) (r redis.UniversalClient) {
tb.Helper() tb.Helper()
r := redis.NewClient(&redis.Options{ if useRedisCluster {
addrs := strings.Split(redisClusterAddrs, ",")
if len(addrs) == 0 {
tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
r = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
})
} else {
r = redis.NewClient(&redis.Options{
Addr: redisAddr, Addr: redisAddr,
DB: redisDB, DB: redisDB,
}) })
}
// Start each test with a clean slate. // Start each test with a clean slate.
h.FlushDB(tb, r) h.FlushDB(tb, r)
return r return r
} }
func getRedisConnOpt(tb testing.TB) RedisConnOpt {
tb.Helper()
if useRedisCluster {
addrs := strings.Split(redisClusterAddrs, ",")
if len(addrs) == 0 {
tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
return RedisClusterClientOpt{
Addrs: addrs,
}
}
return RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
}
var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task { var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
out := append([]*Task(nil), in...) // Copy input to avoid mutating it out := append([]*Task(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
@@ -44,3 +89,106 @@ var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
}) })
return out return out
}) })
func TestParseRedisURI(t *testing.T) {
tests := []struct {
uri string
want RedisConnOpt
}{
{
"redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"},
},
{
"redis://localhost:6379/3",
RedisClientOpt{Addr: "localhost:6379", DB: 3},
},
{
"redis://:mypassword@localhost:6379",
RedisClientOpt{Addr: "localhost:6379", Password: "mypassword"},
},
{
"redis://:mypassword@127.0.0.1:6379/11",
RedisClientOpt{Addr: "127.0.0.1:6379", Password: "mypassword", DB: 11},
},
{
"redis-socket:///var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock"},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword"},
},
{
"redis-socket:///var/run/redis/redis.sock?db=7",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", DB: 7},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock?db=12",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword", DB: 12},
},
{
"redis-sentinel://localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
},
},
{
"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
Password: "mypassword",
},
},
}
for _, tc := range tests {
got, err := ParseRedisURI(tc.uri)
if err != nil {
t.Errorf("ParseRedisURI(%q) returned an error: %v", tc.uri, err)
continue
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
}
}
}
func TestParseRedisURIErrors(t *testing.T) {
tests := []struct {
desc string
uri string
}{
{
"unsupported scheme",
"rdb://localhost:6379",
},
{
"missing scheme",
"localhost:6379",
},
{
"multiple db numbers",
"redis://localhost:6379/1,2,3",
},
{
"missing path for socket connection",
"redis-socket://?db=one",
},
{
"non integer for db numbers for socket",
"redis-socket:///some/path/to/redis?db=one",
},
}
for _, tc := range tests {
_, err := ParseRedisURI(tc.uri)
if err == nil {
t.Errorf("%s: ParseRedisURI(%q) succeeded for malformed input, want error",
tc.desc, tc.uri)
}
}
}


@@ -1,313 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"math"
"math/rand"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
// Background is responsible for managing the background-task processing.
//
// Background manages task queues to process tasks.
// If the processing of a task is unsuccessful, background will
// schedule it for a retry until either the task gets processed successfully
// or it exhausts its max retry count.
//
// Once a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Background struct {
mu sync.Mutex
running bool
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
logger Logger
rdb *rdb.RDB
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
}
// Config specifies the background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewBackground will overwrite the value to one.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the background will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority is processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the background instance.
//
// If unset, default logger is used.
Logger Logger
}
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as a ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is a ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
}
// Logger implements logging with various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(format string, args ...interface{})
// Info logs a message at Info level.
Info(format string, args ...interface{})
// Warn logs a message at Warning level.
Warn(format string, args ...interface{})
// Error logs a message at Error level.
Error(format string, args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(format string, args ...interface{})
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
// NewBackground returns a new Background given a redis connection option
// and background processing configuration.
func NewBackground(r RedisConnOpt, cfg *Config) *Background {
n := cfg.Concurrency
if n < 1 {
n = 1
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
logger := cfg.Logger
if logger == nil {
logger = log.NewLogger(os.Stderr)
}
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
pid := os.Getpid()
rdb := rdb.NewRDB(createRedisClient(r))
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest)
cancels := base.NewCancelations()
syncer := newSyncer(logger, syncCh, 5*time.Second)
heartbeater := newHeartbeater(logger, rdb, ps, 5*time.Second)
scheduler := newScheduler(logger, rdb, 5*time.Second, queues)
processor := newProcessor(logger, rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(logger, rdb, cancels)
return &Background{
logger: logger,
rdb: rdb,
ps: ps,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all pending workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
type prefixLogger interface {
SetPrefix(prefix string)
}
// If logger supports setting prefix, then set prefix for log output.
if l, ok := bg.logger.(prefixLogger); ok {
l.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
}
bg.logger.Info("Starting processing")
bg.start(handler)
defer bg.stop()
bg.logger.Info("Send signal TSTP to stop processing new tasks")
bg.logger.Info("Send signal TERM or INT to terminate the process")
// Wait for a signal to terminate.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
for {
sig := <-sigs
if sig == syscall.SIGTSTP {
bg.processor.stop()
bg.ps.SetStatus(base.StatusStopped)
continue
}
break
}
fmt.Println()
bg.logger.Info("Starting graceful shutdown")
}
// starts the background-task processing.
func (bg *Background) start(handler Handler) {
bg.mu.Lock()
defer bg.mu.Unlock()
if bg.running {
return
}
bg.running = true
bg.processor.handler = handler
bg.heartbeater.start(&bg.wg)
bg.subscriber.start(&bg.wg)
bg.syncer.start(&bg.wg)
bg.scheduler.start(&bg.wg)
bg.processor.start(&bg.wg)
}
// stops the background-task processing.
func (bg *Background) stop() {
bg.mu.Lock()
defer bg.mu.Unlock()
if !bg.running {
return
}
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
//
// processor -> syncer (via syncCh)
bg.scheduler.terminate()
bg.processor.terminate()
bg.syncer.terminate()
bg.subscriber.terminate()
bg.heartbeater.terminate()
bg.wg.Wait()
bg.rdb.Close()
bg.running = false
bg.logger.Info("Bye!")
}


@@ -1,128 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/goleak"
)
func TestBackground(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
client := NewClient(r)
bg := NewBackground(r, &Config{
Concurrency: 10,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}


@@ -7,7 +7,6 @@ package asynq
import ( import (
"context" "context"
"fmt" "fmt"
"math/rand"
"sync" "sync"
"testing" "testing"
"time" "time"
@@ -19,24 +18,23 @@ func BenchmarkEndToEndSimple(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(count) wg.Add(count)
@@ -46,11 +44,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -60,38 +58,44 @@ func BenchmarkEndToEnd(b *testing.B) {
const count = 100000 const count = 100000
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
rand.Seed(time.Now().UnixNano())
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil { if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(count * 2) wg.Add(count * 2)
handler := func(ctx context.Context, t *Task) error { handler := func(ctx context.Context, t *Task) error {
// randomly fail 1% of tasks n, err := t.Payload.GetInt("data")
if rand.Intn(100) == 1 { if err != nil {
b.Logf("internal error: %v", err)
}
retried, ok := GetRetryCount(ctx)
if !ok {
b.Logf("internal error: %v", err)
}
// Fail 1% of tasks for the first attempt.
if retried == 0 && n%100 == 0 {
return fmt.Errorf(":(") return fmt.Errorf(":(")
} }
wg.Done() wg.Done()
@@ -99,11 +103,11 @@ func BenchmarkEndToEnd(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -119,38 +123,37 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
Queues: map[string]int{ Queues: map[string]int{
"high": 6, "high": 6,
"default": 3, "default": 3,
"low": 1, "low": 1,
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < highCount; i++ { for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil { if _, err := client.Enqueue(t, Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < defaultCount; i++ { for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < lowCount; i++ { for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil { if _, err := client.Enqueue(t, Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(highCount + defaultCount + lowCount) wg.Add(highCount + defaultCount + lowCount)
@@ -160,11 +163,68 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown
}
}
// E2E benchmark to check client enqueue operation performs correctly,
// while server is busy processing tasks.
func BenchmarkClientWhileServerRunning(b *testing.B) {
const count = 10000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := getRedisConnOpt(b)
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
},
LogLevel: testLogLevel,
})
// Enqueue 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
// Schedule 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
handler := func(ctx context.Context, t *Task) error {
return nil
}
srv.Start(HandlerFunc(handler))
b.StartTimer() // end setup
b.Log("Starting enqueueing")
enqueued := 0
for enqueued < 100000 {
t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
if _, err := client.Enqueue(t); err != nil {
b.Logf("could not enqueue task %d: %v", enqueued, err)
continue
}
enqueued++
}
b.Logf("Finished enqueueing %d tasks", enqueued)
b.StopTimer() // begin teardown
srv.Stop()
client.Close()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }

client.go

@@ -5,12 +5,16 @@
package asynq package asynq
import ( import (
"errors"
"fmt"
"strings" "strings"
"sync"
"time" "time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
) )
// A Client is responsible for scheduling tasks. // A Client is responsible for scheduling tasks.
@@ -20,17 +24,47 @@ import (
// //
// Clients are safe for concurrent use by multiple goroutines. // Clients are safe for concurrent use by multiple goroutines.
type Client struct { type Client struct {
mu sync.Mutex
opts map[string][]Option
rdb *rdb.RDB rdb *rdb.RDB
} }
// NewClient returns a new Client instance given a redis connection option.
func NewClient(r RedisConnOpt) *Client { func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r)) c, ok := r.MakeRedisClient().(redis.UniversalClient)
return &Client{rdb} if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
}
rdb := rdb.NewRDB(c)
return &Client{
opts: make(map[string][]Option),
rdb: rdb,
}
} }
type OptionType int
const (
MaxRetryOpt OptionType = iota
QueueOpt
TimeoutOpt
DeadlineOpt
UniqueOpt
ProcessAtOpt
ProcessInOpt
)
// Option specifies the task processing behavior. // Option specifies the task processing behavior.
type Option interface{} type Option interface {
// String returns a string representation of the option.
String() string
// Type describes the type of the option.
Type() OptionType
// Value returns a value used to create this option.
Value() interface{}
}
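As an illustrative sketch, the richer Option interface makes it possible to log or inspect options before enqueueing; the option values below are arbitrary.

package main

import (
    "fmt"
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    opts := []asynq.Option{
        asynq.Queue("critical"),
        asynq.MaxRetry(5),
        asynq.Timeout(30 * time.Second),
    }
    for _, o := range opts {
        // String() gives a readable form; Type() and Value() allow programmatic checks.
        fmt.Printf("type=%v value=%v -> %s\n", o.Type(), o.Value(), o)
    }
}
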
// Internal option representations. // Internal option representations.
type ( type (
@@ -38,6 +72,9 @@ type (
queueOption string queueOption string
timeoutOption time.Duration timeoutOption time.Duration
deadlineOption time.Time deadlineOption time.Time
uniqueOption time.Duration
processAtOption time.Time
processInOption time.Duration
) )
// MaxRetry returns an option to specify the max number of times // MaxRetry returns an option to specify the max number of times
@@ -51,104 +88,292 @@ func MaxRetry(n int) Option {
return retryOption(n) return retryOption(n)
} }
func (n retryOption) String() string { return fmt.Sprintf("MaxRetry(%d)", int(n)) }
func (n retryOption) Type() OptionType { return MaxRetryOpt }
func (n retryOption) Value() interface{} { return int(n) }
// Queue returns an option to specify the queue to enqueue the task into. // Queue returns an option to specify the queue to enqueue the task into.
// //
// Queue name is case-insensitive and the lowercased version is used. // Queue name is case-insensitive and the lowercased version is used.
func Queue(name string) Option { func Queue(qname string) Option {
return queueOption(strings.ToLower(name)) return queueOption(strings.ToLower(qname))
} }
func (qname queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(qname)) }
func (qname queueOption) Type() OptionType { return QueueOpt }
func (qname queueOption) Value() interface{} { return string(qname) }
// Timeout returns an option to specify how long a task may run. // Timeout returns an option to specify how long a task may run.
// If the timeout elapses before the Handler returns, then the task
// will be retried.
// //
// Zero duration means no limit. // Zero duration means no limit.
//
// If there's a conflicting Deadline option, whichever comes earliest
// will be used.
func Timeout(d time.Duration) Option { func Timeout(d time.Duration) Option {
return timeoutOption(d) return timeoutOption(d)
} }
func (d timeoutOption) String() string { return fmt.Sprintf("Timeout(%v)", time.Duration(d)) }
func (d timeoutOption) Type() OptionType { return TimeoutOpt }
func (d timeoutOption) Value() interface{} { return time.Duration(d) }
// Deadline returns an option to specify the deadline for the given task. // Deadline returns an option to specify the deadline for the given task.
// If it reaches the deadline before the Handler returns, then the task
// will be retried.
//
// If there's a conflicting Timeout option, whichever comes earliest
// will be used.
func Deadline(t time.Time) Option { func Deadline(t time.Time) Option {
return deadlineOption(t) return deadlineOption(t)
} }
func (t deadlineOption) String() string {
return fmt.Sprintf("Deadline(%v)", time.Time(t).Format(time.UnixDate))
}
func (t deadlineOption) Type() OptionType { return DeadlineOpt }
func (t deadlineOption) Value() interface{} { return time.Time(t) }
// Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired, another task with the same uniqueness may be enqueued.
// ErrDuplicateTask error is returned when enqueueing a duplicate task.
//
// Uniqueness of a task is based on the following properties:
// - Task Type
// - Task Payload
// - Queue Name
func Unique(ttl time.Duration) Option {
return uniqueOption(ttl)
}
func (ttl uniqueOption) String() string { return fmt.Sprintf("Unique(%v)", time.Duration(ttl)) }
func (ttl uniqueOption) Type() OptionType { return UniqueOpt }
func (ttl uniqueOption) Value() interface{} { return time.Duration(ttl) }
// ProcessAt returns an option to specify when to process the given task.
//
// If there's a conflicting ProcessIn option, the last option passed to Enqueue overrides the others.
func ProcessAt(t time.Time) Option {
return processAtOption(t)
}
func (t processAtOption) String() string {
return fmt.Sprintf("ProcessAt(%v)", time.Time(t).Format(time.UnixDate))
}
func (t processAtOption) Type() OptionType { return ProcessAtOpt }
func (t processAtOption) Value() interface{} { return time.Time(t) }
// ProcessIn returns an option to specify when to process the given task relative to the current time.
//
// If there's a conflicting ProcessAt option, the last option passed to Enqueue overrides the others.
func ProcessIn(d time.Duration) Option {
return processInOption(d)
}
func (d processInOption) String() string { return fmt.Sprintf("ProcessIn(%v)", time.Duration(d)) }
func (d processInOption) Type() OptionType { return ProcessInOpt }
func (d processInOption) Value() interface{} { return time.Duration(d) }
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
var ErrDuplicateTask = errors.New("task already exists")
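A sketch of the intended usage, assuming an already constructed client; the task type and TTL are placeholders.

package tasks

import (
    "errors"
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func enqueueWelcomeEmail(client *asynq.Client, userID int) {
    t := asynq.NewTask("email:welcome", map[string]interface{}{"user_id": userID})
    // Unique(24h): at most one task with the same type, payload, and queue per 24 hours.
    _, err := client.Enqueue(t, asynq.Unique(24*time.Hour))
    switch {
    case errors.Is(err, asynq.ErrDuplicateTask):
        log.Printf("welcome email already enqueued for user %d; skipping", userID)
    case err != nil:
        log.Printf("enqueue failed: %v", err)
    }
}
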
type option struct { type option struct {
retry int retry int
queue string queue string
timeout time.Duration timeout time.Duration
deadline time.Time deadline time.Time
uniqueTTL time.Duration
processAt time.Time
} }
func composeOptions(opts ...Option) option { // composeOptions merges user provided options into the default options
// and returns the composed option.
// It also validates the user provided options and returns an error if any of
// the user provided options fail the validations.
func composeOptions(opts ...Option) (option, error) {
res := option{ res := option{
retry: defaultMaxRetry, retry: defaultMaxRetry,
queue: base.DefaultQueueName, queue: base.DefaultQueueName,
timeout: 0, // do not set to defaultTimeout here
deadline: time.Time{}, deadline: time.Time{},
processAt: time.Now(),
} }
for _, opt := range opts { for _, opt := range opts {
switch opt := opt.(type) { switch opt := opt.(type) {
case retryOption: case retryOption:
res.retry = int(opt) res.retry = int(opt)
case queueOption: case queueOption:
res.queue = string(opt) trimmed := strings.TrimSpace(string(opt))
if err := base.ValidateQueueName(trimmed); err != nil {
return option{}, err
}
res.queue = trimmed
case timeoutOption: case timeoutOption:
res.timeout = time.Duration(opt) res.timeout = time.Duration(opt)
case deadlineOption: case deadlineOption:
res.deadline = time.Time(opt) res.deadline = time.Time(opt)
case uniqueOption:
res.uniqueTTL = time.Duration(opt)
case processAtOption:
res.processAt = time.Time(opt)
case processInOption:
res.processAt = time.Now().Add(time.Duration(opt))
default: default:
// ignore unexpected option // ignore unexpected option
} }
} }
return res return res, nil
} }
const (
	// Default max retry count used if nothing is specified.
	defaultMaxRetry = 25

	// Default timeout used if both timeout and deadline are not specified.
	defaultTimeout = 30 * time.Minute
)
// EnqueueAt schedules task to be enqueued at the specified time.
//
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
	opt := composeOptions(opts...)
	msg := &base.TaskMessage{
		ID:       xid.New(),
		Type:     task.Type,
		Payload:  task.Payload.data,
		Queue:    opt.queue,
		Retry:    opt.retry,
		Timeout:  opt.timeout.String(),
		Deadline: opt.deadline.Format(time.RFC3339),
	}
	return c.enqueue(msg, t)
}

// Value zero indicates no timeout and no deadline.
var (
	noTimeout  time.Duration = 0
	noDeadline time.Time     = time.Unix(0, 0)
)

// SetDefaultOptions sets options to be used for a given task type.
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
//
// Default options can be overridden by options passed at enqueue time.
func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.opts[taskType] = opts
}
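A sketch of how per-task-type defaults interact with per-call options, assuming an existing client; the task type, queue names, and payloads are placeholders.

package tasks

import (
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func enqueueResizes(client *asynq.Client) {
    // Register defaults once for this task type.
    client.SetDefaultOptions("image:resize",
        asynq.Queue("low"), asynq.MaxRetry(3), asynq.Timeout(2*time.Minute))

    // Uses the defaults registered above.
    if _, err := client.Enqueue(asynq.NewTask("image:resize", map[string]interface{}{"src": "a.png"})); err != nil {
        log.Fatal(err)
    }

    // Per-call options are appended after the defaults, so Queue("critical") wins here.
    if _, err := client.Enqueue(asynq.NewTask("image:resize", map[string]interface{}{"src": "b.png"}),
        asynq.Queue("critical")); err != nil {
        log.Fatal(err)
    }
}
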
// A Result holds enqueued task's metadata.
type Result struct {
// ID is a unique identifier for the task.
ID string
// EnqueuedAt is the time the task was enqueued in UTC.
EnqueuedAt time.Time
// ProcessAt indicates when the task should be processed.
ProcessAt time.Time
// Retry is the maximum number of retry for the task.
Retry int
// Queue is a name of the queue the task is enqueued to.
Queue string
// Timeout is the timeout value for the task.
// Counting for timeout starts when a worker starts processing the task.
// If task processing doesn't complete within the timeout, the task will be retried.
// The value zero means no timeout.
//
// If deadline is set, min(now+timeout, deadline) is used, where the now is the time when
// a worker starts processing the task.
Timeout time.Duration
// Deadline is the deadline value for the task.
// If task processing doesn't complete before the deadline, the task will be retried.
// The value time.Unix(0, 0) means no deadline.
//
// If timeout is set, min(now+timeout, deadline) is used, where the now is the time when
// a worker starts processing the task.
Deadline time.Time
}
// Close closes the connection with redis.
func (c *Client) Close() error {
return c.rdb.Close()
}
// Enqueue enqueues the given task to be processed asynchronously.
// //
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error. // Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
// //
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error { // By default, max retry is set to 25 and timeout is set to 30 minutes.
return c.EnqueueAt(time.Now(), task, opts...) // If no ProcessAt or ProcessIn options are passed, the task will be processed immediately.
func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
c.mu.Lock()
if defaults, ok := c.opts[task.Type]; ok {
opts = append(defaults, opts...)
}
c.mu.Unlock()
opt, err := composeOptions(opts...)
if err != nil {
return nil, err
}
deadline := noDeadline
if !opt.deadline.IsZero() {
deadline = opt.deadline
}
timeout := noTimeout
if opt.timeout != 0 {
timeout = opt.timeout
}
if deadline.Equal(noDeadline) && timeout == noTimeout {
// If neither deadline nor timeout are set, use default timeout.
timeout = defaultTimeout
}
var uniqueKey string
if opt.uniqueTTL > 0 {
uniqueKey = base.UniqueKey(opt.queue, task.Type, task.Payload.data)
}
msg := &base.TaskMessage{
ID: uuid.New(),
Type: task.Type,
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Deadline: deadline.Unix(),
Timeout: int64(timeout.Seconds()),
UniqueKey: uniqueKey,
}
now := time.Now()
if opt.processAt.Before(now) || opt.processAt.Equal(now) {
opt.processAt = now
err = c.enqueue(msg, opt.uniqueTTL)
} else {
err = c.schedule(msg, opt.processAt, opt.uniqueTTL)
}
switch {
case err == rdb.ErrDuplicateTask:
return nil, fmt.Errorf("%w", ErrDuplicateTask)
case err != nil:
return nil, err
}
return &Result{
ID: msg.ID.String(),
EnqueuedAt: time.Now().UTC(),
ProcessAt: opt.processAt,
Queue: msg.Queue,
Retry: msg.Retry,
Timeout: timeout,
Deadline: deadline,
}, nil
} }
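For illustration, a sketch of enqueueing with scheduling options and reading the returned Result, assuming an existing client; the task type, delay, and retry count are placeholders.

package tasks

import (
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func enqueueDigest(client *asynq.Client, userID int) {
    res, err := client.Enqueue(
        asynq.NewTask("email:digest", map[string]interface{}{"user_id": userID}),
        asynq.ProcessIn(30*time.Minute),
        asynq.MaxRetry(10),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("enqueued id=%s queue=%s process_at=%v timeout=%v",
        res.ID, res.Queue, res.ProcessAt, res.Timeout)
}
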
// EnqueueIn schedules task to be enqueued after the specified delay. func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
// if uniqueTTL > 0 {
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error. return c.rdb.EnqueueUnique(msg, uniqueTTL)
// }
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now().Add(d), task, opts...)
}
func (c *Client) enqueue(msg *base.TaskMessage, t time.Time) error {
if time.Now().After(t) {
return c.rdb.Enqueue(msg) return c.rdb.Enqueue(msg)
}
func (c *Client) schedule(msg *base.TaskMessage, t time.Time, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
ttl := t.Add(uniqueTTL).Sub(time.Now())
return c.rdb.ScheduleUnique(msg, t, ttl)
} }
return c.rdb.Schedule(msg, t) return c.rdb.Schedule(msg, t)
} }


@@ -5,75 +5,95 @@
package asynq package asynq
import ( import (
"errors"
"testing" "testing"
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
func TestClientEnqueueAt(t *testing.T) { func TestClientEnqueueWithProcessAtOption(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var ( var (
now = time.Now() now = time.Now()
oneHourLater = now.Add(time.Hour) oneHourLater = now.Add(time.Hour)
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
) )
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
processAt time.Time processAt time.Time // value for ProcessAt option
opts []Option opts []Option // other options
wantEnqueued map[string][]*base.TaskMessage wantRes *Result
wantScheduled []h.ZSetEntry wantPending map[string][]*base.TaskMessage
wantScheduled map[string][]base.Z
}{ }{
{ {
desc: "Process task immediately", desc: "Process task immediately",
task: task, task: task,
processAt: now, processAt: now,
opts: []Option{}, opts: []Option{},
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ EnqueuedAt: now.UTC(),
&base.TaskMessage{ ProcessAt: now,
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil wantScheduled: map[string][]base.Z{
"default": {},
},
}, },
{ {
desc: "Schedule task to be processed in the future", desc: "Schedule task to be processed in the future",
task: task, task: task,
processAt: oneHourLater, processAt: oneHourLater,
opts: []Option{}, opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantRes: &Result{
wantScheduled: []h.ZSetEntry{ EnqueuedAt: now.UTC(),
ProcessAt: oneHourLater,
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {},
},
wantScheduled: map[string][]base.Z{
"default": {
{ {
Msg: &base.TaskMessage{ Message: &base.TaskMessage{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
},
Score: oneHourLater.Unix(),
}, },
Score: float64(oneHourLater.Unix()),
}, },
}, },
}, },
@@ -82,45 +102,50 @@ func TestClientEnqueueAt(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...) opts := append(tc.opts, ProcessAt(tc.processAt))
gotRes, err := client.Enqueue(tc.task, opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, ProcessAt(%v)) returned %v, want %v; (-want,+got)\n%s",
tc.desc, tc.processAt, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
} }
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledEntries(t, r) gotScheduled := h.GetScheduledEntries(t, r, qname)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotScheduled, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledKey(qname), diff)
}
} }
} }
} }
func TestClientEnqueue(t *testing.T) { func TestClientEnqueue(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
now := time.Now()
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
opts []Option opts []Option
wantEnqueued map[string][]*base.TaskMessage wantRes *Result
wantPending map[string][]*base.TaskMessage
}{ }{
{ {
desc: "Process task immediately with a custom retry count", desc: "Process task immediately with a custom retry count",
@@ -128,15 +153,22 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
MaxRetry(3), MaxRetry(3),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: 3,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 3, Retry: 3,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -147,15 +179,22 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
MaxRetry(-2), MaxRetry(-2),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: 0,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero Retry: 0, // Retry count should be set to zero
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -167,15 +206,22 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(2), MaxRetry(2),
MaxRetry(10), MaxRetry(10),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: 10,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 10, // Last option takes precedence Retry: 10, // Last option takes precedence
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -186,15 +232,22 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Queue("custom"), Queue("custom"),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"custom": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "custom",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"custom": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "custom", Queue: "custom",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -205,15 +258,22 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Queue("HIGH"), Queue("HIGH"),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"high": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "high",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"high": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "high", Queue: "high",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -224,15 +284,22 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Timeout(20 * time.Second), Timeout(20 * time.Second),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: (20 * time.Second).String(), Timeout: 20,
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -243,15 +310,49 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)), Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: defaultMaxRetry,
Timeout: noTimeout,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(noTimeout.Seconds()),
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Format(time.RFC3339), Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
},
},
},
},
{
desc: "With both deadline and timeout options",
task: task,
opts: []Option{
Timeout(20 * time.Second),
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
},
wantRes: &Result{
ProcessAt: now,
Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: 20,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
}, },
}, },
}, },
@@ -261,14 +362,22 @@ func TestClientEnqueue(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.Enqueue(tc.task, tc.opts...) gotRes, err := client.Enqueue(tc.task, tc.opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
got := h.GetEnqueuedMessages(t, r, qname) got := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
@@ -276,45 +385,51 @@ func TestClientEnqueue(t *testing.T) {
} }
} }
func TestClientEnqueueIn(t *testing.T) { func TestClientEnqueueWithProcessInOption(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
now := time.Now()
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
delay time.Duration delay time.Duration // value for ProcessIn option
opts []Option opts []Option // other options
wantEnqueued map[string][]*base.TaskMessage wantRes *Result
wantScheduled []h.ZSetEntry wantPending map[string][]*base.TaskMessage
wantScheduled map[string][]base.Z
}{ }{
{ {
desc: "schedule a task to be enqueued in one hour", desc: "schedule a task to be processed in one hour",
task: task, task: task,
delay: time.Hour, delay: 1 * time.Hour,
opts: []Option{}, opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantRes: &Result{
wantScheduled: []h.ZSetEntry{ ProcessAt: now.Add(1 * time.Hour),
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {},
},
wantScheduled: map[string][]base.Z{
"default": {
{ {
Msg: &base.TaskMessage{ Message: &base.TaskMessage{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
},
Score: time.Now().Add(time.Hour).Unix(),
}, },
Score: float64(time.Now().Add(time.Hour).Unix()),
}, },
}, },
}, },
@@ -323,41 +438,340 @@ func TestClientEnqueueIn(t *testing.T) {
task: task, task: task,
delay: 0, delay: 0,
opts: []Option{}, opts: []Option{},
wantEnqueued: map[string][]*base.TaskMessage{ wantRes: &Result{
"default": []*base.TaskMessage{ ProcessAt: now,
&base.TaskMessage{ Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantPending: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type, Type: task.Type,
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil wantScheduled: map[string][]base.Z{
"default": {},
},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueIn(tc.delay, tc.task, tc.opts...) opts := append(tc.opts, ProcessIn(tc.delay))
gotRes, err := client.Enqueue(tc.task, opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, ProcessIn(%v)) returned %v, want %v; (-want,+got)\n%s",
tc.desc, tc.delay, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
} }
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledEntries(t, r) gotScheduled := h.GetScheduledEntries(t, r, qname)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotScheduled, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledKey(qname), diff)
}
} }
} }
} }
func TestClientEnqueueError(t *testing.T) {
r := setup(t)
client := NewClient(getRedisConnOpt(t))
defer client.Close()
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
tests := []struct {
desc string
task *Task
opts []Option
}{
{
desc: "With empty queue name",
task: task,
opts: []Option{
Queue(""),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
_, err := client.Enqueue(tc.task, tc.opts...)
if err == nil {
t.Errorf("%s; client.Enqueue(task, opts...) did not return non-nil error", tc.desc)
}
}
}
func TestClientDefaultOptions(t *testing.T) {
r := setup(t)
now := time.Now()
tests := []struct {
desc string
defaultOpts []Option // options set at the client level.
opts []Option // options used at enqueue time.
task *Task
wantRes *Result
queue string // queue that the message should go into.
want *base.TaskMessage
}{
{
desc: "With queue routing option",
defaultOpts: []Option{Queue("feed")},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
ProcessAt: now,
Queue: "feed",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: defaultMaxRetry,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With multiple options",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
ProcessAt: now,
Queue: "feed",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With overriding options at enqueue time",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil),
wantRes: &Result{
ProcessAt: now,
Queue: "critical",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "critical",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "critical",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
c := NewClient(getRedisConnOpt(t))
defer c.Close()
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
gotRes, err := c.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
pending := h.GetPendingMessages(t, r, tc.queue)
if len(pending) != 1 {
t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
tc.desc, tc.queue, len(pending))
continue
}
got := pending[0]
if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in pending task message; (-want,+got)\n%s",
tc.desc, diff)
}
}
}
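In application code, the same default/override interplay might look like the following sketch (the task type and queue names are illustrative):

package example

import (
	"log"

	"github.com/hibiken/asynq"
)

func enqueueWithDefaults(client *asynq.Client) {
	// Defaults registered for a task type apply to every Enqueue of that type...
	client.SetDefaultOptions("feed:import", asynq.Queue("feed"), asynq.MaxRetry(5))

	// ...unless overridden at enqueue time; here the Queue option wins.
	res, err := client.Enqueue(asynq.NewTask("feed:import", nil), asynq.Queue("critical"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("queued in %q with retry limit %d", res.Queue, res.Retry)
}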
func TestClientEnqueueUnique(t *testing.T) {
r := setup(t)
c := NewClient(getRedisConnOpt(t))
defer c.Close()
tests := []struct {
task *Task
ttl time.Duration
}{
{
NewTask("email", map[string]interface{}{"user_id": 123}),
time.Hour,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.Enqueue(tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
continue
}
// Enqueue the task again. It should fail.
_, err = c.Enqueue(tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
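The behavior exercised above corresponds to this application-level pattern, sketched with an illustrative task type, payload, and TTL:

package example

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func enqueueUniqueEmail(client *asynq.Client) {
	t := asynq.NewTask("email", map[string]interface{}{"user_id": 123})

	// At most one "email" task with this payload may be enqueued per hour.
	_, err := client.Enqueue(t, asynq.Unique(time.Hour))
	switch {
	case errors.Is(err, asynq.ErrDuplicateTask):
		log.Println("duplicate task; skipping")
	case err != nil:
		log.Fatal(err)
	}
}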
func TestClientEnqueueUniqueWithProcessInOption(t *testing.T) {
r := setup(t)
c := NewClient(getRedisConnOpt(t))
defer c.Close()
tests := []struct {
task *Task
d time.Duration
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Hour,
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.Enqueue(tc.task, ProcessIn(tc.d), Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
_, err = c.Enqueue(tc.task, ProcessIn(tc.d), Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
r := setup(t)
c := NewClient(getRedisConnOpt(t))
defer c.Close()
tests := []struct {
task *Task
at time.Time
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Now().Add(time.Hour),
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.Enqueue(tc.task, ProcessAt(tc.at), Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
_, err = c.Enqueue(tc.task, ProcessAt(tc.at), Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}

87
context.go Normal file

@@ -0,0 +1,87 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"time"
"github.com/hibiken/asynq/internal/base"
)
// A taskMetadata holds task-scoped data to put in context.
type taskMetadata struct {
id string
maxRetry int
retryCount int
qname string
}
// ctxKey type is unexported to prevent collisions with context keys defined in
// other packages.
type ctxKey int
// metadataCtxKey is the context key for the task metadata.
// Its value of zero is arbitrary.
const metadataCtxKey ctxKey = 0
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
metadata := taskMetadata{
id: msg.ID.String(),
maxRetry: msg.Retry,
retryCount: msg.Retried,
qname: msg.Queue,
}
ctx := context.WithValue(context.Background(), metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline)
}
// GetTaskID extracts a task ID from a context, if any.
//
// ID of a task is guaranteed to be unique.
// ID of a task doesn't change if the task is being retried.
func GetTaskID(ctx context.Context) (id string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.id, true
}
// GetRetryCount extracts retry count from a context, if any.
//
// Return value n indicates the number of times the associated task has been
// retried so far.
func GetRetryCount(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.retryCount, true
}
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.maxRetry, true
}
// GetQueueName extracts queue name from a context, if any.
//
// Return value qname indicates which queue the task was pulled from.
func GetQueueName(ctx context.Context) (qname string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.qname, true
}
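Taken together, these helpers let a handler adjust its behavior per attempt; a minimal sketch (the task type and logging are illustrative, not part of this change):

package example

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// lastAttemptAware is a hypothetical handler that logs more loudly on the
// final retry, using the metadata injected by createContext.
func lastAttemptAware(ctx context.Context, t *asynq.Task) error {
	id, _ := asynq.GetTaskID(ctx)
	retried, _ := asynq.GetRetryCount(ctx)
	maxRetry, _ := asynq.GetMaxRetry(ctx)
	if retried == maxRetry {
		log.Printf("task %s (%s): final attempt", id, t.Type)
	}
	// ... process the task; returning a non-nil error triggers a retry ...
	return nil
}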

160
context_test.go Normal file

@@ -0,0 +1,160 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func TestCreateContextWithFutureDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithPastDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(-2 * time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
defer cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
}
}
func TestGetTaskMetadataFromContext(t *testing.T) {
tests := []struct {
desc string
msg *base.TaskMessage
}{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "default"}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 10, Retried: 5, Timeout: 1800, Queue: "default"}},
{"with custom queue name", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "custom"}},
}
for _, tc := range tests {
ctx, cancel := createContext(tc.msg, time.Now().Add(30*time.Minute))
defer cancel()
id, ok := GetTaskID(ctx)
if !ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == false", tc.desc)
}
if ok && id != tc.msg.ID.String() {
t.Errorf("%s: GetTaskID(ctx) returned id == %q, want %q", tc.desc, id, tc.msg.ID.String())
}
retried, ok := GetRetryCount(ctx)
if !ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == false", tc.desc)
}
if ok && retried != tc.msg.Retried {
t.Errorf("%s: GetRetryCount(ctx) returned n == %d want %d", tc.desc, retried, tc.msg.Retried)
}
maxRetry, ok := GetMaxRetry(ctx)
if !ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == false", tc.desc)
}
if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
}
qname, ok := GetQueueName(ctx)
if !ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == false", tc.desc)
}
if ok && qname != tc.msg.Queue {
t.Errorf("%s: GetQueueName(ctx) returned qname == %q, want %q", tc.desc, qname, tc.msg.Queue)
}
}
}
func TestGetTaskMetadataFromContextError(t *testing.T) {
tests := []struct {
desc string
ctx context.Context
}{
{"with background context", context.Background()},
}
for _, tc := range tests {
if _, ok := GetTaskID(tc.ctx); ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == true", tc.desc)
}
if _, ok := GetRetryCount(tc.ctx); ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == true", tc.desc)
}
if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
}
if _, ok := GetQueueName(tc.ctx); ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == true", tc.desc)
}
}
}

30
doc.go

@@ -3,42 +3,44 @@
// that can be found in the LICENSE file. // that can be found in the LICENSE file.
/* /*
Package asynq provides a framework for asynchronous task processing. Package asynq provides a framework for Redis-based distributed task queues.
Asynq uses Redis as a message broker. To connect to redis server, Asynq uses Redis as a message broker. To connect to redis,
specify the options using one of RedisConnOpt types. specify the connection using one of RedisConnOpt types.
redis = &asynq.RedisClientOpt{ redisConnOpt = asynq.RedisClientOpt{
Addr: "127.0.0.1:6379", Addr: "127.0.0.1:6379",
Password: "xxxxx", Password: "xxxxx",
DB: 3, DB: 3,
} }
The Client is used to register a task to be processed at the specified time. The Client is used to enqueue a task.
Task is created with two parameters: its type and payload.
client := asynq.NewClient(redis) client := asynq.NewClient(redisConnOpt)
// Task is created with two parameters: its type and payload.
t := asynq.NewTask( t := asynq.NewTask(
"send_email", "send_email",
map[string]interface{}{"user_id": 42}) map[string]interface{}{"user_id": 42})
// Enqueue the task to be processed immediately. // Enqueue the task to be processed immediately.
err := client.Enqueue(t) res, err := client.Enqueue(t)
// Schedule the task to be processed in one minute. // Schedule the task to be processed after one minute.
err = client.EnqueueIn(time.Minute, t) res, err = client.Enqueue(t, asynq.ProcessIn(1*time.Minute))
The Background is used to run the background task processing with a given The Server is used to run the task processing workers with a given
handler. handler.
bg := asynq.NewBackground(redis, &asynq.Config{ srv := asynq.NewServer(redisConnOpt, asynq.Config{
Concurrency: 10, Concurrency: 10,
}) })
bg.Run(handler) if err := srv.Run(handler); err != nil {
log.Fatal(err)
}
Handler is an interface with one method ProcessTask which Handler is an interface type with a method which
takes a task and returns an error. Handler should return nil if takes a task and returns an error. Handler should return nil if
the processing is successful, otherwise return a non-nil error. the processing is successful, otherwise return a non-nil error.
If handler panics or returns a non-nil error, the task will be retried in the future. If handler panics or returns a non-nil error, the task will be retried in the future.
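Since Enqueue now returns a Result, callers can record where and when a task will run; a small sketch using the Result fields shown in this change (ID, Queue, ProcessAt):

package example

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func enqueueAndLog() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	t := asynq.NewTask("send_email", map[string]interface{}{"user_id": 42})

	// Schedule the task for one minute from now and inspect the returned Result.
	res, err := client.Enqueue(t, asynq.ProcessIn(1*time.Minute))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("enqueued: id=%v queue=%s process_at=%v", res.ID, res.Queue, res.ProcessAt)
}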

Several binary image files were updated (contents not shown); docs/assets/cluster.png and docs/assets/overview.png were added.

115
example_test.go Normal file

@@ -0,0 +1,115 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq_test
import (
"fmt"
"log"
"os"
"os/signal"
"time"
"github.com/hibiken/asynq"
"golang.org/x/sys/unix"
)
func ExampleServer_Run() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
// Run blocks and waits for os signal to terminate the program.
if err := srv.Run(h); err != nil {
log.Fatal(err)
}
}
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal
srv.Stop()
}
func ExampleServer_Quiet() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
// Handle SIGTERM, SIGINT to exit the program.
// Handle SIGTSTP to stop processing new tasks.
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks
continue
}
break
}
srv.Stop()
}
func ExampleScheduler() {
scheduler := asynq.NewScheduler(
asynq.RedisClientOpt{Addr: ":6379"},
&asynq.SchedulerOpts{Location: time.Local},
)
if _, err := scheduler.Register("* * * * *", asynq.NewTask("task1", nil)); err != nil {
log.Fatal(err)
}
if _, err := scheduler.Register("@every 30s", asynq.NewTask("task2", nil)); err != nil {
log.Fatal(err)
}
// Run blocks and waits for os signal to terminate the program.
if err := scheduler.Run(); err != nil {
log.Fatal(err)
}
}
func ExampleParseRedisURI() {
rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
if err != nil {
log.Fatal(err)
}
r, ok := rconn.(asynq.RedisClientOpt)
if !ok {
log.Fatal("unexpected type")
}
fmt.Println(r.Addr)
fmt.Println(r.DB)
// Output:
// localhost:6379
// 10
}

75
forwarder.go Normal file

@@ -0,0 +1,75 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// A forwarder is responsible for moving scheduled and retry tasks to pending state
// so that the tasks get processed by the workers.
type forwarder struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "forwarder" goroutine.
done chan struct{}
// list of queue names to check and enqueue.
queues []string
// poll interval on average
avgInterval time.Duration
}
type forwarderParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
}
func newForwarder(params forwarderParams) *forwarder {
return &forwarder{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
avgInterval: params.interval,
}
}
func (f *forwarder) terminate() {
f.logger.Debug("Forwarder shutting down...")
// Signal the forwarder goroutine to stop polling.
f.done <- struct{}{}
}
// start starts the "forwarder" goroutine.
func (f *forwarder) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-f.done:
f.logger.Debug("Forwarder done")
return
case <-time.After(f.avgInterval):
f.exec()
}
}
}()
}
func (f *forwarder) exec() {
if err := f.broker.CheckAndEnqueue(f.queues...); err != nil {
f.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
}
}

137
forwarder_test.go Normal file

@@ -0,0 +1,137 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestForwarder(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newForwarder(forwarderParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "critical"},
interval: pollInterval,
})
t1 := h.NewTaskMessageWithQueue("gen_thumbnail", nil, "default")
t2 := h.NewTaskMessageWithQueue("send_email", nil, "critical")
t3 := h.NewTaskMessageWithQueue("reindex", nil, "default")
t4 := h.NewTaskMessageWithQueue("sync", nil, "critical")
now := time.Now()
tests := []struct {
initScheduled map[string][]base.Z // scheduled queue initial state
initRetry map[string][]base.Z // retry queue initial state
initPending map[string][]*base.TaskMessage // default queue initial state
wait time.Duration // wait duration before checking for final state
wantScheduled map[string][]*base.TaskMessage // schedule queue final state
wantRetry map[string][]*base.TaskMessage // retry queue final state
wantPending map[string][]*base.TaskMessage // default queue final state
}{
{
initScheduled: map[string][]base.Z{
"default": {{Message: t1, Score: now.Add(time.Hour).Unix()}},
"critical": {{Message: t2, Score: now.Add(-2 * time.Second).Unix()}},
},
initRetry: map[string][]base.Z{
"default": {{Message: t3, Score: time.Now().Add(-500 * time.Millisecond).Unix()}},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {t1},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t3},
"critical": {t2, t4},
},
},
{
initScheduled: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Unix()},
{Message: t3, Score: now.Add(-500 * time.Millisecond).Unix()},
},
"critical": {
{Message: t2, Score: now.Add(-2 * time.Second).Unix()},
},
},
initRetry: map[string][]base.Z{
"default": {},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t1, t3},
"critical": {t2, t4},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedAllScheduledQueues(t, r, tc.initScheduled) // initialize scheduled queue
h.SeedAllRetryQueues(t, r, tc.initRetry) // initialize retry queue
h.SeedAllPendingQueues(t, r, tc.initPending) // initialize default queue
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.terminate()
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledMessages(t, r, qname)
if diff := cmp.Diff(want, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.ScheduledKey(qname), diff)
}
}
for qname, want := range tc.wantRetry {
gotRetry := h.GetRetryMessages(t, r, qname)
if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.RetryKey(qname), diff)
}
}
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.QueueKey(qname), diff)
}
}
}
}

7
go.mod

@@ -3,12 +3,13 @@ module github.com/hibiken/asynq
go 1.13 go 1.13
require ( require (
github.com/go-redis/redis/v7 v7.2.0 github.com/go-redis/redis/v7 v7.4.0
github.com/google/go-cmp v0.4.0 github.com/google/go-cmp v0.4.0
github.com/rs/xid v1.2.1 github.com/google/uuid v1.1.1
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cast v1.3.1 github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0 go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect gopkg.in/yaml.v2 v2.2.7 // indirect
) )

26
go.sum

@@ -2,16 +2,17 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.0.0-beta.4 h1:p6z7Pde69EGRWvlC++y8aFcaWegyrKHzOBGo0zUACTQ=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
@@ -20,16 +21,14 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w= github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo= github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc= github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ= github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
@@ -39,10 +38,8 @@ go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco= golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -60,8 +57,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=

80
healthcheck.go Normal file

@@ -0,0 +1,80 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// healthchecker is responsible for pinging the broker periodically
// and calling the user-provided HealthCheckFunc with the ping result.
type healthchecker struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "healthchecker" goroutine.
done chan struct{}
// interval between healthchecks.
interval time.Duration
// function to call periodically.
healthcheckFunc func(error)
}
type healthcheckerParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
healthcheckFunc func(error)
}
func newHealthChecker(params healthcheckerParams) *healthchecker {
return &healthchecker{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
interval: params.interval,
healthcheckFunc: params.healthcheckFunc,
}
}
func (hc *healthchecker) terminate() {
if hc.healthcheckFunc == nil {
return
}
hc.logger.Debug("Healthchecker shutting down...")
// Signal the healthchecker goroutine to stop.
hc.done <- struct{}{}
}
func (hc *healthchecker) start(wg *sync.WaitGroup) {
if hc.healthcheckFunc == nil {
return
}
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(hc.interval)
for {
select {
case <-hc.done:
hc.logger.Debug("Healthchecker done")
timer.Stop()
return
case <-timer.C:
err := hc.broker.Ping()
hc.healthcheckFunc(err)
timer.Reset(hc.interval)
}
}
}()
}
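This goroutine is driven by the server configuration; assuming Config exposes HealthCheckFunc and HealthCheckInterval fields (they are not part of this hunk), wiring it up might look like:

package example

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func newServerWithHealthCheck() *asynq.Server {
	return asynq.NewServer(
		asynq.RedisClientOpt{Addr: "127.0.0.1:6379"},
		asynq.Config{
			Concurrency: 10,
			// Assumed Config field: called with the result of each broker ping.
			HealthCheckFunc: func(err error) {
				if err != nil {
					log.Printf("asynq: broker ping failed: %v", err)
				}
			},
			// Assumed Config field: how often to ping the broker.
			HealthCheckInterval: 15 * time.Second,
		},
	)
}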

103
healthcheck_test.go Normal file

@@ -0,0 +1,103 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
)
func TestHealthChecker(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
var (
// mu guards called and e variables.
mu sync.Mutex
called int
e error
)
checkFn := func(err error) {
mu.Lock()
defer mu.Unlock()
called++
e = err
}
hc := newHealthChecker(healthcheckerParams{
logger: testLogger,
broker: rdbClient,
interval: 1 * time.Second,
healthcheckFunc: checkFn,
})
hc.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second)
mu.Lock()
if called == 0 {
t.Errorf("Healthchecker did not call the provided HealthCheckFunc")
}
if e != nil {
t.Errorf("HealthCheckFunc was called with non-nil error: %v", e)
}
mu.Unlock()
hc.terminate()
}
func TestHealthCheckerWhenRedisDown(t *testing.T) {
// Make sure that healthchecker goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
var (
// mu guards called and e variables.
mu sync.Mutex
called int
e error
)
checkFn := func(err error) {
mu.Lock()
defer mu.Unlock()
called++
e = err
}
hc := newHealthChecker(healthcheckerParams{
logger: testLogger,
broker: testBroker,
interval: 1 * time.Second,
healthcheckFunc: checkFn,
})
testBroker.Sleep()
hc.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second)
mu.Lock()
if called == 0 {
t.Errorf("Healthchecker did not call the provided HealthCheckFunc")
}
if e == nil {
t.Errorf("HealthCheckFunc was called with nil; want non-nil error")
}
mu.Unlock()
hc.terminate()
}


@@ -5,69 +5,166 @@
package asynq package asynq
import ( import (
"os"
"sync" "sync"
"time" "time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/log"
) )
// heartbeater is responsible for writing process info to redis periodically to // heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up. // indicate that the background worker process is up.
type heartbeater struct { type heartbeater struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
ps *base.ProcessState
// channel to communicate back to the long running "heartbeater" goroutine. // channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{} done chan struct{}
// interval between heartbeats. // interval between heartbeats.
interval time.Duration interval time.Duration
// following fields are initialized at construction time and are immutable.
host string
pid int
serverID string
concurrency int
queues map[string]int
strictPriority bool
// following fields are mutable and should be accessed only by the
// heartbeater goroutine. In other words, confine these variables
// to this goroutine only.
started time.Time
workers map[string]*workerInfo
// status is shared with other goroutine but is concurrency safe.
status *base.ServerStatus
// channels to receive updates on active workers.
starting <-chan *workerInfo
finished <-chan *base.TaskMessage
} }
func newHeartbeater(l Logger, rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater { type heartbeaterParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
concurrency int
queues map[string]int
strictPriority bool
status *base.ServerStatus
starting <-chan *workerInfo
finished <-chan *base.TaskMessage
}
func newHeartbeater(params heartbeaterParams) *heartbeater {
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
return &heartbeater{ return &heartbeater{
logger: l, logger: params.logger,
rdb: rdb, broker: params.broker,
ps: ps,
done: make(chan struct{}), done: make(chan struct{}),
interval: interval, interval: params.interval,
host: host,
pid: os.Getpid(),
serverID: uuid.New().String(),
concurrency: params.concurrency,
queues: params.queues,
strictPriority: params.strictPriority,
status: params.status,
workers: make(map[string]*workerInfo),
starting: params.starting,
finished: params.finished,
} }
} }
func (h *heartbeater) terminate() { func (h *heartbeater) terminate() {
h.logger.Info("Heartbeater shutting down...") h.logger.Debug("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop. // Signal the heartbeater goroutine to stop.
h.done <- struct{}{} h.done <- struct{}{}
} }
// A workerInfo holds an active worker information.
type workerInfo struct {
// the task message the worker is processing.
msg *base.TaskMessage
// the time the worker has started processing the message.
started time.Time
// deadline the worker has to finish processing the task by.
deadline time.Time
}
func (h *heartbeater) start(wg *sync.WaitGroup) { func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning)
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
h.started = time.Now()
h.beat() h.beat()
timer := time.NewTimer(h.interval)
for { for {
select { select {
case <-h.done: case <-h.done:
h.rdb.ClearProcessState(h.ps) h.broker.ClearServerState(h.host, h.pid, h.serverID)
h.logger.Info("Heartbeater done") h.logger.Debug("Heartbeater done")
timer.Stop()
return return
case <-time.After(h.interval):
case <-timer.C:
h.beat() h.beat()
timer.Reset(h.interval)
case w := <-h.starting:
h.workers[w.msg.ID.String()] = w
case msg := <-h.finished:
delete(h.workers, msg.ID.String())
} }
} }
}() }()
} }
func (h *heartbeater) beat() { func (h *heartbeater) beat() {
info := base.ServerInfo{
Host: h.host,
PID: h.pid,
ServerID: h.serverID,
Concurrency: h.concurrency,
Queues: h.queues,
StrictPriority: h.strictPriority,
Status: h.status.String(),
Started: h.started,
ActiveWorkerCount: len(h.workers),
}
var ws []*base.WorkerInfo
for id, w := range h.workers {
ws = append(ws, &base.WorkerInfo{
Host: h.host,
PID: h.pid,
ServerID: h.serverID,
ID: id,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: w.msg.Payload,
Started: w.started,
Deadline: w.deadline,
})
}
// Note: Set TTL to be long enough so that it won't expire before we write again // Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed. // and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2) if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil {
if err != nil { h.logger.Errorf("could not write server state data: %v", err)
h.logger.Error("could not write heartbeat data: %v", err)
} }
} }


@@ -14,10 +14,12 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
) )
func TestHeartbeater(t *testing.T) { func TestHeartbeater(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
tests := []struct { tests := []struct {
@@ -27,21 +29,37 @@ func TestHeartbeater(t *testing.T) {
queues map[string]int queues map[string]int
concurrency int concurrency int
}{ }{
{time.Second, "localhost", 45678, map[string]int{"default": 1}, 10}, {2 * time.Second, "localhost", 45678, map[string]int{"default": 1}, 10},
} }
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond) timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{}) ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false) status := base.NewServerStatus(base.StatusIdle)
hb := newHeartbeater(testLogger, rdbClient, state, tc.interval) hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: rdbClient,
interval: tc.interval,
concurrency: tc.concurrency,
queues: tc.queues,
strictPriority: false,
status: status,
starting: make(chan *workerInfo),
finished: make(chan *base.TaskMessage),
})
// Change host and pid fields for testing purpose.
hb.host = tc.host
hb.pid = tc.pid
status.Set(base.StatusRunning)
var wg sync.WaitGroup var wg sync.WaitGroup
hb.start(&wg) hb.start(&wg)
want := &base.ProcessInfo{ want := &base.ServerInfo{
Host: tc.host, Host: tc.host,
PID: tc.pid, PID: tc.pid,
Queues: tc.queues, Queues: tc.queues,
@@ -51,49 +69,49 @@ func TestHeartbeater(t *testing.T) {
} }
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval)
ps, err := rdbClient.ListProcesses() ss, err := rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read server info from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
// status change // status change
state.SetStatus(base.StatusStopped) status.Set(base.StatusStopped)
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
want.Status = "stopped" want.Status = "stopped"
ps, err = rdbClient.ListProcesses() ss, err = rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read process status from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
@@ -101,3 +119,36 @@ func TestHeartbeater(t *testing.T) {
hb.terminate() hb.terminate()
} }
} }
func TestHeartbeaterWithRedisDown(t *testing.T) {
// Make sure that heartbeater goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: testBroker,
interval: time.Second,
concurrency: 10,
queues: map[string]int{"default": 1},
strictPriority: false,
status: base.NewServerStatus(base.StatusRunning),
starting: make(chan *workerInfo),
finished: make(chan *base.TaskMessage),
})
testBroker.Sleep()
var wg sync.WaitGroup
hb.start(&wg)
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)
hb.terminate()
}

22
inspeq/doc.go Normal file

@@ -0,0 +1,22 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
/*
Package inspeq provides helper types and functions to inspect queues and tasks managed by Asynq.
Inspector is used to query and mutate the state of queues and tasks.
Example:
inspector := inspeq.New(asynq.RedisClientOpt{Addr: "localhost:6379"})
tasks, err := inspector.ListArchivedTasks("my-queue")
for _, t := range tasks {
if err := inspector.DeleteTaskByKey(t.Key()); err != nil {
// handle error
}
}
*/
package inspeq

956
inspeq/inspector.go Normal file

@@ -0,0 +1,956 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package inspeq
import (
"fmt"
"strconv"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
// Inspector is a client interface to inspect and mutate the state of
// queues and tasks.
type Inspector struct {
rdb *rdb.RDB
}
// New returns a new instance of Inspector.
func New(r asynq.RedisConnOpt) *Inspector {
c, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("inspeq: unsupported RedisConnOpt type %T", r))
}
return &Inspector{
rdb: rdb.NewRDB(c),
}
}
// Close closes the connection with redis.
func (i *Inspector) Close() error {
return i.rdb.Close()
}
// Queues returns a list of all queue names.
func (i *Inspector) Queues() ([]string, error) {
return i.rdb.AllQueues()
}
// QueueStats represents the state of a queue at a certain time.
type QueueStats struct {
// Name of the queue.
Queue string
// Total number of bytes that the queue and its tasks require to be stored in redis.
MemoryUsage int64
// Size is the total number of tasks in the queue.
// The value is the sum of Pending, Active, Scheduled, Retry, and Archived.
Size int
// Number of pending tasks.
Pending int
// Number of active tasks.
Active int
// Number of scheduled tasks.
Scheduled int
// Number of retry tasks.
Retry int
// Number of archived tasks.
Archived int
// Total number of tasks processed during the given date.
// The number includes both succeeded and failed tasks.
Processed int
// Total number of tasks that failed to be processed during the given date.
Failed int
// Paused indicates whether the queue is paused.
// If true, tasks in the queue will not be processed.
Paused bool
// Time when these stats were taken.
Timestamp time.Time
}
// CurrentStats returns the current stats of the given queue.
func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
stats, err := i.rdb.CurrentStats(qname)
if err != nil {
return nil, err
}
return &QueueStats{
Queue: stats.Queue,
MemoryUsage: stats.MemoryUsage,
Size: stats.Size,
Pending: stats.Pending,
Active: stats.Active,
Scheduled: stats.Scheduled,
Retry: stats.Retry,
Archived: stats.Archived,
Processed: stats.Processed,
Failed: stats.Failed,
Paused: stats.Paused,
Timestamp: stats.Timestamp,
}, nil
}
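For reference, querying these stats from application code might look like the following sketch (the queue name and Redis address are illustrative):

package example

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
	"github.com/hibiken/asynq/inspeq"
)

func printQueueStats() {
	inspector := inspeq.New(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer inspector.Close()

	stats, err := inspector.CurrentStats("default")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("queue=%s size=%d pending=%d active=%d failed=%d paused=%t\n",
		stats.Queue, stats.Size, stats.Pending, stats.Active, stats.Failed, stats.Paused)
}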
// DailyStats holds aggregate data for a given day for a given queue.
type DailyStats struct {
// Name of the queue.
Queue string
// Total number of tasks processed during the given date.
// The number includes both succeeded and failed tasks.
Processed int
// Total number of tasks that failed to be processed during the given date.
Failed int
// Date these stats were taken.
Date time.Time
}
// History returns a list of stats from the last n days.
func (i *Inspector) History(qname string, n int) ([]*DailyStats, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
stats, err := i.rdb.HistoricalStats(qname, n)
if err != nil {
return nil, err
}
var res []*DailyStats
for _, s := range stats {
res = append(res, &DailyStats{
Queue: s.Queue,
Processed: s.Processed,
Failed: s.Failed,
Date: s.Time,
})
}
return res, nil
}
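// Example use of History (a sketch; the 7-day window and queue name are assumptions):
//
//	stats, err := inspector.History("default", 7)
//	if err != nil {
//		// handle error
//	}
//	for _, s := range stats {
//		fmt.Printf("%s: processed=%d failed=%d\n", s.Date.Format("2006-01-02"), s.Processed, s.Failed)
//	}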
// ErrQueueNotFound indicates that the specified queue does not exist.
type ErrQueueNotFound struct {
qname string
}
func (e *ErrQueueNotFound) Error() string {
return fmt.Sprintf("queue %q does not exist", e.qname)
}
// ErrQueueNotEmpty indicates that the specified queue is not empty.
type ErrQueueNotEmpty struct {
qname string
}
func (e *ErrQueueNotEmpty) Error() string {
return fmt.Sprintf("queue %q is not empty", e.qname)
}
// DeleteQueue removes the specified queue.
//
// If force is set to true, DeleteQueue will remove the queue regardless of
// the queue size as long as no tasks are active in the queue.
// If force is set to false, DeleteQueue will remove the queue only if
// the queue is empty.
//
// If the specified queue does not exist, DeleteQueue returns ErrQueueNotFound.
// If force is set to false and the specified queue is not empty, DeleteQueue
// returns ErrQueueNotEmpty.
func (i *Inspector) DeleteQueue(qname string, force bool) error {
err := i.rdb.RemoveQueue(qname, force)
if _, ok := err.(*rdb.ErrQueueNotFound); ok {
return &ErrQueueNotFound{qname}
}
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
return &ErrQueueNotEmpty{qname}
}
return err
}
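// Example use of DeleteQueue (a sketch; the queue name is illustrative):
//
//	err := inspector.DeleteQueue("low-priority", false /* force */)
//	switch err.(type) {
//	case nil:
//		// queue deleted
//	case *ErrQueueNotFound:
//		// no queue with that name
//	case *ErrQueueNotEmpty:
//		// queue still holds tasks; pass force=true to remove it anyway
//		// (allowed as long as no tasks are active)
//	}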
// PendingTask is a task in a queue that is ready to be processed.
type PendingTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
}
// ActiveTask is a task that's currently being processed.
type ActiveTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
}
// ScheduledTask is a task scheduled to be processed in the future.
type ScheduledTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
NextProcessAt time.Time
score int64
}
// RetryTask is a task scheduled to be retried in the future.
type RetryTask struct {
*asynq.Task
ID string
Queue string
NextProcessAt time.Time
MaxRetry int
Retried int
LastError string
// TODO: LastFailedAt time.Time
score int64
}
// ArchivedTask is a task archived for debugging and inspection purposes, and
// it won't be retried automatically.
// A task is archived when it exhausts its retry count, or when a user manually
// archives it via the CLI or the Inspector.
type ArchivedTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastFailedAt time.Time
LastError string
score int64
}
// Format string used for task key.
// Format is <prefix>:<uuid>:<score>.
const taskKeyFormat = "%s:%v:%v"
// Prefix used for task key.
const (
keyPrefixPending = "p"
keyPrefixScheduled = "s"
keyPrefixRetry = "r"
keyPrefixArchived = "a"
allKeyPrefixes = keyPrefixPending + keyPrefixScheduled + keyPrefixRetry + keyPrefixArchived
)
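// For illustration, a scheduled task key produced by Key() looks like
// "s:8e3ab5c0-64b2-4a6d-9f1c-0c37aa6c39d5:1618299022" (hypothetical values):
// the one-letter prefix identifies the task state, the middle part is the task
// ID (a UUID), and the trailing number is the sorted-set score (unix time).
// Pending tasks live in a redis list rather than a sorted set, so their keys
// always carry a score of 0.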
// Key returns a key used to delete and archive the pending task.
func (t *PendingTask) Key() string {
// Note: Pending tasks are stored in redis LIST, therefore no score.
// Use zero for the score to use the same key format.
return fmt.Sprintf(taskKeyFormat, keyPrefixPending, t.ID, 0)
}
// Key returns a key used to delete, run, and archive the scheduled task.
func (t *ScheduledTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixScheduled, t.ID, t.score)
}
// Key returns a key used to delete, run, and archive the retry task.
func (t *RetryTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixRetry, t.ID, t.score)
}
// Key returns a key used to delete and run the archived task.
func (t *ArchivedTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixArchived, t.ID, t.score)
}
// parseTaskKey parses a key string and returns each part of the key with the
// proper type if valid; otherwise it reports an error.
func parseTaskKey(key string) (prefix string, id uuid.UUID, score int64, err error) {
parts := strings.Split(key, ":")
if len(parts) != 3 {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
id, err = uuid.Parse(parts[1])
if err != nil {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
score, err = strconv.ParseInt(parts[2], 10, 64)
if err != nil {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
prefix = parts[0]
if len(prefix) != 1 || !strings.Contains(allKeyPrefixes, prefix) {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
return prefix, id, score, nil
}
// ListOption specifies behavior of list operation.
type ListOption interface{}
// Internal list option representations.
type (
pageSizeOpt int
pageNumOpt int
)
type listOption struct {
pageSize int
pageNum int
}
const (
// Page size used by default in list operation.
defaultPageSize = 30
// Page number used by default in list operation.
defaultPageNum = 1
)
func composeListOptions(opts ...ListOption) listOption {
res := listOption{
pageSize: defaultPageSize,
pageNum: defaultPageNum,
}
for _, opt := range opts {
switch opt := opt.(type) {
case pageSizeOpt:
res.pageSize = int(opt)
case pageNumOpt:
res.pageNum = int(opt)
default:
// ignore unexpected option
}
}
return res
}
// PageSize returns an option to specify the page size for list operation.
//
// Negative page size is treated as zero.
func PageSize(n int) ListOption {
if n < 0 {
n = 0
}
return pageSizeOpt(n)
}
// Page returns an option to specify the page number for list operation.
// The value 1 fetches the first page.
//
// Negative page number is treated as one.
func Page(n int) ListOption {
if n < 0 {
n = 1
}
return pageNumOpt(n)
}
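// Pagination example (a sketch; the queue name and page values are illustrative):
//
//	// Fetch the second page of 50 pending tasks.
//	tasks, err := inspector.ListPendingTasks("default", PageSize(50), Page(2))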
// ListPendingTasks retrieves pending tasks from the specified queue.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*PendingTask, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListPending(qname, pgn)
if err != nil {
return nil, err
}
var tasks []*PendingTask
for _, m := range msgs {
tasks = append(tasks, &PendingTask{
Task: asynq.NewTask(m.Type, m.Payload),
ID: m.ID.String(),
Queue: m.Queue,
MaxRetry: m.Retry,
Retried: m.Retried,
LastError: m.ErrorMsg,
})
}
return tasks, err
}
// ListActiveTasks retrieves active tasks from the specified queue.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*ActiveTask, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListActive(qname, pgn)
if err != nil {
return nil, err
}
var tasks []*ActiveTask
for _, m := range msgs {
tasks = append(tasks, &ActiveTask{
Task: asynq.NewTask(m.Type, m.Payload),
ID: m.ID.String(),
Queue: m.Queue,
MaxRetry: m.Retry,
Retried: m.Retried,
LastError: m.ErrorMsg,
})
}
return tasks, err
}
// ListScheduledTasks retrieves scheduled tasks from the specified queue.
// Tasks are sorted by NextProcessAt field in ascending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*ScheduledTask, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListScheduled(qname, pgn)
if err != nil {
return nil, err
}
var tasks []*ScheduledTask
for _, z := range zs {
processAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &ScheduledTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
LastError: z.Message.ErrorMsg,
NextProcessAt: processAt,
score: z.Score,
})
}
return tasks, nil
}
// ListRetryTasks retrieves retry tasks from the specified queue.
// Tasks are sorted by NextProcessAt field in ascending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*RetryTask, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListRetry(qname, pgn)
if err != nil {
return nil, err
}
var tasks []*RetryTask
for _, z := range zs {
processAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &RetryTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
NextProcessAt: processAt,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
// TODO: LastFailedAt: z.Message.LastFailedAt
LastError: z.Message.ErrorMsg,
score: z.Score,
})
}
return tasks, nil
}
// ListArchivedTasks retrieves archived tasks from the specified queue.
// Tasks are sorted by LastFailedAt field in descending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*ArchivedTask, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListArchived(qname, pgn)
if err != nil {
return nil, err
}
var tasks []*ArchivedTask
for _, z := range zs {
failedAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &ArchivedTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
LastFailedAt: failedAt,
LastError: z.Message.ErrorMsg,
score: z.Score,
})
}
return tasks, nil
}
// DeleteAllPendingTasks deletes all pending tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllPendingTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllPendingTasks(qname)
return int(n), err
}
// DeleteAllScheduledTasks deletes all scheduled tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllScheduledTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllScheduledTasks(qname)
return int(n), err
}
// DeleteAllRetryTasks deletes all retry tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllRetryTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllRetryTasks(qname)
return int(n), err
}
// DeleteAllArchivedTasks deletes all archived tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllArchivedTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllArchivedTasks(qname)
return int(n), err
}
// DeleteTaskByKey deletes a task with the given key from the given queue.
func (i *Inspector) DeleteTaskByKey(qname, key string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
}
prefix, id, score, err := parseTaskKey(key)
if err != nil {
return err
}
switch prefix {
case keyPrefixPending:
return i.rdb.DeletePendingTask(qname, id)
case keyPrefixScheduled:
return i.rdb.DeleteScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.DeleteRetryTask(qname, id, score)
case keyPrefixArchived:
return i.rdb.DeleteArchivedTask(qname, id, score)
default:
return fmt.Errorf("invalid key")
}
}
// RunAllScheduledTasks transitions all scheduled tasks in the given queue to pending state,
// and reports the number of tasks transitioned.
func (i *Inspector) RunAllScheduledTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllScheduledTasks(qname)
return int(n), err
}
// RunAllRetryTasks transitions all retry tasks in the given queue to pending state,
// and reports the number of tasks transitioned.
func (i *Inspector) RunAllRetryTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllRetryTasks(qname)
return int(n), err
}
// RunAllArchivedTasks transitions all archived tasks in the given queue to pending state,
// and reports the number of tasks transitioned.
func (i *Inspector) RunAllArchivedTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllArchivedTasks(qname)
return int(n), err
}
// RunTaskByKey transitions a task to pending state given a task key and queue name.
func (i *Inspector) RunTaskByKey(qname, key string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
}
prefix, id, score, err := parseTaskKey(key)
if err != nil {
return err
}
switch prefix {
case keyPrefixScheduled:
return i.rdb.RunScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.RunRetryTask(qname, id, score)
case keyPrefixArchived:
return i.rdb.RunArchivedTask(qname, id, score)
case keyPrefixPending:
return fmt.Errorf("task is already pending for run")
default:
return fmt.Errorf("invalid key")
}
}
// ArchiveAllPendingTasks archives all pending tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllPendingTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.ArchiveAllPendingTasks(qname)
return int(n), err
}
// ArchiveAllScheduledTasks archives all scheduled tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllScheduledTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.ArchiveAllScheduledTasks(qname)
return int(n), err
}
// ArchiveAllRetryTasks archives all retry tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllRetryTasks(qname string) (int, error) {
if err := base.ValidateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.ArchiveAllRetryTasks(qname)
return int(n), err
}
// ArchiveTaskByKey archives a task with the given key in the given queue.
func (i *Inspector) ArchiveTaskByKey(qname, key string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
}
prefix, id, score, err := parseTaskKey(key)
if err != nil {
return err
}
switch prefix {
case keyPrefixPending:
return i.rdb.ArchivePendingTask(qname, id)
case keyPrefixScheduled:
return i.rdb.ArchiveScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.ArchiveRetryTask(qname, id, score)
case keyPrefixArchived:
return fmt.Errorf("task is already archived")
default:
return fmt.Errorf("invalid key")
}
}
// CancelActiveTask sends a signal to cancel processing of the task with
// the given id. CancelActiveTask is best-effort, which means that it does not
// guarantee that the task with the given id will be canceled. The return
// value only indicates whether the cancelation signal has been sent.
func (i *Inspector) CancelActiveTask(id string) error {
return i.rdb.PublishCancelation(id)
}
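// Example use of CancelActiveTask (a sketch; error handling is elided):
//
//	active, _ := inspector.ListActiveTasks("default")
//	for _, t := range active {
//		// Best-effort: this only publishes a cancelation signal for each task.
//		_ = inspector.CancelActiveTask(t.ID)
//	}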
// PauseQueue pauses task processing on the specified queue.
// If the queue is already paused, it will return a non-nil error.
func (i *Inspector) PauseQueue(qname string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
}
return i.rdb.Pause(qname)
}
// UnpauseQueue resumes task processing on the specified queue.
// If the queue is not paused, it will return a non-nil error.
func (i *Inspector) UnpauseQueue(qname string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
}
return i.rdb.Unpause(qname)
}
// Servers returns a list of running servers' information.
func (i *Inspector) Servers() ([]*ServerInfo, error) {
servers, err := i.rdb.ListServers()
if err != nil {
return nil, err
}
workers, err := i.rdb.ListWorkers()
if err != nil {
return nil, err
}
m := make(map[string]*ServerInfo) // ServerInfo keyed by serverID
for _, s := range servers {
m[s.ServerID] = &ServerInfo{
ID: s.ServerID,
Host: s.Host,
PID: s.PID,
Concurrency: s.Concurrency,
Queues: s.Queues,
StrictPriority: s.StrictPriority,
Started: s.Started,
Status: s.Status,
ActiveWorkers: make([]*WorkerInfo, 0),
}
}
for _, w := range workers {
srvInfo, ok := m[w.ServerID]
if !ok {
continue
}
wrkInfo := &WorkerInfo{
Started: w.Started,
Deadline: w.Deadline,
Task: &ActiveTask{
Task: asynq.NewTask(w.Type, w.Payload),
ID: w.ID,
Queue: w.Queue,
},
}
srvInfo.ActiveWorkers = append(srvInfo.ActiveWorkers, wrkInfo)
}
var out []*ServerInfo
for _, srvInfo := range m {
out = append(out, srvInfo)
}
return out, nil
}
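// Example use of Servers (a sketch):
//
//	servers, err := inspector.Servers()
//	if err != nil {
//		// handle error
//	}
//	for _, srv := range servers {
//		fmt.Printf("%s (pid %d): %d active workers\n", srv.Host, srv.PID, len(srv.ActiveWorkers))
//	}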
// ServerInfo describes a running Server instance.
type ServerInfo struct {
// Unique Identifier for the server.
ID string
// Host machine on which the server is running.
Host string
// PID of the process in which the server is running.
PID int
// Server configuration details.
// See Config doc for field descriptions.
Concurrency int
Queues map[string]int
StrictPriority bool
// Time the server started.
Started time.Time
// Status indicates the status of the server.
// TODO: Update comment with more details.
Status string
// A List of active workers currently processing tasks.
ActiveWorkers []*WorkerInfo
}
// WorkerInfo describes a running worker processing a task.
type WorkerInfo struct {
// The task the worker is processing.
Task *ActiveTask
// Time the worker started processing the task.
Started time.Time
// Time the worker needs to finish processing the task by.
Deadline time.Time
}
// ClusterKeySlot returns an integer identifying the hash slot the given queue hashes to.
func (i *Inspector) ClusterKeySlot(qname string) (int64, error) {
return i.rdb.ClusterKeySlot(qname)
}
// ClusterNode describes a node in redis cluster.
type ClusterNode struct {
// Node ID in the cluster.
ID string
// Address of the node.
Addr string
}
// ClusterNodes returns a list of nodes the given queue belongs to.
func (i *Inspector) ClusterNodes(qname string) ([]ClusterNode, error) {
nodes, err := i.rdb.ClusterNodes(qname)
if err != nil {
return nil, err
}
var res []ClusterNode
for _, node := range nodes {
res = append(res, ClusterNode{ID: node.ID, Addr: node.Addr})
}
return res, nil
}
// SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
// Identifier of this entry.
ID string
// Spec describes the schedule of this entry.
Spec string
// Periodic Task registered for this entry.
Task *asynq.Task
// Opts is the options for the periodic task.
Opts []asynq.Option
// Next shows the next time the task will be enqueued.
Next time.Time
// Prev shows the last time the task was enqueued.
// Zero time if task was never enqueued.
Prev time.Time
}
// SchedulerEntries returns a list of all entries registered with
// currently running schedulers.
func (i *Inspector) SchedulerEntries() ([]*SchedulerEntry, error) {
var entries []*SchedulerEntry
res, err := i.rdb.ListSchedulerEntries()
if err != nil {
return nil, err
}
for _, e := range res {
task := asynq.NewTask(e.Type, e.Payload)
var opts []asynq.Option
for _, s := range e.Opts {
if o, err := parseOption(s); err == nil {
// ignore bad data
opts = append(opts, o)
}
}
entries = append(entries, &SchedulerEntry{
ID: e.ID,
Spec: e.Spec,
Task: task,
Opts: opts,
Next: e.Next,
Prev: e.Prev,
})
}
return entries, nil
}
// parseOption interprets a string s as an Option and returns the Option if parsing is successful,
// otherwise it returns a non-nil error.
func parseOption(s string) (asynq.Option, error) {
fn, arg := parseOptionFunc(s), parseOptionArg(s)
switch fn {
case "Queue":
qname, err := strconv.Unquote(arg)
if err != nil {
return nil, err
}
return asynq.Queue(qname), nil
case "MaxRetry":
n, err := strconv.Atoi(arg)
if err != nil {
return nil, err
}
return asynq.MaxRetry(n), nil
case "Timeout":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.Timeout(d), nil
case "Deadline":
t, err := time.Parse(time.UnixDate, arg)
if err != nil {
return nil, err
}
return asynq.Deadline(t), nil
case "Unique":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.Unique(d), nil
case "ProcessAt":
t, err := time.Parse(time.UnixDate, arg)
if err != nil {
return nil, err
}
return asynq.ProcessAt(t), nil
case "ProcessIn":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.ProcessIn(d), nil
default:
return nil, fmt.Errorf("cannot not parse option string %q", s)
}
}
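// For illustration, option strings recorded by a scheduler are parsed back into
// options roughly as follows (the concrete values are hypothetical):
//
//	Queue("critical") -> asynq.Queue("critical")
//	MaxRetry(10)      -> asynq.MaxRetry(10)
//	Timeout(1m0s)     -> asynq.Timeout(time.Minute)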
func parseOptionFunc(s string) string {
i := strings.Index(s, "(")
return s[:i]
}
func parseOptionArg(s string) string {
i := strings.Index(s, "(")
if i >= 0 {
j := strings.Index(s, ")")
if j > i {
return s[i+1 : j]
}
}
return ""
}
// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
// ID of the task that was enqueued.
TaskID string
// Time the task was enqueued.
EnqueuedAt time.Time
}
// ListSchedulerEnqueueEvents retrieves a list of enqueue events from the specified scheduler entry.
//
// By default, it retrieves the first 30 events.
func (i *Inspector) ListSchedulerEnqueueEvents(entryID string, opts ...ListOption) ([]*SchedulerEnqueueEvent, error) {
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
data, err := i.rdb.ListSchedulerEnqueueEvents(entryID, pgn)
if err != nil {
return nil, err
}
var events []*SchedulerEnqueueEvent
for _, e := range data {
events = append(events, &SchedulerEnqueueEvent{TaskID: e.TaskID, EnqueuedAt: e.EnqueuedAt})
}
return events, nil
}

2712
inspeq/inspector_test.go Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -7,20 +7,23 @@ package asynqtest
import ( import (
"encoding/json" "encoding/json"
"math"
"sort" "sort"
"testing" "testing"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
) )
// ZSetEntry is an entry in redis sorted set. // EquateInt64Approx returns a Comparer option that treats int64 values
type ZSetEntry struct { // to be equal if they are within the given margin.
Msg *base.TaskMessage func EquateInt64Approx(margin int64) cmp.Option {
Score float64 return cmp.Comparer(func(a, b int64) bool {
return math.Abs(float64(a-b)) <= float64(margin)
})
} }
// SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages. // SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages.
@@ -33,17 +36,17 @@ var SortMsgOpt = cmp.Transformer("SortTaskMessages", func(in []*base.TaskMessage
}) })
// SortZSetEntryOpt is an cmp.Option to sort ZSetEntry for comparing slice of zset entries. // SortZSetEntryOpt is an cmp.Option to sort ZSetEntry for comparing slice of zset entries.
var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) []ZSetEntry { var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []base.Z) []base.Z {
out := append([]ZSetEntry(nil), in...) // Copy input to avoid mutating it out := append([]base.Z(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
return out[i].Msg.ID.String() < out[j].Msg.ID.String() return out[i].Message.ID.String() < out[j].Message.ID.String()
}) })
return out return out
}) })
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info. // SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo { var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host { if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host return out[i].Host < out[j].Host
@@ -57,7 +60,25 @@ var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.Proc
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo { var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String() return out[i].ID < out[j].ID
})
return out
})
// SortSchedulerEntryOpt is a cmp.Option to sort base.SchedulerEntry for comparing slice of entries.
var SortSchedulerEntryOpt = cmp.Transformer("SortSchedulerEntry", func(in []*base.SchedulerEntry) []*base.SchedulerEntry {
out := append([]*base.SchedulerEntry(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Spec < out[j].Spec
})
return out
})
// SortSchedulerEnqueueEventOpt is a cmp.Option to sort base.SchedulerEnqueueEvent for comparing slice of events.
var SortSchedulerEnqueueEventOpt = cmp.Transformer("SortSchedulerEnqueueEvent", func(in []*base.SchedulerEnqueueEvent) []*base.SchedulerEnqueueEvent {
out := append([]*base.SchedulerEnqueueEvent(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].EnqueuedAt.Unix() < out[j].EnqueuedAt.Unix()
}) })
return out return out
}) })
@@ -74,27 +95,37 @@ var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload. // NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage { func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
return &base.TaskMessage{ return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName)
ID: xid.New(),
Type: taskType,
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
}
} }
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a // NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name. // task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage { func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{ return &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: taskType, Type: taskType,
Queue: qname, Queue: qname,
Retry: 25, Retry: 25,
Payload: payload, Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
} }
} }
// TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments retry count and sets the error message.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.Retried = t.Retried + 1
t.ErrorMsg = errMsg
return &t
}
// TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.ErrorMsg = errMsg
return &t
}
// MustMarshal marshals given task message and returns a json string. // MustMarshal marshals given task message and returns a json string.
// Calling test will fail if marshaling errors out. // Calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string { func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
@@ -141,51 +172,113 @@ func MustUnmarshalSlice(tb testing.TB, data []string) []*base.TaskMessage {
} }
// FlushDB deletes all the keys of the currently selected DB. // FlushDB deletes all the keys of the currently selected DB.
func FlushDB(tb testing.TB, r *redis.Client) { func FlushDB(tb testing.TB, r redis.UniversalClient) {
tb.Helper() tb.Helper()
switch r := r.(type) {
case *redis.Client:
if err := r.FlushDB().Err(); err != nil { if err := r.FlushDB().Err(); err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
} case *redis.ClusterClient:
err := r.ForEachMaster(func(c *redis.Client) error {
// SeedEnqueuedQueue initializes the specified queue with the given messages. if err := c.FlushAll().Err(); err != nil {
// return err
// If queue name option is not passed, it defaults to the default queue. }
func SeedEnqueuedQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage, queueOpt ...string) { return nil
tb.Helper() })
queue := base.DefaultQueue if err != nil {
if len(queueOpt) > 0 { tb.Fatal(err)
queue = base.QueueKey(queueOpt[0]) }
} }
r.SAdd(base.AllQueues, queue)
seedRedisList(tb, r, queue, msgs)
} }
// SeedInProgressQueue initializes the in-progress queue with the given messages. // SeedPendingQueue initializes the specified queue with the given messages.
func SeedInProgressQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage) { func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper() tb.Helper()
seedRedisList(tb, r, base.InProgressQueue, msgs) r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.QueueKey(qname), msgs)
}
// SeedActiveQueue initializes the active queue with the given messages.
func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs)
} }
// SeedScheduledQueue initializes the scheduled queue with the given messages. // SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) { func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.ScheduledQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries)
} }
// SeedRetryQueue initializes the retry queue with the given messages. // SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) { func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.RetryQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries)
} }
// SeedDeadQueue initializes the dead queue with the given messages. // SeedArchivedQueue initializes the archived queue with the given messages.
func SeedDeadQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) { func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.DeadQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ArchivedKey(qname), entries)
} }
func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) { // SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries)
}
// SeedAllPendingQueues initializes all of the specified queues with the given messages.
//
// pending maps a queue name to a list of messages.
func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) {
for q, msgs := range pending {
SeedPendingQueue(tb, r, msgs, q)
}
}
// SeedAllActiveQueues initializes all of the specified active queues with the given messages.
func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) {
for q, msgs := range active {
SeedActiveQueue(tb, r, msgs, q)
}
}
// SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries.
func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) {
for q, entries := range scheduled {
SeedScheduledQueue(tb, r, entries, q)
}
}
// SeedAllRetryQueues initializes all of the specified retry queues with the given entries.
func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) {
for q, entries := range retry {
SeedRetryQueue(tb, r, entries, q)
}
}
// SeedAllArchivedQueues initializes all of the specified archived queues with the given entries.
func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[string][]base.Z) {
for q, entries := range archived {
SeedArchivedQueue(tb, r, entries, q)
}
}
// SeedAllDeadlines initializes all of the deadlines with the given entries.
func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) {
for q, entries := range deadlines {
SeedDeadlines(tb, r, entries, q)
}
}
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs) data := MustMarshalSlice(tb, msgs)
for _, s := range data { for _, s := range data {
if err := c.LPush(key, s).Err(); err != nil { if err := c.LPush(key, s).Err(); err != nil {
@@ -194,86 +287,86 @@ func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.Task
} }
} }
func seedRedisZSet(tb testing.TB, c *redis.Client, key string, items []ZSetEntry) { func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string, items []base.Z) {
for _, item := range items { for _, item := range items {
z := &redis.Z{Member: MustMarshal(tb, item.Msg), Score: float64(item.Score)} z := &redis.Z{Member: MustMarshal(tb, item.Message), Score: float64(item.Score)}
if err := c.ZAdd(key, z).Err(); err != nil { if err := c.ZAdd(key, z).Err(); err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
} }
} }
// GetEnqueuedMessages returns all task messages in the specified queue. // GetPendingMessages returns all pending messages in the given queue.
// func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
// If queue name option is not passed, it defaults to the default queue.
func GetEnqueuedMessages(tb testing.TB, r *redis.Client, queueOpt ...string) []*base.TaskMessage {
tb.Helper() tb.Helper()
queue := base.DefaultQueue return getListMessages(tb, r, base.QueueKey(qname))
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
return getListMessages(tb, r, queue)
} }
// GetInProgressMessages returns all task messages in the in-progress queue. // GetActiveMessages returns all active messages in the given queue.
func GetInProgressMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getListMessages(tb, r, base.InProgressQueue) return getListMessages(tb, r, base.ActiveKey(qname))
} }
// GetScheduledMessages returns all task messages in the scheduled queue. // GetScheduledMessages returns all scheduled task messages in the given queue.
func GetScheduledMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.ScheduledQueue) return getZSetMessages(tb, r, base.ScheduledKey(qname))
} }
// GetRetryMessages returns all task messages in the retry queue. // GetRetryMessages returns all retry messages in the given queue.
func GetRetryMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.RetryQueue) return getZSetMessages(tb, r, base.RetryKey(qname))
} }
// GetDeadMessages returns all task messages in the dead queue. // GetArchivedMessages returns all archived messages in the given queue.
func GetDeadMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetArchivedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.DeadQueue) return getZSetMessages(tb, r, base.ArchivedKey(qname))
} }
// GetScheduledEntries returns all task messages and its score in the scheduled queue. // GetScheduledEntries returns all scheduled messages and its score in the given queue.
func GetScheduledEntries(tb testing.TB, r *redis.Client) []ZSetEntry { func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.ScheduledQueue) return getZSetEntries(tb, r, base.ScheduledKey(qname))
} }
// GetRetryEntries returns all task messages and its score in the retry queue. // GetRetryEntries returns all retry messages and its score in the given queue.
func GetRetryEntries(tb testing.TB, r *redis.Client) []ZSetEntry { func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.RetryQueue) return getZSetEntries(tb, r, base.RetryKey(qname))
} }
// GetDeadEntries returns all task messages and its score in the dead queue. // GetArchivedEntries returns all archived messages and its score in the given queue.
func GetDeadEntries(tb testing.TB, r *redis.Client) []ZSetEntry { func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.DeadQueue) return getZSetEntries(tb, r, base.ArchivedKey(qname))
} }
func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage { // GetDeadlinesEntries returns all task messages and its score in the deadlines set for the given queue.
func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getZSetEntries(tb, r, base.DeadlinesKey(qname))
}
func getListMessages(tb testing.TB, r redis.UniversalClient, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val() data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data) return MustUnmarshalSlice(tb, data)
} }
func getZSetMessages(tb testing.TB, r *redis.Client, zset string) []*base.TaskMessage { func getZSetMessages(tb testing.TB, r redis.UniversalClient, zset string) []*base.TaskMessage {
data := r.ZRange(zset, 0, -1).Val() data := r.ZRange(zset, 0, -1).Val()
return MustUnmarshalSlice(tb, data) return MustUnmarshalSlice(tb, data)
} }
func getZSetEntries(tb testing.TB, r *redis.Client, zset string) []ZSetEntry { func getZSetEntries(tb testing.TB, r redis.UniversalClient, zset string) []base.Z {
data := r.ZRangeWithScores(zset, 0, -1).Val() data := r.ZRangeWithScores(zset, 0, -1).Val()
var entries []ZSetEntry var entries []base.Z
for _, z := range data { for _, z := range data {
entries = append(entries, ZSetEntry{ entries = append(entries, base.Z{
Msg: MustUnmarshal(tb, z.Member.(string)), Message: MustUnmarshal(tb, z.Member.(string)),
Score: z.Score, Score: int64(z.Score),
}) })
} }
return entries return entries

View File

@@ -7,58 +7,136 @@ package base
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"sort"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/rs/xid" "github.com/go-redis/redis/v7"
"github.com/google/uuid"
) )
// Version of asynq library and CLI.
const Version = "0.17.1"
// DefaultQueueName is the queue name used if none are specified by user. // DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default" const DefaultQueueName = "default"
// Redis keys // DefaultQueue is the redis key for the default queue.
var DefaultQueue = QueueKey(DefaultQueueName)
// Global Redis keys.
const ( const (
AllProcesses = "asynq:ps" // ZSET AllServers = "asynq:servers" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid>
AllWorkers = "asynq:workers" // ZSET AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid> AllSchedulers = "asynq:schedulers" // ZSET
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
AllQueues = "asynq:queues" // SET AllQueues = "asynq:queues" // SET
DefaultQueue = QueuePrefix + DefaultQueueName // LIST
ScheduledQueue = "asynq:scheduled" // ZSET
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
CancelChannel = "asynq:cancel" // PubSub channel CancelChannel = "asynq:cancel" // PubSub channel
) )
// ValidateQueueName validates a given qname to be used as a queue name.
// Returns nil if valid, otherwise returns non-nil error.
func ValidateQueueName(qname string) error {
if len(qname) == 0 {
return fmt.Errorf("queue name must contain one or more characters")
}
return nil
}
// QueueKey returns a redis key for the given queue name. // QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string { func QueueKey(qname string) string {
return QueuePrefix + strings.ToLower(qname) return fmt.Sprintf("asynq:{%s}", qname)
} }
// ProcessedKey returns a redis key for processed count for the given day. // ActiveKey returns a redis key for the active tasks.
func ProcessedKey(t time.Time) string { func ActiveKey(qname string) string {
return processedPrefix + t.UTC().Format("2006-01-02") return fmt.Sprintf("asynq:{%s}:active", qname)
} }
// FailureKey returns a redis key for failure count for the given day. // ScheduledKey returns a redis key for the scheduled tasks.
func FailureKey(t time.Time) string { func ScheduledKey(qname string) string {
return failurePrefix + t.UTC().Format("2006-01-02") return fmt.Sprintf("asynq:{%s}:scheduled", qname)
} }
// ProcessInfoKey returns a redis key for process info. // RetryKey returns a redis key for the retry tasks.
func ProcessInfoKey(hostname string, pid int) string { func RetryKey(qname string) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid) return fmt.Sprintf("asynq:{%s}:retry", qname)
} }
// WorkersKey returns a redis key for the workers given hostname and pid. // ArchivedKey returns a redis key for the archived tasks.
func WorkersKey(hostname string, pid int) string { func ArchivedKey(qname string) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid) return fmt.Sprintf("asynq:{%s}:archived", qname)
}
// DeadlinesKey returns a redis key for the deadlines.
func DeadlinesKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:deadlines", qname)
}
// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:paused", qname)
}
// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:processed:%s", qname, t.UTC().Format("2006-01-02"))
}
// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:failed:%s", qname, t.UTC().Format("2006-01-02"))
}
// ServerInfoKey returns a redis key for process info.
func ServerInfoKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("asynq:servers:{%s:%d:%s}", hostname, pid, serverID)
}
// WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("asynq:workers:{%s:%d:%s}", hostname, pid, serverID)
}
// SchedulerEntriesKey returns a redis key for the scheduler entries given scheduler ID.
func SchedulerEntriesKey(schedulerID string) string {
return fmt.Sprintf("asynq:schedulers:{%s}", schedulerID)
}
// SchedulerHistoryKey returns a redis key for the scheduler's history for the given entry.
func SchedulerHistoryKey(entryID string) string {
return fmt.Sprintf("asynq:scheduler_history:%s", entryID)
}
// UniqueKey returns a redis key with the given type, payload, and queue name.
func UniqueKey(qname, tasktype string, payload map[string]interface{}) string {
return fmt.Sprintf("asynq:{%s}:unique:%s:%s", qname, tasktype, serializePayload(payload))
}
func serializePayload(payload map[string]interface{}) string {
if payload == nil {
return "nil"
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
} }
// TaskMessage is the internal representation of a task with additional metadata fields. // TaskMessage is the internal representation of a task with additional metadata fields.
@@ -71,7 +149,7 @@ type TaskMessage struct {
Payload map[string]interface{} Payload map[string]interface{}
// ID is a unique identifier for each task. // ID is a unique identifier for each task.
ID xid.ID ID uuid.UUID
// Queue is a name this message should be enqueued to. // Queue is a name this message should be enqueued to.
Queue string Queue string
@@ -85,163 +163,117 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure. // ErrorMsg holds the error message from the last failure.
ErrorMsg string ErrorMsg string
// Timeout specifies how long a task may run. // Timeout specifies timeout in seconds.
// The string value should be compatible with time.Duration.ParseDuration. // If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
// //
// Zero means no limit. // Use zero to indicate no timeout.
Timeout string Timeout int64
// Deadline specifies the deadline for the task. // Deadline specifies the deadline for the task in Unix time,
// Task won't be processed if it exceeded its deadline. // the number of seconds elapsed since January 1, 1970 UTC.
// The string shoulbe be in RFC3339 format. // If task processing doesn't complete before the deadline, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
// //
// time.Time's zero value means no deadline. // Use zero to indicate no deadline.
Deadline string Deadline int64
// UniqueKey holds the redis key used for uniqueness lock for this task.
//
// Empty string indicates that no uniqueness lock was used.
UniqueKey string
} }
// ProcessState holds process level information. // EncodeMessage marshals the given task message in JSON and returns an encoded string.
// func EncodeMessage(msg *TaskMessage) (string, error) {
// ProcessStates are safe for concurrent use by multiple goroutines. b, err := json.Marshal(msg)
type ProcessState struct { if err != nil {
mu sync.Mutex // guards all data fields return "", err
concurrency int }
queues map[string]int return string(b), nil
strictPriority bool
pid int
host string
status PStatus
started time.Time
workers map[string]*workerStats
} }
// PStatus represents status of a process. // DecodeMessage unmarshals the given encoded string and returns a decoded task message.
type PStatus int func DecodeMessage(s string) (*TaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, err
}
return &msg, nil
}
// Z represents sorted set member.
type Z struct {
Message *TaskMessage
Score int64
}
// ServerStatus represents status of a server.
// ServerStatus methods are concurrency safe.
type ServerStatus struct {
mu sync.Mutex
val ServerStatusValue
}
// NewServerStatus returns a new status instance given an initial value.
func NewServerStatus(v ServerStatusValue) *ServerStatus {
return &ServerStatus{val: v}
}
type ServerStatusValue int
const ( const (
// StatusIdle indicates process is in idle state. // StatusIdle indicates the server is in idle state.
StatusIdle PStatus = iota StatusIdle ServerStatusValue = iota
// StatusRunning indicates process is up and processing tasks. // StatusRunning indicates the server is up and active.
StatusRunning StatusRunning
// StatusStopped indicates process is up but not processing new tasks. // StatusQuiet indicates the server is up but not active.
StatusQuiet
// StatusStopped indicates the server has been stopped.
StatusStopped StatusStopped
) )
var statuses = []string{ var statuses = []string{
"idle", "idle",
"running", "running",
"quiet",
"stopped", "stopped",
} }
func (s PStatus) String() string { func (s *ServerStatus) String() string {
if StatusIdle <= s && s <= StatusStopped { s.mu.Lock()
return statuses[s] defer s.mu.Unlock()
if StatusIdle <= s.val && s.val <= StatusStopped {
return statuses[s.val]
} }
return "unknown status" return "unknown status"
} }
type workerStats struct { // Get returns the status value.
msg *TaskMessage func (s *ServerStatus) Get() ServerStatusValue {
started time.Time s.mu.Lock()
v := s.val
s.mu.Unlock()
return v
} }
// NewProcessState returns a new instance of ProcessState. // Set sets the status value.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState { func (s *ServerStatus) Set(v ServerStatusValue) {
return &ProcessState{ s.mu.Lock()
host: host, s.val = v
pid: pid, s.mu.Unlock()
concurrency: concurrency,
queues: cloneQueueConfig(queues),
strictPriority: strict,
status: StatusIdle,
workers: make(map[string]*workerStats),
}
} }
// SetStatus updates the state of process. // ServerInfo holds information about a running server.
func (ps *ProcessState) SetStatus(status PStatus) { type ServerInfo struct {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.status = status
}
// SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.started = t
}
// AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started}
}
// DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.workers, msg.ID.String())
}
// Get returns current state of process as a ProcessInfo.
func (ps *ProcessState) Get() *ProcessInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
return &ProcessInfo{
Host: ps.host,
PID: ps.pid,
Concurrency: ps.concurrency,
Queues: cloneQueueConfig(ps.queues),
StrictPriority: ps.strictPriority,
Status: ps.status.String(),
Started: ps.started,
ActiveWorkerCount: len(ps.workers),
}
}
// GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
var res []*WorkerInfo
for _, w := range ps.workers {
res = append(res, &WorkerInfo{
Host: ps.host,
PID: ps.pid,
ID: w.msg.ID,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: clonePayload(w.msg.Payload),
Started: w.started,
})
}
return res
}
func cloneQueueConfig(qcfg map[string]int) map[string]int {
res := make(map[string]int)
for qname, n := range qcfg {
res[qname] = n
}
return res
}
func clonePayload(payload map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{})
for k, v := range payload {
res[k] = v
}
return res
}
// ProcessInfo holds information about a running background worker process.
type ProcessInfo struct {
Host string Host string
PID int PID int
ServerID string
Concurrency int Concurrency int
Queues map[string]int Queues map[string]int
StrictPriority bool StrictPriority bool
@@ -254,14 +286,50 @@ type ProcessInfo struct {
type WorkerInfo struct { type WorkerInfo struct {
Host string Host string
PID int PID int
ID xid.ID ServerID string
ID string
Type string Type string
Queue string Queue string
Payload map[string]interface{} Payload map[string]interface{}
Started time.Time Started time.Time
Deadline time.Time
} }
// Cancelations is a collection that holds cancel functions for all in-progress tasks. // SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
// Identifier of this entry.
ID string
// Spec describes the schedule of this entry.
Spec string
// Type is the task type of the periodic task.
Type string
// Payload is the payload of the periodic task.
Payload map[string]interface{}
// Opts is the options for the periodic task.
Opts []string
// Next shows the next time the task will be enqueued.
Next time.Time
// Prev shows the last time the task was enqueued.
// Zero time if task was never enqueued.
Prev time.Time
}
// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
// ID of the task that was enqueued.
TaskID string
// Time the task was enqueued.
EnqueuedAt time.Time
}
// Cancelations is a collection that holds cancel functions for all active tasks.
// //
// Cancelations are safe for concurrent use by multiple goroutines. // Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct { type Cancelations struct {
@@ -298,13 +366,25 @@ func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
return fn, ok return fn, ok
} }
// GetAll returns all cancel funcs. // Broker is a message broker that supports operations to manage task queues.
func (c *Cancelations) GetAll() []context.CancelFunc { //
c.mu.Lock() // See rdb.RDB as a reference implementation.
defer c.mu.Unlock() type Broker interface {
var res []context.CancelFunc Ping() error
for _, fn := range c.cancelFuncs { Enqueue(msg *TaskMessage) error
res = append(res, fn) EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
} Dequeue(qnames ...string) (*TaskMessage, time.Time, error)
return res Done(msg *TaskMessage) error
Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Archive(msg *TaskMessage, errMsg string) error
CheckAndEnqueue(qnames ...string) error
ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*TaskMessage, error)
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
PublishCancelation(id string) error
Close() error
} }

View File

@@ -6,13 +6,13 @@ package base
import ( import (
"context" "context"
"math/rand" "encoding/json"
"sync" "sync"
"testing" "testing"
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/rs/xid" "github.com/google/uuid"
) )
func TestQueueKey(t *testing.T) { func TestQueueKey(t *testing.T) {
@@ -20,7 +20,8 @@ func TestQueueKey(t *testing.T) {
qname string qname string
want string want string
}{ }{
{"custom", "asynq:queues:custom"}, {"default", "asynq:{default}"},
{"custom", "asynq:{custom}"},
} }
for _, tc := range tests { for _, tc := range tests {
@@ -31,56 +32,162 @@ func TestQueueKey(t *testing.T) {
} }
} }
func TestProcessedKey(t *testing.T) { func TestActiveKey(t *testing.T) {
tests := []struct { tests := []struct {
input time.Time qname string
want string want string
}{ }{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:processed:2019-11-14"}, {"default", "asynq:{default}:active"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:processed:2020-12-01"}, {"custom", "asynq:{custom}:active"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:processed:2020-01-06"},
} }
for _, tc := range tests { for _, tc := range tests {
got := ProcessedKey(tc.input) got := ActiveKey(tc.qname)
if got != tc.want {
t.Errorf("ActiveKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestDeadlinesKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:deadlines"},
{"custom", "asynq:{custom}:deadlines"},
}
for _, tc := range tests {
got := DeadlinesKey(tc.qname)
if got != tc.want {
t.Errorf("DeadlinesKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestScheduledKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:scheduled"},
{"custom", "asynq:{custom}:scheduled"},
}
for _, tc := range tests {
got := ScheduledKey(tc.qname)
if got != tc.want {
t.Errorf("ScheduledKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestRetryKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:retry"},
{"custom", "asynq:{custom}:retry"},
}
for _, tc := range tests {
got := RetryKey(tc.qname)
if got != tc.want {
t.Errorf("RetryKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestArchivedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:archived"},
{"custom", "asynq:{custom}:archived"},
}
for _, tc := range tests {
got := ArchivedKey(tc.qname)
if got != tc.want {
t.Errorf("ArchivedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestPausedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:paused"},
{"custom", "asynq:{custom}:paused"},
}
for _, tc := range tests {
got := PausedKey(tc.qname)
if got != tc.want {
t.Errorf("PausedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) {
tests := []struct {
qname string
input time.Time
want string
}{
{"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:processed:2019-11-14"},
{"critical", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{critical}:processed:2020-12-01"},
{"default", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{default}:processed:2020-01-06"},
}
for _, tc := range tests {
got := ProcessedKey(tc.qname, tc.input)
if got != tc.want { if got != tc.want {
t.Errorf("ProcessedKey(%v) = %q, want %q", tc.input, got, tc.want) t.Errorf("ProcessedKey(%v) = %q, want %q", tc.input, got, tc.want)
} }
} }
} }
func TestFailureKey(t *testing.T) { func TestFailedKey(t *testing.T) {
tests := []struct { tests := []struct {
qname string
input time.Time input time.Time
want string want string
}{ }{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:failure:2019-11-14"}, {"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:failed:2019-11-14"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:failure:2020-12-01"}, {"custom", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{custom}:failed:2020-12-01"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:failure:2020-01-06"}, {"low", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{low}:failed:2020-01-06"},
} }
for _, tc := range tests { for _, tc := range tests {
got := FailureKey(tc.input) got := FailedKey(tc.qname, tc.input)
if got != tc.want { if got != tc.want {
t.Errorf("FailureKey(%v) = %q, want %q", tc.input, got, tc.want) t.Errorf("FailureKey(%v) = %q, want %q", tc.input, got, tc.want)
} }
} }
} }
func TestProcessInfoKey(t *testing.T) { func TestServerInfoKey(t *testing.T) {
tests := []struct { tests := []struct {
hostname string hostname string
pid int pid int
sid string
want string want string
}{ }{
{"localhost", 9876, "asynq:ps:localhost:9876"}, {"localhost", 9876, "server123", "asynq:servers:{localhost:9876:server123}"},
{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"}, {"127.0.0.1", 1234, "server987", "asynq:servers:{127.0.0.1:1234:server987}"},
} }
for _, tc := range tests { for _, tc := range tests {
got := ProcessInfoKey(tc.hostname, tc.pid) got := ServerInfoKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want { if got != tc.want {
t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want) t.Errorf("ServerInfoKey(%q, %d, %q) = %q, want %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
} }
} }
} }
@@ -89,80 +196,184 @@ func TestWorkersKey(t *testing.T) {
tests := []struct { tests := []struct {
hostname string hostname string
pid int pid int
sid string
want string want string
}{ }{
{"localhost", 9876, "asynq:workers:localhost:9876"}, {"localhost", 9876, "server1", "asynq:workers:{localhost:9876:server1}"},
{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"}, {"127.0.0.1", 1234, "server2", "asynq:workers:{127.0.0.1:1234:server2}"},
} }
for _, tc := range tests { for _, tc := range tests {
got := WorkersKey(tc.hostname, tc.pid) got := WorkersKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want { if got != tc.want {
t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want) t.Errorf("WorkersKey(%q, %d, %q) = %q, want = %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
} }
} }
} }
// Test for process state being accessed by multiple goroutines. func TestSchedulerEntriesKey(t *testing.T) {
// Run with -race flag to check for data race. tests := []struct {
func TestProcessStateConcurrentAccess(t *testing.T) { schedulerID string
ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false) want string
var wg sync.WaitGroup }{
started := time.Now() {"localhost:9876:scheduler123", "asynq:schedulers:{localhost:9876:scheduler123}"},
msgs := []*TaskMessage{ {"127.0.0.1:1234:scheduler987", "asynq:schedulers:{127.0.0.1:1234:scheduler987}"},
&TaskMessage{ID: xid.New(), Type: "type1", Payload: map[string]interface{}{"user_id": 42}},
&TaskMessage{ID: xid.New(), Type: "type2"},
&TaskMessage{ID: xid.New(), Type: "type3"},
} }
// Simulate hearbeater calling SetStatus and SetStarted. for _, tc := range tests {
got := SchedulerEntriesKey(tc.schedulerID)
if got != tc.want {
t.Errorf("SchedulerEntriesKey(%q) = %q, want %q", tc.schedulerID, got, tc.want)
}
}
}
func TestSchedulerHistoryKey(t *testing.T) {
tests := []struct {
entryID string
want string
}{
{"entry876", "asynq:scheduler_history:entry876"},
{"entry345", "asynq:scheduler_history:entry345"},
}
for _, tc := range tests {
got := SchedulerHistoryKey(tc.entryID)
if got != tc.want {
t.Errorf("SchedulerHistoryKey(%q) = %q, want %q",
tc.entryID, got, tc.want)
}
}
}
func TestUniqueKey(t *testing.T) {
tests := []struct {
desc string
qname string
tasktype string
payload map[string]interface{}
want string
}{
{
"with primitive types",
"default",
"email:send",
map[string]interface{}{"a": 123, "b": "hello", "c": true},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
},
{
"with unsorted keys",
"default",
"email:send",
map[string]interface{}{"b": "hello", "c": true, "a": 123},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
},
{
"with composite types",
"default",
"email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}},
"asynq:{default}:unique:email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]",
},
{
"with complex types",
"default",
"email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour},
"asynq:{default}:unique:email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC",
},
{
"with nil payload",
"default",
"reindex",
nil,
"asynq:{default}:unique:reindex:nil",
},
}
for _, tc := range tests {
got := UniqueKey(tc.qname, tc.tasktype, tc.payload)
if got != tc.want {
t.Errorf("%s: UniqueKey(%q, %q, %v) = %q, want %q", tc.desc, tc.qname, tc.tasktype, tc.payload, got, tc.want)
}
}
}
func TestMessageEncoding(t *testing.T) {
id := uuid.New()
tests := []struct {
in *TaskMessage
out *TaskMessage
}{
{
in: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
out: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
},
}
for _, tc := range tests {
encoded, err := EncodeMessage(tc.in)
if err != nil {
t.Errorf("EncodeMessage(msg) returned error: %v", err)
continue
}
decoded, err := DecodeMessage(encoded)
if err != nil {
t.Errorf("DecodeMessage(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(tc.out, decoded); diff != "" {
t.Errorf("Decoded message == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.out, diff)
}
}
}
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
status := NewServerStatus(StatusIdle)
var wg sync.WaitGroup
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
ps.SetStarted(started) status.Get()
ps.SetStatus(StatusRunning) _ = status.String()
}() }()
// Simulate processor starting worker goroutines.
for _, msg := range msgs {
wg.Add(1)
ps.AddWorkerStats(msg, time.Now())
go func(msg *TaskMessage) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
ps.DeleteWorkerStats(msg)
}(msg)
}
// Simulate hearbeater calling Get and GetWorkers
wg.Add(1) wg.Add(1)
go func() { go func() {
wg.Done() defer wg.Done()
for i := 0; i < 5; i++ { status.Set(StatusStopped)
ps.Get() _ = status.String()
ps.GetWorkers()
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
}
}() }()
wg.Wait() wg.Wait()
want := &ProcessInfo{
Host: "127.0.0.1",
PID: 1234,
Concurrency: 10,
Queues: map[string]int{"default": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
got := ps.Get()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
got, want, diff)
}
} }
// Test for cancelations being accessed by multiple goroutines. // Test for cancelations being accessed by multiple goroutines.
@@ -208,9 +419,4 @@ func TestCancelationsConcurrentAccess(t *testing.T) {
if ok { if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2) t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
} }
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
} }
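The concurrency test above exercises Cancelations from multiple goroutines. As a rough sketch (assuming the collection's Add and Delete methods, which are not shown in this hunk), a worker loop might register a cancel function per task and remove it once the task finishes:

package example

import (
	"context"

	"github.com/hibiken/asynq/internal/base"
)

// runWithCancel is a hypothetical sketch; the helper name and the assumed
// Add/Delete methods are illustrative, not taken from this diff.
func runWithCancel(c *base.Cancelations, taskID string, work func(ctx context.Context) error) error {
	ctx, cancel := context.WithCancel(context.Background())
	c.Add(taskID, cancel) // make the task cancelable by ID
	defer func() {
		cancel()
		c.Delete(taskID) // drop the entry once the task is done
	}()
	return work(ctx)
}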

View File

@@ -6,43 +6,210 @@
package log package log
import (
	"fmt"
	"io"
	stdlog "log"
	"os"
	"sync"
)
func NewLogger(out io.Writer) *Logger { // Base supports logging at various log levels.
return &Logger{ type Base interface {
stdlog.New(out, "", stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC), // Debug logs a message at Debug level.
} Debug(args ...interface{})
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(args ...interface{})
} }
type Logger struct { // baseLogger is a wrapper object around log.Logger from the standard library.
// It supports logging at various log levels.
type baseLogger struct {
*stdlog.Logger *stdlog.Logger
} }
func (l *Logger) Debug(format string, args ...interface{}) { // Debug logs a message at Debug level.
format = "DEBUG: " + format func (l *baseLogger) Debug(args ...interface{}) {
l.Printf(format, args...) l.prefixPrint("DEBUG: ", args...)
} }
func (l *Logger) Info(format string, args ...interface{}) { // Info logs a message at Info level.
format = "INFO: " + format func (l *baseLogger) Info(args ...interface{}) {
l.Printf(format, args...) l.prefixPrint("INFO: ", args...)
} }
func (l *Logger) Warn(format string, args ...interface{}) { // Warn logs a message at Warning level.
format = "WARN: " + format func (l *baseLogger) Warn(args ...interface{}) {
l.Printf(format, args...) l.prefixPrint("WARN: ", args...)
} }
func (l *Logger) Error(format string, args ...interface{}) { // Error logs a message at Error level.
format = "ERROR: " + format func (l *baseLogger) Error(args ...interface{}) {
l.Printf(format, args...) l.prefixPrint("ERROR: ", args...)
} }
func (l *Logger) Fatal(format string, args ...interface{}) { // Fatal logs a message at Fatal level
format = "FATAL: " + format // and process will exit with status set to 1.
l.Printf(format, args...) func (l *baseLogger) Fatal(args ...interface{}) {
l.prefixPrint("FATAL: ", args...)
os.Exit(1) os.Exit(1)
} }
func (l *baseLogger) prefixPrint(prefix string, args ...interface{}) {
args = append([]interface{}{prefix}, args...)
l.Print(args...)
}
// newBase creates and returns a new instance of baseLogger.
func newBase(out io.Writer) *baseLogger {
prefix := fmt.Sprintf("asynq: pid=%d ", os.Getpid())
return &baseLogger{
stdlog.New(out, prefix, stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC),
}
}
// NewLogger creates and returns a new instance of Logger.
// Log level is set to DebugLevel by default.
func NewLogger(base Base) *Logger {
if base == nil {
base = newBase(os.Stderr)
}
return &Logger{base: base, level: DebugLevel}
}
// Logger logs message to io.Writer at various log levels.
type Logger struct {
base Base
mu sync.Mutex
// Minimum log level for this logger.
// Message with level lower than this level won't be outputted.
level Level
}
// Level represents a log level.
type Level int32
const (
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel Level = iota
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the fmt.Stringer interface.
//
// Used for testing and debugging purposes.
func (l Level) String() string {
switch l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warning"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
default:
return "unknown"
}
}
// canLogAt reports whether logger can log at level v.
func (l *Logger) canLogAt(v Level) bool {
l.mu.Lock()
defer l.mu.Unlock()
return v >= l.level
}
func (l *Logger) Debug(args ...interface{}) {
if !l.canLogAt(DebugLevel) {
return
}
l.base.Debug(args...)
}
func (l *Logger) Info(args ...interface{}) {
if !l.canLogAt(InfoLevel) {
return
}
l.base.Info(args...)
}
func (l *Logger) Warn(args ...interface{}) {
if !l.canLogAt(WarnLevel) {
return
}
l.base.Warn(args...)
}
func (l *Logger) Error(args ...interface{}) {
if !l.canLogAt(ErrorLevel) {
return
}
l.base.Error(args...)
}
func (l *Logger) Fatal(args ...interface{}) {
if !l.canLogAt(FatalLevel) {
return
}
l.base.Fatal(args...)
}
func (l *Logger) Debugf(format string, args ...interface{}) {
l.Debug(fmt.Sprintf(format, args...))
}
func (l *Logger) Infof(format string, args ...interface{}) {
l.Info(fmt.Sprintf(format, args...))
}
func (l *Logger) Warnf(format string, args ...interface{}) {
l.Warn(fmt.Sprintf(format, args...))
}
func (l *Logger) Errorf(format string, args ...interface{}) {
l.Error(fmt.Sprintf(format, args...))
}
func (l *Logger) Fatalf(format string, args ...interface{}) {
l.Fatal(fmt.Sprintf(format, args...))
}
// SetLevel sets the logger level.
// It panics if v is less than DebugLevel or greater than FatalLevel.
func (l *Logger) SetLevel(v Level) {
l.mu.Lock()
defer l.mu.Unlock()
if v < DebugLevel || v > FatalLevel {
panic("log: invalid log level")
}
l.level = v
}
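A short usage sketch of the leveled logger above, assuming a caller within this module since the log package is internal: passing nil to NewLogger falls back to the stderr-backed base logger, and SetLevel suppresses messages below the chosen level.

package example

import (
	"github.com/hibiken/asynq/internal/log"
)

func run() {
	// nil base falls back to the default stderr-backed baseLogger.
	logger := log.NewLogger(nil)
	logger.SetLevel(log.InfoLevel)

	logger.Debug("this is suppressed")     // below InfoLevel, not written
	logger.Infof("processing %d tasks", 3) // written with the INFO: prefix
}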

View File

@@ -13,6 +13,7 @@ import (
// regexp for timestamps // regexp for timestamps
const ( const (
rgxPID = `[0-9]+`
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]` rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]` rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]` rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
@@ -29,18 +30,20 @@ func TestLoggerDebug(t *testing.T) {
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Debug(tc.message) logger.Debug(tc.message)
@@ -50,7 +53,7 @@ func TestLoggerDebug(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Debug(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -61,18 +64,20 @@ func TestLoggerInfo(t *testing.T) {
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Info(tc.message) logger.Info(tc.message)
@@ -82,7 +87,7 @@ func TestLoggerInfo(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -93,18 +98,20 @@ func TestLoggerWarn(t *testing.T) {
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Warn(tc.message) logger.Warn(tc.message)
@@ -114,7 +121,7 @@ func TestLoggerWarn(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Warn(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -125,18 +132,20 @@ func TestLoggerError(t *testing.T) {
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Error(tc.message) logger.Error(tc.message)
@@ -146,8 +155,234 @@ func TestLoggerError(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Error(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
} }
type formatTester struct {
desc string
format string
args []interface{}
wantPattern string // regexp that log output must match
}
func TestLoggerDebugf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with DEBUG prefix",
format: "hello, %s!",
args: []interface{}{"Gopher"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, Gopher!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Debugf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Debugf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerInfof(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with INFO prefix",
format: "%d,%d,%d",
args: []interface{}{1, 2, 3},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: 1,2,3\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Infof(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Infof(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWarnf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with WARN prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Warnf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Warnf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerErrorf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with ERROR prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Errorf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Errorf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWithLowerLevels(t *testing.T) {
// Logger should not log messages at a level
// lower than the specified level.
tests := []struct {
level Level
op string
}{
// with level one above
{InfoLevel, "Debug"},
{InfoLevel, "Debugf"},
{WarnLevel, "Info"},
{WarnLevel, "Infof"},
{ErrorLevel, "Warn"},
{ErrorLevel, "Warnf"},
{FatalLevel, "Error"},
{FatalLevel, "Errorf"},
// with skip level
{WarnLevel, "Debug"},
{ErrorLevel, "Infof"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() != "" {
t.Errorf("logger.%s outputted log message when level is set to %v", tc.op, tc.level)
}
}
}
func TestLoggerWithSameOrHigherLevels(t *testing.T) {
// Logger should log messages at a level
// same as or higher than the specified level.
tests := []struct {
level Level
op string
}{
// same level
{DebugLevel, "Debug"},
{InfoLevel, "Infof"},
{WarnLevel, "Warn"},
{ErrorLevel, "Errorf"},
// higher level
{DebugLevel, "Info"},
{InfoLevel, "Warnf"},
{WarnLevel, "Error"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() == "" {
t.Errorf("logger.%s did not output log message when level is set to %v", tc.op, tc.level)
}
}
}

View File

@@ -5,37 +5,262 @@
package rdb package rdb
import ( import (
"fmt"
"testing" "testing"
"time"
"github.com/go-redis/redis/v7" "github.com/hibiken/asynq/internal/asynqtest"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
func BenchmarkDone(b *testing.B) { func BenchmarkEnqueue(b *testing.B) {
r := redis.NewClient(&redis.Options{ r := setup(b)
Addr: "localhost:6379", msg := asynqtest.NewTaskMessage("task1", nil)
DB: 8,
})
h.FlushDB(b, r)
// populate in-progress queue with messages
var inProgress []*base.TaskMessage
for i := 0; i < 40; i++ {
inProgress = append(inProgress,
h.NewTaskMessage("send_email", map[string]interface{}{"subject": "hello", "recipient_id": 123}))
}
h.SeedInProgressQueue(b, r, inProgress)
rdb := NewRDB(r)
b.ResetTimer() b.ResetTimer()
for n := 0; n < b.N; n++ {
for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
msg := h.NewTaskMessage("reindex", map[string]interface{}{"config": "path/to/config/file"}) asynqtest.FlushDB(b, r.client)
r.LPush(base.InProgressQueue, h.MustMarshal(b, msg))
b.StartTimer() b.StartTimer()
rdb.Done(msg) if err := r.Enqueue(msg); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
}
func BenchmarkEnqueueUnique(b *testing.B) {
r := setup(b)
msg := &base.TaskMessage{
Type: "task1",
Payload: nil,
Queue: base.DefaultQueueName,
UniqueKey: base.UniqueKey("default", "task1", nil),
}
uniqueTTL := 5 * time.Minute
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
b.StartTimer()
if err := r.EnqueueUnique(msg, uniqueTTL); err != nil {
b.Fatalf("EnqueueUnique failed: %v", err)
}
}
}
func BenchmarkSchedule(b *testing.B) {
r := setup(b)
msg := asynqtest.NewTaskMessage("task1", nil)
processAt := time.Now().Add(3 * time.Minute)
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
b.StartTimer()
if err := r.Schedule(msg, processAt); err != nil {
b.Fatalf("Schedule failed: %v", err)
}
}
}
func BenchmarkScheduleUnique(b *testing.B) {
r := setup(b)
msg := &base.TaskMessage{
Type: "task1",
Payload: nil,
Queue: base.DefaultQueueName,
UniqueKey: base.UniqueKey("default", "task1", nil),
}
processAt := time.Now().Add(3 * time.Minute)
uniqueTTL := 5 * time.Minute
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
b.StartTimer()
if err := r.ScheduleUnique(msg, processAt, uniqueTTL); err != nil {
b.Fatalf("EnqueueUnique failed: %v", err)
}
}
}
func BenchmarkDequeueSingleQueue(b *testing.B) {
r := setup(b)
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
for i := 0; i < 10; i++ {
m := asynqtest.NewTaskMessageWithQueue(
fmt.Sprintf("task%d", i), nil, base.DefaultQueueName)
if err := r.Enqueue(m); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
b.StartTimer()
if _, _, err := r.Dequeue(base.DefaultQueueName); err != nil {
b.Fatalf("Dequeue failed: %v", err)
}
}
}
func BenchmarkDequeueMultipleQueues(b *testing.B) {
qnames := []string{"critical", "default", "low"}
r := setup(b)
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
for i := 0; i < 10; i++ {
for _, qname := range qnames {
m := asynqtest.NewTaskMessageWithQueue(
fmt.Sprintf("%s_task%d", qname, i), nil, qname)
if err := r.Enqueue(m); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
}
b.StartTimer()
if _, _, err := r.Dequeue(qnames...); err != nil {
b.Fatalf("Dequeue failed: %v", err)
}
}
}
func BenchmarkDone(b *testing.B) {
r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Done(msgs[0]); err != nil {
b.Fatalf("Done failed: %v", err)
}
}
}
func BenchmarkRetry(b *testing.B) {
r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Retry(msgs[0], time.Now().Add(1*time.Minute), "error"); err != nil {
b.Fatalf("Retry failed: %v", err)
}
}
}
func BenchmarkArchive(b *testing.B) {
r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Archive(msgs[0], "error"); err != nil {
b.Fatalf("Archive failed: %v", err)
}
}
}
func BenchmarkRequeue(b *testing.B) {
r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Requeue(msgs[0]); err != nil {
b.Fatalf("Requeue failed: %v", err)
}
}
}
func BenchmarkCheckAndEnqueue(b *testing.B) {
r := setup(b)
now := time.Now()
var zs []base.Z
for i := -100; i < 100; i++ {
msg := asynqtest.NewTaskMessage(fmt.Sprintf("task%d", i), nil)
score := now.Add(time.Duration(i) * time.Second).Unix()
zs = append(zs, base.Z{Message: msg, Score: score})
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
asynqtest.FlushDB(b, r.client)
asynqtest.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.CheckAndEnqueue(base.DefaultQueueName); err != nil {
b.Fatalf("CheckAndEnqueue failed: %v", err)
}
} }
} }

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -9,6 +9,7 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"strconv"
"time" "time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
@@ -22,17 +23,20 @@ var (
// ErrTaskNotFound indicates that a task that matches the given identifier was not found. // ErrTaskNotFound indicates that a task that matches the given identifier was not found.
ErrTaskNotFound = errors.New("could not find a task") ErrTaskNotFound = errors.New("could not find a task")
// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
ErrDuplicateTask = errors.New("task already exists")
) )
const statsTTL = 90 * 24 * time.Hour // 90 days const statsTTL = 90 * 24 * time.Hour // 90 days
// RDB is a client interface to query and mutate task queues. // RDB is a client interface to query and mutate task queues.
type RDB struct { type RDB struct {
client *redis.Client client redis.UniversalClient
} }
// NewRDB returns a new instance of RDB. // NewRDB returns a new instance of RDB.
func NewRDB(client *redis.Client) *RDB { func NewRDB(client redis.UniversalClient) *RDB {
return &RDB{client} return &RDB{client}
} }
@@ -41,277 +45,443 @@ func (r *RDB) Close() error {
return r.client.Close() return r.client.Close()
} }
// KEYS[1] -> asynq:queues:<qname> // Ping checks the connection with redis server.
// KEYS[2] -> asynq:queues func (r *RDB) Ping() error {
// ARGV[1] -> task message data return r.client.Ping().Err()
var enqueueCmd = redis.NewScript(` }
redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1`)
// Enqueue inserts the given task to the tail of the queue. // Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error { func (r *RDB) Enqueue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
key := base.QueueKey(msg.Queue) key := base.QueueKey(msg.Queue)
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, bytes).Err() return r.client.LPush(key, encoded).Err()
} }
// Dequeue queries given queues in order and pops a task message if there is one and returns it. // KEYS[1] -> unique key
// KEYS[2] -> asynq:{<qname>}
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> task message data
var enqueueUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("LPUSH", KEYS[2], ARGV[3])
return 1
`)
// EnqueueUnique inserts the given task if the task's uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
res, err := enqueueUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.QueueKey(msg.Queue)},
msg.ID.String(), int(ttl.Seconds()), encoded).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
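A minimal sketch of how a caller might treat ErrDuplicateTask as a benign outcome rather than a failure; the helper name and TTL are illustrative.

package example

import (
	"errors"
	"time"

	"github.com/hibiken/asynq/internal/base"
	"github.com/hibiken/asynq/internal/rdb"
)

// enqueueOnce is a hypothetical helper: if another task already holds the
// uniqueness lock, it reports success instead of surfacing the error.
func enqueueOnce(r *rdb.RDB, msg *base.TaskMessage, ttl time.Duration) error {
	err := r.EnqueueUnique(msg, ttl)
	if errors.Is(err, rdb.ErrDuplicateTask) {
		return nil // uniqueness lock still held; nothing to do
	}
	return err
}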
// Dequeue queries given queues in order and pops a task message
// off a queue if one exists and returns the message and deadline.
// Dequeue skips a queue if the queue is paused.
// If all queues are empty, ErrNoProcessableTask error is returned. // If all queues are empty, ErrNoProcessableTask error is returned.
func (r *RDB) Dequeue(qnames ...string) (*base.TaskMessage, error) { func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Time, err error) {
var data string data, d, err := r.dequeue(qnames...)
var err error
if len(qnames) == 1 {
data, err = r.dequeueSingle(base.QueueKey(qnames[0]))
} else {
var keys []string
for _, q := range qnames {
keys = append(keys, base.QueueKey(q))
}
data, err = r.dequeue(keys...)
}
if err == redis.Nil {
return nil, ErrNoProcessableTask
}
if err != nil { if err != nil {
return nil, err return nil, time.Time{}, err
} }
var msg base.TaskMessage if msg, err = base.DecodeMessage(data); err != nil {
err = json.Unmarshal([]byte(data), &msg) return nil, time.Time{}, err
if err != nil {
return nil, err
} }
return &msg, nil return msg, time.Unix(d, 0), nil
} }
func (r *RDB) dequeueSingle(queue string) (data string, err error) { // KEYS[1] -> asynq:{<qname>}
// timeout needed to avoid blocking forever // KEYS[2] -> asynq:{<qname>}:paused
return r.client.BRPopLPush(queue, base.InProgressQueue, time.Second).Result() // KEYS[3] -> asynq:{<qname>}:active
} // KEYS[4] -> asynq:{<qname>}:deadlines
// ARGV[1] -> current time in Unix time
// KEYS[1] -> asynq:in_progress //
// ARGV -> List of queues to query in order // dequeueCmd checks whether a queue is paused first, before
// calling RPOPLPUSH to pop a task from the queue.
// It computes the task deadline by inspecting Timeout and Deadline fields,
// and inserts the task with deadlines set.
var dequeueCmd = redis.NewScript(` var dequeueCmd = redis.NewScript(`
local res if redis.call("EXISTS", KEYS[2]) == 0 then
for _, qkey in ipairs(ARGV) do local msg = redis.call("RPOPLPUSH", KEYS[1], KEYS[3])
res = redis.call("RPOPLPUSH", qkey, KEYS[1]) if msg then
if res then local decoded = cjson.decode(msg)
return res local timeout = decoded["Timeout"]
local deadline = decoded["Deadline"]
local score
if timeout ~= 0 and deadline ~= 0 then
score = math.min(ARGV[1]+timeout, deadline)
elseif timeout ~= 0 then
score = ARGV[1] + timeout
elseif deadline ~= 0 then
score = deadline
else
return redis.error_reply("asynq internal error: both timeout and deadline are not set")
end
redis.call("ZADD", KEYS[4], score, msg)
return {msg, score}
end end
end end
return res`) return nil`)
func (r *RDB) dequeue(queues ...string) (data string, err error) { func (r *RDB) dequeue(qnames ...string) (msgjson string, deadline int64, err error) {
var args []interface{} for _, qname := range qnames {
for _, qkey := range queues { keys := []string{
args = append(args, qkey) base.QueueKey(qname),
base.PausedKey(qname),
base.ActiveKey(qname),
base.DeadlinesKey(qname),
} }
res, err := dequeueCmd.Run(r.client, []string{base.InProgressQueue}, args...).Result() res, err := dequeueCmd.Run(r.client, keys, time.Now().Unix()).Result()
if err == redis.Nil {
continue
} else if err != nil {
return "", 0, err
}
data, err := cast.ToSliceE(res)
if err != nil { if err != nil {
return "", err return "", 0, err
} }
return cast.ToStringE(res) if len(data) != 2 {
return "", 0, fmt.Errorf("asynq: internal error: dequeue command returned %d values", len(data))
}
if msgjson, err = cast.ToStringE(data[0]); err != nil {
return "", 0, err
}
if deadline, err = cast.ToInt64E(data[1]); err != nil {
return "", 0, err
}
return msgjson, deadline, nil
}
return "", 0, ErrNoProcessableTask
} }
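The deadline score computed by dequeueCmd is the earlier of now+Timeout and Deadline, falling back to whichever field is set. The Go sketch below mirrors that Lua logic for illustration only; asynq itself computes the score inside the script.

package example

import (
	"errors"
	"time"
)

// taskDeadline re-implements the score computation from dequeueCmd:
// it returns the earlier of now+timeout and deadline, using whichever
// of the two is non-zero, and errors out when neither is set.
func taskDeadline(now time.Time, timeout, deadline int64) (time.Time, error) {
	switch {
	case timeout != 0 && deadline != 0:
		d := now.Unix() + timeout
		if deadline < d {
			d = deadline
		}
		return time.Unix(d, 0), nil
	case timeout != 0:
		return time.Unix(now.Unix()+timeout, 0), nil
	case deadline != 0:
		return time.Unix(deadline, 0), nil
	default:
		return time.Time{}, errors.New("both timeout and deadline are not set")
	}
}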
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:processed:<yyyy-mm-dd> // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp // ARGV[2] -> stats expiration timestamp
// Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(` var doneCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
local n = redis.call("INCR", KEYS[2]) return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2]) redis.call("EXPIREAT", KEYS[3], ARGV[2])
end end
return redis.status_reply("OK") return redis.status_reply("OK")
`) `)
// Done removes the task from in-progress queue to mark the task as done. // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[4] -> unique key
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID
var doneUniqueCmd = redis.NewScript(`
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[2])
end
if redis.call("GET", KEYS[4]) == ARGV[3] then
redis.call("DEL", KEYS[4])
end
return redis.status_reply("OK")
`)
// Done removes the task from active queue to mark the task as done.
// It removes a uniqueness lock acquired by the task, if any.
func (r *RDB) Done(msg *base.TaskMessage) error { func (r *RDB) Done(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
now := time.Now() now := time.Now()
processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return doneCmd.Run(r.client, keys := []string{
[]string{base.InProgressQueue, processedKey}, base.ActiveKey(msg.Queue),
bytes, expireAt.Unix()).Err() base.DeadlinesKey(msg.Queue),
base.ProcessedKey(msg.Queue, now),
}
args := []interface{}{encoded, expireAt.Unix()}
if len(msg.UniqueKey) > 0 {
keys = append(keys, msg.UniqueKey)
args = append(args, msg.ID.String())
return doneUniqueCmd.Run(r.client, keys, args...).Err()
}
return doneCmd.Run(r.client, keys, args...).Err()
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:queues:<qname> // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:{<qname>}
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue. // Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(` var requeueCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("RPUSH", KEYS[2], ARGV[1]) return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
redis.call("RPUSH", KEYS[3], ARGV[1])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue. // Requeue moves the task from active queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error { func (r *RDB) Requeue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
return requeueCmd.Run(r.client, return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.QueueKey(msg.Queue)}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.QueueKey(msg.Queue)},
string(bytes)).Err() encoded).Err()
} }
// Schedule adds the task to the backlog queue to be processed in the future. // Schedule adds the task to the backlog queue to be processed in the future.
func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error { func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
score := float64(processAt.Unix()) score := float64(processAt.Unix())
return r.client.ZAdd(base.ScheduledQueue, return r.client.ZAdd(base.ScheduledKey(msg.Queue), &redis.Z{Score: score, Member: encoded}).Err()
&redis.Z{Member: string(bytes), Score: score}).Err()
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> unique key
// KEYS[2] -> asynq:retry // KEYS[2] -> asynq:{<qname>}:scheduled
// KEYS[3] -> asynq:processed:<yyyy-mm-dd> // ARGV[1] -> task ID
// KEYS[4] -> asynq:failure:<yyyy-mm-dd> // ARGV[2] -> uniqueness lock TTL
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message
var scheduleUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[4])
return 1
`)
// ScheduleUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
score := float64(processAt.Unix())
res, err := scheduleUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.ScheduledKey(msg.Queue)},
msg.ID.String(), int(ttl.Seconds()), score, encoded).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
// KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:{<qname>}:retry
// KEYS[4] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq:{<qname>}:failed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.ActiveQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue // ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp // ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp // ARGV[4] -> stats expiration timestamp
var retryCmd = redis.NewScript(` var retryCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2]) return redis.error_reply("NOT FOUND")
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end end
local m = redis.call("INCR", KEYS[4]) if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
if tonumber(m) == 1 then return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4]) redis.call("EXPIREAT", KEYS[4], ARGV[4])
end end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[4])
end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Retry moves the task from in-progress to retry queue, incrementing retry count // Retry moves the task from active to retry queue, incrementing retry count
// and assigning error message to the task message. // and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error { func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
bytesToRemove, err := json.Marshal(msg) msgToRemove, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
modified := *msg modified := *msg
modified.Retried++ modified.Retried++
modified.ErrorMsg = errMsg modified.ErrorMsg = errMsg
bytesToAdd, err := json.Marshal(&modified) msgToAdd, err := base.EncodeMessage(&modified)
if err != nil { if err != nil {
return err return err
} }
now := time.Now() now := time.Now()
processedKey := base.ProcessedKey(now) processedKey := base.ProcessedKey(msg.Queue, now)
failureKey := base.FailureKey(now) failedKey := base.FailedKey(msg.Queue, now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return retryCmd.Run(r.client, return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.RetryQueue, processedKey, failureKey}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.RetryKey(msg.Queue), processedKey, failedKey},
string(bytesToRemove), string(bytesToAdd), processAt.Unix(), expireAt.Unix()).Err() msgToRemove, msgToAdd, processAt.Unix(), expireAt.Unix()).Err()
} }
const ( const (
maxDeadTasks = 10000 maxArchiveSize = 10000 // maximum number of tasks in archive
deadExpirationInDays = 90 archivedExpirationInDays = 90 // number of days before an archived task gets deleted permanently
) )
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:dead // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:processed:<yyyy-mm-dd> // KEYS[3] -> asynq:{<qname>}:archived
// KEYS[4] -> asynq.failure:<yyyy-mm-dd> // KEYS[4] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // KEYS[5] -> asynq:{<qname>}:failed:<yyyy-mm-dd>
// ARGV[2] -> base.TaskMessage value to add to Dead queue // ARGV[1] -> base.TaskMessage value to remove
// ARGV[2] -> base.TaskMessage value to add
// ARGV[3] -> died_at UNIX timestamp // ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago) // ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100) // ARGV[5] -> max number of tasks in archive (e.g., 100)
// ARGV[6] -> stats expiration timestamp // ARGV[6] -> stats expiration timestamp
var killCmd = redis.NewScript(` var archiveCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2]) return redis.error_reply("NOT FOUND")
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end end
local m = redis.call("INCR", KEYS[4]) if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
if tonumber(m) == 1 then return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[3], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[3], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6]) redis.call("EXPIREAT", KEYS[4], ARGV[6])
end end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[6])
end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Kill sends the task to "dead" queue from in-progress queue, assigning // Archive sends the given task to archive, attaching the error message to the task.
// the error message to the task. // It also trims the archive by timestamp and set size.
// It also trims the set by timestamp and set size. func (r *RDB) Archive(msg *base.TaskMessage, errMsg string) error {
func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error { msgToRemove, err := base.EncodeMessage(msg)
bytesToRemove, err := json.Marshal(msg)
if err != nil { if err != nil {
return err return err
} }
modified := *msg modified := *msg
modified.ErrorMsg = errMsg modified.ErrorMsg = errMsg
bytesToAdd, err := json.Marshal(&modified) msgToAdd, err := base.EncodeMessage(&modified)
if err != nil { if err != nil {
return err return err
} }
now := time.Now() now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago limit := now.AddDate(0, 0, -archivedExpirationInDays).Unix() // 90 days ago
processedKey := base.ProcessedKey(now) processedKey := base.ProcessedKey(msg.Queue, now)
failureKey := base.FailureKey(now) failedKey := base.FailedKey(msg.Queue, now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return killCmd.Run(r.client, return archiveCmd.Run(r.client,
[]string{base.InProgressQueue, base.DeadQueue, processedKey, failureKey}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.ArchivedKey(msg.Queue), processedKey, failedKey},
string(bytesToRemove), string(bytesToAdd), now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err() msgToRemove, msgToAdd, now.Unix(), limit, maxArchiveSize, expireAt.Unix()).Err()
} }
// KEYS[1] -> asynq:in_progress // CheckAndEnqueue checks for scheduled/retry tasks for the given queues
// ARGV[1] -> queue prefix // and enqueues any tasks that are ready to be processed.
var requeueAllCmd = redis.NewScript(` func (r *RDB) CheckAndEnqueue(qnames ...string) error {
local msgs = redis.call("LRANGE", KEYS[1], 0, -1) for _, qname := range qnames {
if err := r.forwardAll(base.ScheduledKey(qname), base.QueueKey(qname)); err != nil {
return err
}
if err := r.forwardAll(base.RetryKey(qname), base.QueueKey(qname)); err != nil {
return err
}
}
return nil
}
// KEYS[1] -> source queue (e.g. asynq:{<qname>}:scheduled or asynq:{<qname>}:retry)
// KEYS[2] -> destination queue (e.g. asynq:{<qname>})
// ARGV[1] -> current unix time
// Note: Script moves tasks up to 100 at a time to keep the runtime of script short.
var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1], "LIMIT", 0, 100)
for _, msg in ipairs(msgs) do for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg) redis.call("LPUSH", KEYS[2], msg)
local qkey = ARGV[1] .. decoded["Queue"] redis.call("ZREM", KEYS[1], msg)
redis.call("RPUSH", qkey, msg)
redis.call("LREM", KEYS[1], 0, msg)
end end
return table.getn(msgs)`) return table.getn(msgs)`)
// RequeueAll moves all tasks from in-progress list to the queue // forward moves tasks with a score less than the current unix time
// and reports the number of tasks restored. // from the src zset to the dst list. It returns the number of tasks moved.
func (r *RDB) RequeueAll() (int64, error) { func (r *RDB) forward(src, dst string) (int, error) {
res, err := requeueAllCmd.Run(r.client, []string{base.InProgressQueue}, base.QueuePrefix).Result() now := float64(time.Now().Unix())
res, err := forwardCmd.Run(r.client, []string{src, dst}, now).Result()
if err != nil { if err != nil {
return 0, err return 0, err
} }
n, ok := res.(int64) return cast.ToInt(res), nil
if !ok {
return 0, fmt.Errorf("could not cast %v to int64", res)
}
return n, nil
} }
// CheckAndEnqueue checks for all scheduled tasks and enqueues any tasks that // forwardAll moves tasks with a score less than the current unix time from the src zset,
// have to be processed. // until there are no more tasks.
// func (r *RDB) forwardAll(src, dst string) (err error) {
// qnames specifies to which queues to send tasks. n := 1
func (r *RDB) CheckAndEnqueue(qnames ...string) error { for n != 0 {
delayed := []string{base.ScheduledQueue, base.RetryQueue} n, err = r.forward(src, dst)
for _, zset := range delayed {
var err error
if len(qnames) == 1 {
err = r.forwardSingle(zset, base.QueueKey(qnames[0]))
} else {
err = r.forward(zset)
}
if err != nil { if err != nil {
return err return err
} }
@@ -319,110 +489,128 @@ func (r *RDB) CheckAndEnqueue(qnames ...string) error {
return nil return nil
} }
// KEYS[1] -> source queue (e.g. scheduled or retry queue) // ListDeadlineExceeded returns a list of task messages that have exceeded the deadline from the given queues.
// ARGV[1] -> current unix time func (r *RDB) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
// ARGV[2] -> queue prefix var msgs []*base.TaskMessage
var forwardCmd = redis.NewScript(` opt := &redis.ZRangeBy{
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1]) Min: "-inf",
for _, msg in ipairs(msgs) do Max: strconv.FormatInt(deadline.Unix(), 10),
local decoded = cjson.decode(msg) }
local qkey = ARGV[2] .. decoded["Queue"] for _, qname := range qnames {
redis.call("LPUSH", qkey, msg) res, err := r.client.ZRangeByScore(base.DeadlinesKey(qname), opt).Result()
redis.call("ZREM", KEYS[1], msg) if err != nil {
end return nil, err
return msgs`) }
for _, s := range res {
// forward moves all tasks with a score less than the current unix time msg, err := base.DecodeMessage(s)
// from the src zset. if err != nil {
func (r *RDB) forward(src string) error { return nil, err
now := float64(time.Now().Unix()) }
return forwardCmd.Run(r.client, msgs = append(msgs, msg)
[]string{src}, now, base.QueuePrefix).Err() }
}
return msgs, nil
} }
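A minimal usage sketch (an assumption for illustration, not part of this diff): a caller holding an *RDB forwards due scheduled/retry tasks and then lists deadline-exceeded ones. Queue names and the Redis address are placeholders, and imports of fmt, log, time, go-redis/redis/v7, and internal/rdb are assumed.

r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379"}))
// Move scheduled/retry tasks that are now due into their pending queues.
if err := r.CheckAndEnqueue("default", "critical"); err != nil {
	log.Fatal(err)
}
// List active tasks whose deadlines have already passed.
msgs, err := r.ListDeadlineExceeded(time.Now(), "default", "critical")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("found %d deadline-exceeded tasks\n", len(msgs))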
// KEYS[1] -> source queue (e.g. scheduled or retry queue) // KEYS[1] -> asynq:servers:{<host:pid:sid>}
// KEYS[2] -> destination queue // KEYS[2] -> asynq:workers:{<host:pid:sid>}
var forwardSingleCmd = redis.NewScript(` // ARGV[1] -> TTL in seconds
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1]) // ARGV[2] -> server info
for _, msg in ipairs(msgs) do // ARGV[3:] -> alternate key-value pair of (worker id, worker data)
redis.call("LPUSH", KEYS[2], msg)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forwardSingle moves all tasks with a score less than the current unix time
// from the src zset to dst list.
func (r *RDB) forwardSingle(src, dst string) error {
now := float64(time.Now().Unix())
return forwardSingleCmd.Run(r.client,
[]string{src, dst}, now).Err()
}
// KEYS[1] -> asynq:ps:<host:pid>
// KEYS[2] -> asynq:ps
// KEYS[3] -> asynq:workers<host:pid>
// keys[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
// ARGV[3] -> process info
// ARGV[4:] -> alternate key-value pair of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score. // Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996 // ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeProcessInfoCmd = redis.NewScript(` var writeServerStateCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3]) redis.call("SETEX", KEYS[1], ARGV[1], ARGV[2])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1]) redis.call("DEL", KEYS[2])
redis.call("DEL", KEYS[3]) for i = 3, table.getn(ARGV)-1, 2 do
for i = 4, table.getn(ARGV)-1, 2 do redis.call("HSET", KEYS[2], ARGV[i], ARGV[i+1])
redis.call("HSET", KEYS[3], ARGV[i], ARGV[i+1])
end end
redis.call("EXPIRE", KEYS[3], ARGV[2]) redis.call("EXPIRE", KEYS[2], ARGV[1])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// WriteProcessState writes process state data to redis with expiration set to the value ttl. // WriteServerState writes server state data to redis with expiration set to the value ttl.
func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error { func (r *RDB) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
info := ps.Get()
bytes, err := json.Marshal(info) bytes, err := json.Marshal(info)
if err != nil { if err != nil {
return err return err
} }
var args []interface{} // args to the lua script
exp := time.Now().Add(ttl).UTC() exp := time.Now().Add(ttl).UTC()
workers := ps.GetWorkers() args := []interface{}{ttl.Seconds(), bytes} // args to the lua script
args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
for _, w := range workers { for _, w := range workers {
bytes, err := json.Marshal(w) bytes, err := json.Marshal(w)
if err != nil { if err != nil {
continue // skip bad data continue // skip bad data
} }
args = append(args, w.ID.String(), bytes) args = append(args, w.ID, bytes)
} }
pkey := base.ProcessInfoKey(info.Host, info.PID) skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
wkey := base.WorkersKey(info.Host, info.PID) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
return writeProcessInfoCmd.Run(r.client, if err := r.client.ZAdd(base.AllServers, &redis.Z{Score: float64(exp.Unix()), Member: skey}).Err(); err != nil {
[]string{pkey, base.AllProcesses, wkey, base.AllWorkers}, return err
args...).Err() }
if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Score: float64(exp.Unix()), Member: wkey}).Err(); err != nil {
return err
}
return writeServerStateCmd.Run(r.client, []string{skey, wkey}, args...).Err()
} }
// KEYS[1] -> asynq:ps // KEYS[1] -> asynq:servers:{<host:pid:sid>}
// KEYS[2] -> asynq:ps:<host:pid> // KEYS[2] -> asynq:workers:{<host:pid:sid>}
// KEYS[3] -> asynq:workers var clearServerStateCmd = redis.NewScript(`
// KEYS[4] -> asynq:workers<host:pid> redis.call("DEL", KEYS[1])
var clearProcessInfoCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2]) redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// ClearProcessState deletes process state data from redis. // ClearServerState deletes server state data from redis.
func (r *RDB) ClearProcessState(ps *base.ProcessState) error { func (r *RDB) ClearServerState(host string, pid int, serverID string) error {
info := ps.Get() skey := base.ServerInfoKey(host, pid, serverID)
host, pid := info.Host, info.PID wkey := base.WorkersKey(host, pid, serverID)
pkey := base.ProcessInfoKey(host, pid) if err := r.client.ZRem(base.AllServers, skey).Err(); err != nil {
wkey := base.WorkersKey(host, pid) return err
return clearProcessInfoCmd.Run(r.client, }
[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err() if err := r.client.ZRem(base.AllWorkers, wkey).Err(); err != nil {
return err
}
return clearServerStateCmd.Run(r.client, []string{skey, wkey}).Err()
}
// KEYS[1] -> asynq:schedulers:{<schedulerID>}
// ARGV[1] -> TTL in seconds
// ARGV[2:] -> scheduler entries
var writeSchedulerEntriesCmd = redis.NewScript(`
redis.call("DEL", KEYS[1])
for i = 2, #ARGV do
redis.call("LPUSH", KEYS[1], ARGV[i])
end
redis.call("EXPIRE", KEYS[1], ARGV[1])
return redis.status_reply("OK")`)
// WriteSchedulerEntries writes scheduler entries data to redis with expiration set to the value ttl.
func (r *RDB) WriteSchedulerEntries(schedulerID string, entries []*base.SchedulerEntry, ttl time.Duration) error {
args := []interface{}{ttl.Seconds()}
for _, e := range entries {
bytes, err := json.Marshal(e)
if err != nil {
continue // skip bad data
}
args = append(args, bytes)
}
exp := time.Now().Add(ttl).UTC()
key := base.SchedulerEntriesKey(schedulerID)
err := r.client.ZAdd(base.AllSchedulers, &redis.Z{Score: float64(exp.Unix()), Member: key}).Err()
if err != nil {
return err
}
return writeSchedulerEntriesCmd.Run(r.client, []string{key}, args...).Err()
}
// ClearSchedulerEntries deletes scheduler entries data from redis.
func (r *RDB) ClearSchedulerEntries(schedulerID string) error {
key := base.SchedulerEntriesKey(schedulerID)
if err := r.client.ZRem(base.AllSchedulers, key).Err(); err != nil {
return err
}
return r.client.Del(key).Err()
} }
// CancelationPubSub returns a pubsub for cancelation messages.
@@ -440,3 +628,32 @@ func (r *RDB) CancelationPubSub() (*redis.PubSub, error) {
func (r *RDB) PublishCancelation(id string) error {
return r.client.Publish(base.CancelChannel, id).Err()
}
// KEYS[1] -> asynq:scheduler_history:<entryID>
// ARGV[1] -> enqueued_at timestamp
// ARGV[2] -> serialized SchedulerEnqueueEvent data
// ARGV[3] -> max number of events to be persisted
var recordSchedulerEnqueueEventCmd = redis.NewScript(`
redis.call("ZREMRANGEBYRANK", KEYS[1], 0, -ARGV[3])
redis.call("ZADD", KEYS[1], ARGV[1], ARGV[2])
return redis.status_reply("OK")`)
// Maximum number of enqueue events to store per entry.
const maxEvents = 1000
// RecordSchedulerEnqueueEvent records the time when the given task was enqueued.
func (r *RDB) RecordSchedulerEnqueueEvent(entryID string, event *base.SchedulerEnqueueEvent) error {
key := base.SchedulerHistoryKey(entryID)
data, err := json.Marshal(event)
if err != nil {
return err
}
return recordSchedulerEnqueueEventCmd.Run(
r.client, []string{key}, event.EnqueuedAt.Unix(), data, maxEvents).Err()
}
// ClearSchedulerHistory deletes the enqueue event history for the given scheduler entry.
func (r *RDB) ClearSchedulerHistory(entryID string) error {
key := base.SchedulerHistoryKey(entryID)
return r.client.Del(key).Err()
}
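A hedged sketch of the scheduler-history calls above (an illustration, not from the diff): r is an *RDB, the entry ID is a placeholder, and only the EnqueuedAt field used by the script is set.

evt := &base.SchedulerEnqueueEvent{EnqueuedAt: time.Now()} // other event fields omitted in this sketch
if err := r.RecordSchedulerEnqueueEvent("entry-3f2a", evt); err != nil {
	log.Fatal(err)
}
// At most maxEvents events are kept per entry; the history can also be
// dropped wholesale when the entry is removed.
if err := r.ClearSchedulerHistory("entry-3f2a"); err != nil {
	log.Fatal(err)
}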

File diff suppressed because it is too large
@@ -0,0 +1,199 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package testbroker exports a broker implementation that should be used in package tests.
package testbroker
import (
"errors"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
)
var errRedisDown = errors.New("asynqtest: redis is down")
// TestBroker is a broker implementation that makes it possible
// to simulate Redis failure in tests.
type TestBroker struct {
mu sync.Mutex
sleeping bool
// real broker
real base.Broker
}
// Make sure TestBroker implements Broker interface at compile time.
var _ base.Broker = (*TestBroker)(nil)
func NewTestBroker(b base.Broker) *TestBroker {
return &TestBroker{real: b}
}
func (tb *TestBroker) Sleep() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = true
}
func (tb *TestBroker) Wakeup() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = false
}
func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Enqueue(msg)
}
func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.EnqueueUnique(msg, ttl)
}
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.Dequeue(qnames...)
}
func (tb *TestBroker) Done(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Done(msg)
}
func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(msg)
}
func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(msg, processAt, ttl)
}
func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(msg, processAt, errMsg)
}
func (tb *TestBroker) Archive(msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Archive(msg, errMsg)
}
func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.CheckAndEnqueue(qnames...)
}
func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.ListDeadlineExceeded(deadline, qnames...)
}
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.WriteServerState(info, workers, ttl)
}
func (tb *TestBroker) ClearServerState(host string, pid int, serverID string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ClearServerState(host, pid, serverID)
}
func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.CancelationPubSub()
}
func (tb *TestBroker) PublishCancelation(id string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.PublishCancelation(id)
}
func (tb *TestBroker) Ping() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Ping()
}
func (tb *TestBroker) Close() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Close()
}
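A hedged usage sketch (not part of this file; the Redis address and DB number are placeholders) showing how a test can flip the broker between healthy and failing states:

package example_test

import (
	"testing"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/testbroker"
)

func TestSurvivesRedisOutage(t *testing.T) {
	r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 14}))
	broker := testbroker.NewTestBroker(r)

	broker.Sleep() // every broker method now returns errRedisDown
	if err := broker.Ping(); err == nil {
		t.Error("Ping() = nil, want error while the broker is sleeping")
	}

	broker.Wakeup() // calls are delegated to the real RDB again
	if err := broker.Ping(); err != nil {
		t.Errorf("Ping() = %v, want nil after Wakeup", err)
	}
}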


@@ -5,6 +5,7 @@
package asynq
import (
"encoding/json"
"fmt"
"time"
@@ -30,6 +31,29 @@ func (p Payload) Has(key string) bool {
return ok
}
func toInt(v interface{}) (int, error) {
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return int(val), nil
default:
return cast.ToIntE(v)
}
}
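A self-contained sketch (an assumption, not part of the diff) of why the json.Number branch matters: when JSON is decoded with Decoder.UseNumber, which the payload round-trip tests below appear to exercise, numbers arrive as json.Number rather than int or float64.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	dec := json.NewDecoder(bytes.NewBufferString(`{"user_id": 42}`))
	dec.UseNumber() // numbers decode as json.Number instead of float64
	var m map[string]interface{}
	if err := dec.Decode(&m); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", m["user_id"]) // json.Number
	n, _ := m["user_id"].(json.Number).Int64()
	fmt.Println(n) // 42
}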
// String returns a string representation of payload data.
func (p Payload) String() string {
return fmt.Sprint(p.data)
}
// MarshalJSON returns the JSON encoding of payload data.
func (p Payload) MarshalJSON() ([]byte, error) {
return json.Marshal(p.data)
}
// GetString returns a string value if a string type is associated with // GetString returns a string value if a string type is associated with
// the key, otherwise reports an error. // the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) { func (p Payload) GetString(key string) (string, error) {
@@ -47,7 +71,7 @@ func (p Payload) GetInt(key string) (int, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
return cast.ToIntE(v) return toInt(v)
} }
// GetFloat64 returns a float64 value if a numeric type is associated with // GetFloat64 returns a float64 value if a numeric type is associated with
@@ -57,7 +81,12 @@ func (p Payload) GetFloat64(key string) (float64, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
switch v := v.(type) {
case json.Number:
return v.Float64()
default:
return cast.ToFloat64E(v) return cast.ToFloat64E(v)
}
} }
// GetBool returns a boolean value if a boolean type is associated with // GetBool returns a boolean value if a boolean type is associated with
@@ -87,7 +116,20 @@ func (p Payload) GetIntSlice(key string) ([]int, error) {
if !ok { if !ok {
return nil, &errKeyNotFound{key} return nil, &errKeyNotFound{key}
} }
switch v := v.(type) {
case []interface{}:
var res []int
for _, elem := range v {
val, err := toInt(elem)
if err != nil {
return nil, err
}
res = append(res, int(val))
}
return res, nil
default:
return cast.ToIntSliceE(v) return cast.ToIntSliceE(v)
}
} }
// GetStringMap returns a map of string to empty interface // GetStringMap returns a map of string to empty interface
@@ -131,7 +173,20 @@ func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
if !ok { if !ok {
return nil, &errKeyNotFound{key} return nil, &errKeyNotFound{key}
} }
switch v := v.(type) {
case map[string]interface{}:
res := make(map[string]int)
for key, val := range v {
ival, err := toInt(val)
if err != nil {
return nil, err
}
res[key] = ival
}
return res, nil
default:
return cast.ToStringMapIntE(v) return cast.ToStringMapIntE(v)
}
} }
// GetStringMapBool returns a map of string to boolean // GetStringMapBool returns a map of string to boolean
@@ -162,5 +217,14 @@ func (p Payload) GetDuration(key string) (time.Duration, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return time.Duration(val), nil
default:
return cast.ToDurationE(v) return cast.ToDurationE(v)
}
} }
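A hedged caller-side sketch (the task type, payload key, and handler are hypothetical): a time.Duration enqueued as map[string]interface{}{"delay": 15 * time.Minute} marshals to its nanosecond count, comes back through the decoder as a json.Number, and GetDuration converts it back to a Duration.

func handleReminder(ctx context.Context, t *asynq.Task) error {
	d, err := t.Payload.GetDuration("delay")
	if err != nil {
		return err
	}
	time.Sleep(d) // stand-in for real work; d is 15m again after the round trip
	return nil
}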


@@ -6,341 +6,631 @@ package asynq
import (
"encoding/json"
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
func TestPayloadGet(t *testing.T) { type payloadTest struct {
names := []string{"luke", "anakin", "rey"} data map[string]interface{}
primes := []int{2, 3, 5, 7, 11, 13, 17} key string
user := map[string]interface{}{"name": "Ken", "score": 3.14} nonkey string
location := map[string]string{"address": "123 Main St.", "state": "NY", "zipcode": "10002"} }
favs := map[string][]string{
"movies": []string{"forrest gump", "star wars"},
"tv_shows": []string{"game of thrones", "HIMYM", "breaking bad"},
}
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
now := time.Now()
duration := 15 * time.Minute
data := map[string]interface{}{ func TestPayloadString(t *testing.T) {
"greeting": "Hello", tests := []payloadTest{
"user_id": 9876, {
"pi": 3.1415, data: map[string]interface{}{"name": "gopher"},
"enabled": false, key: "name",
"names": names, nonkey: "unknown",
"primes": primes, },
"user": user,
"location": location,
"favs": favs,
"counter": counter,
"features": features,
"timestamp": now,
"duration": duration,
} }
payload := Payload{data}
gotStr, err := payload.GetString("greeting") for _, tc := range tests {
if gotStr != "Hello" || err != nil { payload := Payload{tc.data}
got, err := payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil", t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
"greeting", gotStr, err, "Hello") tc.key, got, err, tc.data[tc.key])
} }
gotInt, err := payload.GetInt("user_id") // encode and then decode task messsage.
if gotInt != 9876 || err != nil { in := h.NewTaskMessage("testing", tc.data)
t.Errorf("Payload.GetInt(%q) = %v, %v, want, %v, nil", encoded, err := base.EncodeMessage(in)
"user_id", gotInt, err, 9876) if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
} }
gotFloat, err := payload.GetFloat64("pi") // access non-existent key.
if gotFloat != 3.1415 || err != nil { got, err = payload.GetString(tc.nonkey)
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want, %v, nil", if err == nil || got != "" {
"pi", gotFloat, err, 3.141592) t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
tc.key, got, err)
} }
gotBool, err := payload.GetBool("enabled")
if gotBool != false || err != nil {
t.Errorf("Payload.GetBool(%q) = %v, %v, want, %v, nil",
"enabled", gotBool, err, false)
}
gotStrSlice, err := payload.GetStringSlice("names")
if diff := cmp.Diff(gotStrSlice, names); diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"names", gotStrSlice, err, names, diff)
}
gotIntSlice, err := payload.GetIntSlice("primes")
if diff := cmp.Diff(gotIntSlice, primes); diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"primes", gotIntSlice, err, primes, diff)
}
gotStrMap, err := payload.GetStringMap("user")
if diff := cmp.Diff(gotStrMap, user); diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"user", gotStrMap, err, user, diff)
}
gotStrMapStr, err := payload.GetStringMapString("location")
if diff := cmp.Diff(gotStrMapStr, location); diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"location", gotStrMapStr, err, location, diff)
}
gotStrMapStrSlice, err := payload.GetStringMapStringSlice("favs")
if diff := cmp.Diff(gotStrMapStrSlice, favs); diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"favs", gotStrMapStrSlice, err, favs, diff)
}
gotStrMapInt, err := payload.GetStringMapInt("counter")
if diff := cmp.Diff(gotStrMapInt, counter); diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"counter", gotStrMapInt, err, counter, diff)
}
gotStrMapBool, err := payload.GetStringMapBool("features")
if diff := cmp.Diff(gotStrMapBool, features); diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"features", gotStrMapBool, err, features, diff)
}
gotTime, err := payload.GetTime("timestamp")
if !gotTime.Equal(now) {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
"timestamp", gotTime, err, now)
}
gotDuration, err := payload.GetDuration("duration")
if gotDuration != duration {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
"duration", gotDuration, err, duration)
} }
} }
func TestPayloadGetWithMarshaling(t *testing.T) { func TestPayloadInt(t *testing.T) {
names := []string{"luke", "anakin", "rey"} tests := []payloadTest{
primes := []int{2, 3, 5, 7, 11, 13, 17} {
user := map[string]interface{}{"name": "Ken", "score": 3.14} data: map[string]interface{}{"user_id": 42},
location := map[string]string{"address": "123 Main St.", "state": "NY", "zipcode": "10002"} key: "user_id",
favs := map[string][]string{ nonkey: "unknown",
"movies": []string{"forrest gump", "star wars"}, },
"tv_shows": []string{"game of throwns", "HIMYM", "breaking bad"},
} }
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetInt(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadFloat64(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"pi": 3.14},
key: "pi",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetFloat64(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadBool(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"enabled": true},
key: "enabled",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetBool(tc.nonkey)
if err == nil || got != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
tc.key, got, err)
}
}
}
func TestPayloadStringSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
key: "names",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadIntSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"nums": []int{9, 8, 7}},
key: "nums",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetIntSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetIntSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetIntSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMap(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
key: "user",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMap(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key)
ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
switch val.(type) {
case json.Number:
return true
default:
return false
}
})
diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
tc.key, got, err, tc.data[tc.key], diff)
}
// access non-existent key.
got, err = payload.GetStringMap(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
key: "address",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapString(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapString(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapString(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapStringSlice(t *testing.T) {
favs := map[string][]string{
"movies": {"forrest gump", "star wars"},
"tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
}
tests := []payloadTest{
{
data: map[string]interface{}{"favorites": favs},
key: "favorites",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapInt(t *testing.T) {
counter := map[string]int{ counter := map[string]int{
"a": 1, "a": 1,
"b": 101, "b": 101,
"c": 42, "c": 42,
} }
tests := []payloadTest{
{
data: map[string]interface{}{"counts": counter},
key: "counts",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapInt(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapInt(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapInt(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapBool(t *testing.T) {
features := map[string]bool{ features := map[string]bool{
"A": false, "A": false,
"B": true, "B": true,
"C": true, "C": true,
} }
now := time.Now() tests := []payloadTest{
duration := 15 * time.Minute {
data: map[string]interface{}{"features": features},
key: "features",
nonkey: "unknown",
},
}
in := Payload{map[string]interface{}{ for _, tc := range tests {
"subject": "Hello", payload := Payload{tc.data}
"recipient_id": 9876,
"pi": 3.14, got, err := payload.GetStringMapBool(tc.key)
"enabled": true, diff := cmp.Diff(got, tc.data[tc.key])
"names": names, if err != nil || diff != "" {
"primes": primes, t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
"user": user, tc.key, got, err, tc.data[tc.key])
"location": location, }
"favs": favs,
"counter": counter, // encode and then decode task messsage.
"features": features, in := h.NewTaskMessage("testing", tc.data)
"timestamp": now, encoded, err := base.EncodeMessage(in)
"duration": duration,
}}
// encode and then decode task messsage
inMsg := h.NewTaskMessage("testing", in.data)
data, err := json.Marshal(inMsg)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var outMsg base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(data, &outMsg)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
out := Payload{outMsg.Payload} payload = Payload{out.Payload}
got, err = payload.GetStringMapBool(tc.key)
gotStr, err := out.GetString("subject") diff = cmp.Diff(got, tc.data[tc.key])
if gotStr != "Hello" || err != nil { if err != nil || diff != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want %q, nil", t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
"subject", gotStr, err, "Hello") tc.key, got, err, tc.data[tc.key])
} }
gotInt, err := out.GetInt("recipient_id") // access non-existent key.
if gotInt != 9876 || err != nil { got, err = payload.GetStringMapBool(tc.nonkey)
t.Errorf("Payload.GetInt(%q) = %v, %v; want %v, nil", if err == nil || got != nil {
"recipient_id", gotInt, err, 9876) t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
tc.key, got, err)
} }
gotFloat, err := out.GetFloat64("pi")
if gotFloat != 3.14 || err != nil {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want %v, nil",
"pi", gotFloat, err, 3.14)
}
gotBool, err := out.GetBool("enabled")
if gotBool != true || err != nil {
t.Errorf("Payload.GetBool(%q) = %v, %v; want %v, nil",
"enabled", gotBool, err, true)
}
gotStrSlice, err := out.GetStringSlice("names")
if diff := cmp.Diff(gotStrSlice, names); diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"names", gotStrSlice, err, names, diff)
}
gotIntSlice, err := out.GetIntSlice("primes")
if diff := cmp.Diff(gotIntSlice, primes); diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"primes", gotIntSlice, err, primes, diff)
}
gotStrMap, err := out.GetStringMap("user")
if diff := cmp.Diff(gotStrMap, user); diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"user", gotStrMap, err, user, diff)
}
gotStrMapStr, err := out.GetStringMapString("location")
if diff := cmp.Diff(gotStrMapStr, location); diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"location", gotStrMapStr, err, location, diff)
}
gotStrMapStrSlice, err := out.GetStringMapStringSlice("favs")
if diff := cmp.Diff(gotStrMapStrSlice, favs); diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"favs", gotStrMapStrSlice, err, favs, diff)
}
gotStrMapInt, err := out.GetStringMapInt("counter")
if diff := cmp.Diff(gotStrMapInt, counter); diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"counter", gotStrMapInt, err, counter, diff)
}
gotStrMapBool, err := out.GetStringMapBool("features")
if diff := cmp.Diff(gotStrMapBool, features); diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"features", gotStrMapBool, err, features, diff)
}
gotTime, err := out.GetTime("timestamp")
if !gotTime.Equal(now) {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
"timestamp", gotTime, err, now)
}
gotDuration, err := out.GetDuration("duration")
if gotDuration != duration {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
"duration", gotDuration, err, duration)
} }
} }
func TestPayloadKeyNotFound(t *testing.T) { func TestPayloadTime(t *testing.T) {
payload := Payload{nil} tests := []payloadTest{
{
key := "something" data: map[string]interface{}{"current": time.Now()},
gotStr, err := payload.GetString(key) key: "current",
if err == nil || gotStr != "" { nonkey: "unknown",
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error", },
key, gotStr, err)
} }
gotInt, err := payload.GetInt(key) for _, tc := range tests {
if err == nil || gotInt != 0 { payload := Payload{tc.data}
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
key, gotInt, err) got, err := payload.GetTime(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
} }
gotFloat, err := payload.GetFloat64(key) // encode and then decode task messsage.
if err == nil || gotFloat != 0 { in := h.NewTaskMessage("testing", tc.data)
t.Errorf("Payload.GetFloat64(%q = %v, %v; want 0, error", encoded, err := base.EncodeMessage(in)
key, gotFloat, err) if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetTime(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
} }
gotBool, err := payload.GetBool(key) // access non-existent key.
if err == nil || gotBool != false { got, err = payload.GetTime(tc.nonkey)
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error", if err == nil || !got.IsZero() {
key, gotBool, err) t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Time{})
}
}
}
func TestPayloadDuration(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"duration": 15 * time.Minute},
key: "duration",
nonkey: "unknown",
},
} }
gotStrSlice, err := payload.GetStringSlice(key) for _, tc := range tests {
if err == nil || gotStrSlice != nil { payload := Payload{tc.data}
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
key, gotStrSlice, err) got, err := payload.GetDuration(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
} }
gotIntSlice, err := payload.GetIntSlice(key) // encode and then decode task messsage.
if err == nil || gotIntSlice != nil { in := h.NewTaskMessage("testing", tc.data)
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error", encoded, err := base.EncodeMessage(in)
key, gotIntSlice, err) if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetDuration(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
} }
gotStrMap, err := payload.GetStringMap(key) // access non-existent key.
if err == nil || gotStrMap != nil { got, err = payload.GetDuration(tc.nonkey)
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error", if err == nil || got != 0 {
key, gotStrMap, err) t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Duration(0))
} }
gotStrMapStr, err := payload.GetStringMapString(key)
if err == nil || gotStrMapStr != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
key, gotStrMapStr, err)
}
gotStrMapStrSlice, err := payload.GetStringMapStringSlice(key)
if err == nil || gotStrMapStrSlice != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
key, gotStrMapStrSlice, err)
}
gotStrMapInt, err := payload.GetStringMapInt(key)
if err == nil || gotStrMapInt != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want nil, error",
key, gotStrMapInt, err)
}
gotStrMapBool, err := payload.GetStringMapBool(key)
if err == nil || gotStrMapBool != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want nil, error",
key, gotStrMapBool, err)
}
gotTime, err := payload.GetTime(key)
if err == nil || !gotTime.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, error",
key, gotTime, err, time.Time{})
}
gotDuration, err := payload.GetDuration(key)
if err == nil || gotDuration != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want 0, error",
key, gotDuration, err)
} }
} }
@@ -356,3 +646,30 @@ func TestPayloadHas(t *testing.T) {
t.Errorf("Payload.Has(%q) = true, want false", "name") t.Errorf("Payload.Has(%q) = true, want false", "name")
} }
} }
func TestPayloadDebuggingStrings(t *testing.T) {
data := map[string]interface{}{
"foo": 123,
"bar": "hello",
"baz": false,
}
payload := Payload{data: data}
if payload.String() != fmt.Sprint(data) {
t.Errorf("Payload.String() = %q, want %q",
payload.String(), fmt.Sprint(data))
}
got, err := payload.MarshalJSON()
if err != nil {
t.Fatal(err)
}
want, err := json.Marshal(data)
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, want); diff != "" {
t.Errorf("Payload.MarhsalJSON() = %s, want %s; (-want,+got)\n%s",
got, want, diff)
}
}


@@ -6,22 +6,25 @@ package asynq
import (
"context"
"errors"
"fmt"
"math/rand"
"runtime"
"runtime/debug"
"sort"
"strings"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"golang.org/x/time/rate"
)
type processor struct { type processor struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
ps *base.ProcessState
handler Handler handler Handler
@@ -30,10 +33,12 @@ type processor struct {
// orderedQueues is set only in strict-priority mode. // orderedQueues is set only in strict-priority mode.
orderedQueues []string orderedQueues []string
retryDelayFunc retryDelayFunc retryDelayFunc RetryDelayFunc
errHandler ErrorHandler errHandler ErrorHandler
shutdownTimeout time.Duration
// channel via which to send sync requests to syncer. // channel via which to send sync requests to syncer.
syncRequestCh chan<- *syncRequest syncRequestCh chan<- *syncRequest
@@ -49,43 +54,59 @@ type processor struct {
done chan struct{} done chan struct{}
once sync.Once once sync.Once
// abort channel is closed when the shutdown of the "processor" goroutine starts. // quit channel is closed when the shutdown of the "processor" goroutine starts.
abort chan struct{}
// quit channel communicates to the in-flight worker goroutines to stop.
quit chan struct{} quit chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks. // abort channel communicates to the in-flight worker goroutines to stop.
abort chan struct{}
// cancelations is a set of cancel functions for all active tasks.
cancelations *base.Cancelations cancelations *base.Cancelations
starting chan<- *workerInfo
finished chan<- *base.TaskMessage
} }
type retryDelayFunc func(n int, err error, task *Task) time.Duration type processorParams struct {
logger *log.Logger
broker base.Broker
retryDelayFunc RetryDelayFunc
syncCh chan<- *syncRequest
cancelations *base.Cancelations
concurrency int
queues map[string]int
strictPriority bool
errHandler ErrorHandler
shutdownTimeout time.Duration
starting chan<- *workerInfo
finished chan<- *base.TaskMessage
}
// newProcessor constructs a new processor. // newProcessor constructs a new processor.
func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc, func newProcessor(params processorParams) *processor {
syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor { queues := normalizeQueues(params.queues)
info := ps.Get()
qcfg := normalizeQueueCfg(info.Queues)
orderedQueues := []string(nil) orderedQueues := []string(nil)
if info.StrictPriority { if params.strictPriority {
orderedQueues = sortByPriority(qcfg) orderedQueues = sortByPriority(queues)
} }
return &processor{ return &processor{
logger: l, logger: params.logger,
rdb: r, broker: params.broker,
ps: ps, queueConfig: queues,
queueConfig: qcfg,
orderedQueues: orderedQueues, orderedQueues: orderedQueues,
retryDelayFunc: fn, retryDelayFunc: params.retryDelayFunc,
syncRequestCh: syncCh, syncRequestCh: params.syncCh,
cancelations: c, cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1), errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, info.Concurrency), sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}), done: make(chan struct{}),
abort: make(chan struct{}),
quit: make(chan struct{}), quit: make(chan struct{}),
errHandler: errHandler, abort: make(chan struct{}),
errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }), handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
shutdownTimeout: params.shutdownTimeout,
starting: params.starting,
finished: params.finished,
} }
} }
@@ -93,9 +114,9 @@ func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc
// It's safe to call this method multiple times. // It's safe to call this method multiple times.
func (p *processor) stop() { func (p *processor) stop() {
p.once.Do(func() { p.once.Do(func() {
p.logger.Info("Processor shutting down...") p.logger.Debug("Processor shutting down...")
// Unblock if processor is waiting for sema token. // Unblock if processor is waiting for sema token.
close(p.abort) close(p.quit)
// Signal the processor goroutine to stop processing tasks // Signal the processor goroutine to stop processing tasks
// from the queue. // from the queue.
p.done <- struct{}{} p.done <- struct{}{}
@@ -106,35 +127,24 @@ func (p *processor) stop() {
func (p *processor) terminate() { func (p *processor) terminate() {
p.stop() p.stop()
// IDEA: Allow user to customize this timeout value. time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
const timeout = 8 * time.Second
time.AfterFunc(timeout, func() { close(p.quit) })
p.logger.Info("Waiting for all workers to finish...") p.logger.Info("Waiting for all workers to finish...")
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
// block until all workers have released the token // block until all workers have released the token
for i := 0; i < cap(p.sema); i++ { for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{} p.sema <- struct{}{}
} }
p.logger.Info("All workers have finished") p.logger.Info("All workers have finished")
p.restore() // move any unfinished tasks back to the queue.
} }
func (p *processor) start(wg *sync.WaitGroup) { func (p *processor) start(wg *sync.WaitGroup) {
// NOTE: The call to "restore" needs to complete before starting
// the processor goroutine.
p.restore()
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
for { for {
select { select {
case <-p.done: case <-p.done:
p.logger.Info("Processor done") p.logger.Debug("Processor done")
return return
default: default:
p.exec() p.exec()
@@ -146,134 +156,162 @@ func (p *processor) start(wg *sync.WaitGroup) {
// exec pulls a task out of the queue and starts a worker goroutine to // exec pulls a task out of the queue and starts a worker goroutine to
// process the task. // process the task.
func (p *processor) exec() { func (p *processor) exec() {
qnames := p.queues()
msg, err := p.rdb.Dequeue(qnames...)
if err == rdb.ErrNoProcessableTask {
// queues are empty, this is a normal behavior.
if len(p.queueConfig) > 1 {
// sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: With multiple queues, we are not using blocking pop operation and
// polling queues instead. This adds significant load to redis.
time.Sleep(time.Second)
}
return
}
if err != nil {
if p.errLogLimiter.Allow() {
p.logger.Error("Dequeue error: %v", err)
}
return
}
select { select {
case <-p.abort: case <-p.quit:
// shutdown is starting, return immediately after requeuing the message.
p.requeue(msg)
return return
case p.sema <- struct{}{}: // acquire token case p.sema <- struct{}{}: // acquire token
p.ps.AddWorkerStats(msg, time.Now()) qnames := p.queues()
msg, deadline, err := p.broker.Dequeue(qnames...)
switch {
case err == rdb.ErrNoProcessableTask:
p.logger.Debug("All queues are empty")
// Queues are empty, this is a normal behavior.
// Sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: We are not using blocking pop operation and polling queues instead.
// This adds significant load to redis.
time.Sleep(time.Second)
<-p.sema // release token
return
case err != nil:
if p.errLogLimiter.Allow() {
p.logger.Errorf("Dequeue error: %v", err)
}
<-p.sema // release token
return
}
p.starting <- &workerInfo{msg, time.Now(), deadline}
go func() { go func() {
defer func() { defer func() {
p.ps.DeleteWorkerStats(msg) p.finished <- msg
<-p.sema /* release token */ <-p.sema // release token
}() }()
resCh := make(chan error, 1) ctx, cancel := createContext(msg, deadline)
task := NewTask(msg.Type, msg.Payload)
ctx, cancel := createContext(msg)
p.cancelations.Add(msg.ID.String(), cancel) p.cancelations.Add(msg.ID.String(), cancel)
go func() { defer func() {
resCh <- perform(ctx, task, p.handler) cancel()
p.cancelations.Delete(msg.ID.String()) p.cancelations.Delete(msg.ID.String())
}() }()
// check context before starting a worker goroutine.
select { select {
case <-p.quit: case <-ctx.Done():
// time is up, quit this worker goroutine. // already canceled (e.g. deadline exceeded).
p.logger.Warn("Quitting worker. task id=%s", msg.ID) p.retryOrKill(ctx, msg, ctx.Err())
return
default:
}
resCh := make(chan error, 1)
go func() {
resCh <- p.perform(ctx, NewTask(msg.Type, msg.Payload))
}()
select {
case <-p.abort:
// time is up, push the message back to queue and quit this worker goroutine.
p.logger.Warnf("Quitting worker. task id=%s", msg.ID)
p.requeue(msg)
return
case <-ctx.Done():
p.retryOrKill(ctx, msg, ctx.Err())
return return
case resErr := <-resCh: case resErr := <-resCh:
// Note: One of three things should happen. // Note: One of three things should happen.
// 1) Done -> Removes the message from InProgress // 1) Done -> Removes the message from Active
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry // 2) Retry -> Removes the message from Active & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead // 3) Archive -> Removes the message from Active & Adds the message to archive
if resErr != nil { if resErr != nil {
if p.errHandler != nil { p.retryOrKill(ctx, msg, resErr)
p.errHandler.HandleError(task, resErr, msg.Retried, msg.Retry)
}
if msg.Retried >= msg.Retry {
p.kill(msg, resErr)
} else {
p.retry(msg, resErr)
}
return return
} }
p.markAsDone(msg) p.markAsDone(ctx, msg)
} }
}() }()
} }
} }
// restore moves all tasks from "in-progress" back to queue
// to restore all unfinished tasks.
func (p *processor) restore() {
n, err := p.rdb.RequeueAll()
if err != nil {
p.logger.Error("Could not restore unfinished tasks: %v", err)
}
if n > 0 {
p.logger.Info("Restored %d unfinished tasks back to queue", n)
}
}
func (p *processor) requeue(msg *base.TaskMessage) { func (p *processor) requeue(msg *base.TaskMessage) {
err := p.rdb.Requeue(msg) err := p.broker.Requeue(msg)
if err != nil { if err != nil {
p.logger.Error("Could not push task id=%s back to queue: %v", msg.ID, err) p.logger.Errorf("Could not push task id=%s back to queue: %v", msg.ID, err)
} else {
p.logger.Infof("Pushed task id=%s back to queue", msg.ID)
} }
} }
func (p *processor) markAsDone(msg *base.TaskMessage) { func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
err := p.rdb.Done(msg) err := p.broker.Done(msg)
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue) errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.ActiveKey(msg.Queue), err)
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Done(msg) return p.broker.Done(msg)
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
func (p *processor) retry(msg *base.TaskMessage, e error) { // SkipRetry is used as a return value from Handler.ProcessTask to indicate that
// the task should not be retried and should be archived instead.
var SkipRetry = errors.New("skip retry for the task")
func (p *processor) retryOrKill(ctx context.Context, msg *base.TaskMessage, err error) {
if p.errHandler != nil {
p.errHandler.HandleError(ctx, NewTask(msg.Type, msg.Payload), err)
}
if msg.Retried >= msg.Retry || errors.Is(err, SkipRetry) {
p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
p.archive(ctx, msg, err)
} else {
p.retry(ctx, msg, err)
}
}
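A hedged caller-side sketch (the handler and sendEmail helper are hypothetical): because retryOrKill checks errors.Is(err, SkipRetry), a handler can wrap SkipRetry to have a task archived immediately instead of retried.

func handleWelcomeEmail(ctx context.Context, t *asynq.Task) error {
	to, err := t.Payload.GetString("to")
	if err != nil {
		// A malformed payload will never succeed, so don't spend retries on it.
		return fmt.Errorf("missing recipient: %w", asynq.SkipRetry)
	}
	return sendEmail(ctx, to) // hypothetical delivery helper
}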
func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload)) d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(d) retryAt := time.Now().Add(d)
err := p.rdb.Retry(msg, retryAt, e.Error()) err := p.broker.Retry(msg, retryAt, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.RetryKey(msg.Queue))
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Retry(msg, retryAt, e.Error()) return p.broker.Retry(msg, retryAt, e.Error())
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
func (p *processor) kill(msg *base.TaskMessage, e error) { func (p *processor) archive(ctx context.Context, msg *base.TaskMessage, e error) {
p.logger.Warn("Retry exhausted for task id=%s", msg.ID) err := p.broker.Archive(msg, e.Error())
err := p.rdb.Kill(msg, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.ArchivedKey(msg.Queue))
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Kill(msg, e.Error()) return p.broker.Archive(msg, e.Error())
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
@@ -296,7 +334,7 @@ func (p *processor) queues() []string {
    }
    var names []string
    for qname, priority := range p.queueConfig {
        for i := 0; i < priority; i++ {
            names = append(names, qname)
        }
    }
@@ -308,13 +346,26 @@ func (p *processor) queues() []string {
// perform calls the handler with the given task.
// If the call returns without panic, it simply returns the value,
// otherwise, it recovers from panic and returns an error.
func (p *processor) perform(ctx context.Context, task *Task) (err error) {
    defer func() {
        if x := recover(); x != nil {
            p.logger.Errorf("recovering from panic. See the stack trace below for details:\n%s", string(debug.Stack()))
            _, file, line, ok := runtime.Caller(1) // skip the first frame (panic itself)
            if ok && strings.Contains(file, "runtime/") {
                // The panic came from the runtime, most likely due to incorrect
                // map/slice usage. The parent frame should have the real trigger.
                _, file, line, ok = runtime.Caller(2)
            }
            // Include the file and line number info in the error, if runtime.Caller returned ok.
            if ok {
                err = fmt.Errorf("panic [%s:%d]: %v", file, line, x)
            } else {
                err = fmt.Errorf("panic: %v", x)
            }
        }
    }()
    return p.handler.ProcessTask(ctx, task)
}
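As a concrete illustration of the new panic handling (not part of the diff), a handler that trips a runtime panic such as a nil-map write would now surface roughly as sketched below; the handler name and path are made up:

// buggyHandler exists only to illustrate the error format produced by perform.
func buggyHandler(ctx context.Context, t *Task) error {
    var m map[string]int
    m["count"]++ // panics with "assignment to entry in nil map"
    return nil
}

// perform recovers the panic and, because the panicking frame is inside the Go
// runtime, reports the parent (handler) frame instead, producing an error
// roughly of the form:
//
//    panic [/path/to/app/handler.go:42]: assignment to entry in nil map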
// uniq dedupes elements and returns a slice of unique names of length l.
@@ -360,16 +411,15 @@ func (x byPriority) Len() int { return len(x) }
func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority }
func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
// normalizeQueues divides priority numbers by their greatest common divisor.
func normalizeQueues(queues map[string]int) map[string]int {
    var xs []int
    for _, x := range queues {
        xs = append(xs, x)
    }
    d := gcd(xs...)
    res := make(map[string]int)
    for q, x := range queues {
        res[q] = x / d
    }
    return res
@@ -391,20 +441,3 @@ func gcd(xs ...int) int {
    }
    return res
}
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (ctx context.Context, cancel context.CancelFunc) {
ctx = context.Background()
timeout, err := time.ParseDuration(msg.Timeout)
if err == nil && timeout != 0 {
ctx, cancel = context.WithTimeout(ctx, timeout)
}
deadline, err := time.Parse(time.RFC3339, msg.Deadline)
if err == nil && !deadline.IsZero() {
ctx, cancel = context.WithDeadline(ctx, deadline)
}
if cancel == nil {
ctx, cancel = context.WithCancel(ctx)
}
return ctx, cancel
}


@@ -17,17 +17,39 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
) )
// fakeHeartbeater receives from starting and finished channels and does nothing.
func fakeHeartbeater(starting <-chan *workerInfo, finished <-chan *base.TaskMessage, done <-chan struct{}) {
for {
select {
case <-starting:
case <-finished:
case <-done:
return
}
}
}
// fakeSyncer receives from the sync channel and does nothing.
func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {
for {
select {
case <-syncCh:
case <-done:
return
}
}
}
func TestProcessorSuccessWithSingleQueue(t *testing.T) {
r := setup(t) r := setup(t)
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("send_email", nil) m1 := h.NewTaskMessage("task1", nil)
m2 := h.NewTaskMessage("gen_thumbnail", nil) m2 := h.NewTaskMessage("task2", nil)
m3 := h.NewTaskMessage("reindex", nil) m3 := h.NewTaskMessage("task3", nil)
m4 := h.NewTaskMessage("sync", nil) m4 := h.NewTaskMessage("task4", nil)
t1 := NewTask(m1.Type, m1.Payload) t1 := NewTask(m1.Type, m1.Payload)
t2 := NewTask(m2.Type, m2.Payload) t2 := NewTask(m2.Type, m2.Payload)
@@ -35,28 +57,25 @@ func TestProcessorSuccess(t *testing.T) {
t4 := NewTask(m4.Type, m4.Payload) t4 := NewTask(m4.Type, m4.Payload)
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state pending []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: []*base.TaskMessage{m1}, pending: []*base.TaskMessage{m1},
incoming: []*base.TaskMessage{m2, m3, m4}, incoming: []*base.TaskMessage{m2, m3, m4},
wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4}, wantProcessed: []*Task{t1, t2, t3, t4},
}, },
{ {
enqueued: []*base.TaskMessage{}, pending: []*base.TaskMessage{},
incoming: []*base.TaskMessage{m1}, incoming: []*base.TaskMessage{m1},
wait: time.Second,
wantProcessed: []*Task{t1}, wantProcessed: []*Task{t1},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue. h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
// instantiate a new processor // instantiate a new processor
var mu sync.Mutex var mu sync.Mutex
@@ -67,13 +86,30 @@ func TestProcessorSuccess(t *testing.T) {
processed = append(processed, task) processed = append(processed, task)
return nil return nil
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) starting := make(chan *workerInfo)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: DefaultRetryDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -81,16 +117,182 @@ func TestProcessorSuccess(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(tc.wait) time.Sleep(2 * time.Second) // wait for two second to allow all pending tasks to be processed.
if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
}
p.terminate() p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" { if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff) t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
} }
mu.Unlock()
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
} }
}
func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
var (
r = setup(t)
rdbClient = rdb.NewRDB(r)
m1 = h.NewTaskMessage("task1", nil)
m2 = h.NewTaskMessage("task2", nil)
m3 = h.NewTaskMessageWithQueue("task3", nil, "high")
m4 = h.NewTaskMessageWithQueue("task4", nil, "low")
t1 = NewTask(m1.Type, m1.Payload)
t2 = NewTask(m2.Type, m2.Payload)
t3 = NewTask(m3.Type, m3.Payload)
t4 = NewTask(m4.Type, m4.Payload)
)
tests := []struct {
pending map[string][]*base.TaskMessage
queues []string // list of queues to consume the tasks from
wantProcessed []*Task // tasks to be processed at the end
}{
{
pending: map[string][]*base.TaskMessage{
"default": {m1, m2},
"high": {m3},
"low": {m4},
},
queues: []string{"default", "high", "low"},
wantProcessed: []*Task{t1, t2, t3, t4},
},
}
for _, tc := range tests {
// Set up test case.
h.FlushDB(t, r)
h.SeedAllPendingQueues(t, r, tc.pending)
// Instantiate a new processor.
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: DefaultRetryDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: map[string]int{
"default": 2,
"high": 3,
"low": 1,
},
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{})
// Wait for two second to allow all pending tasks to be processed.
time.Sleep(2 * time.Second)
// Make sure no messages are stuck in active list.
for _, qname := range tc.queues {
if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
}
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
}
}
// https://github.com/hibiken/asynq/issues/166
func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("large_number", map[string]interface{}{"data": 111111111111111111})
t1 := NewTask(m1.Type, m1.Payload)
tests := []struct {
pending []*base.TaskMessage // initial default queue state
wantProcessed []*Task // tasks to be processed at the end
}{
{
pending: []*base.TaskMessage{m1},
wantProcessed: []*Task{t1},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
if data, err := task.Payload.GetInt("data"); err != nil {
t.Errorf("coult not get data from payload: %v", err)
} else {
t.Logf("data == %d", data)
}
processed = append(processed, task)
return nil
}
starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: DefaultRetryDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second) // wait for two second to allow all pending tasks to be processed.
if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmpopts.IgnoreUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
} }
} }
@@ -105,52 +307,74 @@ func TestProcessorRetry(t *testing.T) {
m4 := h.NewTaskMessage("sync", nil) m4 := h.NewTaskMessage("sync", nil)
errMsg := "something went wrong" errMsg := "something went wrong"
// r* is m* after retry wrappedSkipRetry := fmt.Errorf("%s:%w", errMsg, SkipRetry)
r1 := *m1
r1.ErrorMsg = errMsg
r2 := *m2
r2.ErrorMsg = errMsg
r2.Retried = m2.Retried + 1
r3 := *m3
r3.ErrorMsg = errMsg
r3.Retried = m3.Retried + 1
r4 := *m4
r4.ErrorMsg = errMsg
r4.Retried = m4.Retried + 1
now := time.Now() now := time.Now()
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state desc string // test description
pending []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
delay time.Duration // retry delay duration delay time.Duration // retry delay duration
handler Handler // task handler handler Handler // task handler
wait time.Duration // wait duration between starting and stopping processor for this test case wait time.Duration // wait duration between starting and stopping processor for this test case
wantRetry []h.ZSetEntry // tasks in retry queue at the end wantRetry []base.Z // tasks in retry queue at the end
wantDead []*base.TaskMessage // tasks in dead queue at the end wantArchived []*base.TaskMessage // tasks in archived queue at the end
wantErrCount int // number of times error handler should be called wantErrCount int // number of times error handler should be called
}{ }{
{ {
enqueued: []*base.TaskMessage{m1, m2}, desc: "Should automatically retry errored tasks",
pending: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{m3, m4}, incoming: []*base.TaskMessage{m3, m4},
delay: time.Minute, delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error { handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg) return fmt.Errorf(errMsg)
}), }),
wait: time.Second, wait: 2 * time.Second,
wantRetry: []h.ZSetEntry{ wantRetry: []base.Z{
{Msg: &r2, Score: float64(now.Add(time.Minute).Unix())}, {Message: h.TaskMessageAfterRetry(*m2, errMsg), Score: now.Add(time.Minute).Unix()},
{Msg: &r3, Score: float64(now.Add(time.Minute).Unix())}, {Message: h.TaskMessageAfterRetry(*m3, errMsg), Score: now.Add(time.Minute).Unix()},
{Msg: &r4, Score: float64(now.Add(time.Minute).Unix())}, {Message: h.TaskMessageAfterRetry(*m4, errMsg), Score: now.Add(time.Minute).Unix()},
}, },
wantDead: []*base.TaskMessage{&r1}, wantArchived: []*base.TaskMessage{h.TaskMessageWithError(*m1, errMsg)},
wantErrCount: 4, wantErrCount: 4,
}, },
{
desc: "Should skip retry errored tasks",
pending: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{},
delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return SkipRetry // return SkipRetry without wrapping
}),
wait: 2 * time.Second,
wantRetry: []base.Z{},
wantArchived: []*base.TaskMessage{
h.TaskMessageWithError(*m1, SkipRetry.Error()),
h.TaskMessageWithError(*m2, SkipRetry.Error()),
},
wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
},
{
desc: "Should skip retry errored tasks (with error wrapping)",
pending: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{},
delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return wrappedSkipRetry
}),
wait: 2 * time.Second,
wantRetry: []base.Z{},
wantArchived: []*base.TaskMessage{
h.TaskMessageWithError(*m1, wrappedSkipRetry.Error()),
h.TaskMessageWithError(*m2, wrappedSkipRetry.Error()),
},
wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
},
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue. h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
// instantiate a new processor // instantiate a new processor
delayFunc := func(n int, e error, t *Task) time.Duration { delayFunc := func(n int, e error, t *Task) time.Duration {
@@ -160,18 +384,33 @@ func TestProcessorRetry(t *testing.T) {
mu sync.Mutex // guards n mu sync.Mutex // guards n
n int // number of times error handler is called n int // number of times error handler is called
) )
errHandler := func(t *Task, err error, retried, maxRetry int) { errHandler := func(ctx context.Context, t *Task, err error) {
mu.Lock() mu.Lock()
defer mu.Unlock() defer mu.Unlock()
n++ n++
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) starting := make(chan *workerInfo)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler)) done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: delayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: ErrorHandlerFunc(errHandler),
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = tc.handler p.handler = tc.handler
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -179,22 +418,22 @@ func TestProcessorRetry(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(tc.wait) time.Sleep(tc.wait) // FIXME: This makes test flaky.
p.terminate() p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to second difference in zset score cmpOpt := h.EquateInt64Approx(1) // allow up to a second difference in zset score
gotRetry := h.GetRetryEntries(t, r) gotRetry := h.GetRetryEntries(t, r, base.DefaultQueueName)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" { if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.RetryKey(base.DefaultQueueName), diff)
} }
gotDead := h.GetDeadMessages(t, r) gotDead := h.GetArchivedMessages(t, r, base.DefaultQueueName)
if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" { if diff := cmp.Diff(tc.wantArchived, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.DeadQueue, diff) t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.ArchivedKey(base.DefaultQueueName), diff)
} }
if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%s: %q has %d tasks, want 0", tc.desc, base.ActiveKey(base.DefaultQueueName), l)
}
if n != tc.wantErrCount { if n != tc.wantErrCount {
@@ -231,9 +470,25 @@ func TestProcessorQueues(t *testing.T) {
} }
for _, tc := range tests { for _, tc := range tests {
cancelations := base.NewCancelations() starting := make(chan *workerInfo)
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false) finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, nil, ps, defaultDelayFunc, nil, cancelations, nil) done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: nil,
retryDelayFunc: DefaultRetryDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: tc.queueCfg,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
got := p.queues() got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" { if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s", t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -243,36 +498,42 @@ func TestProcessorQueues(t *testing.T) {
} }
func TestProcessorWithStrictPriority(t *testing.T) { func TestProcessorWithStrictPriority(t *testing.T) {
r := setup(t) var (
rdbClient := rdb.NewRDB(r) r = setup(t)
m1 := h.NewTaskMessage("send_email", nil) rdbClient = rdb.NewRDB(r)
m2 := h.NewTaskMessage("send_email", nil)
m3 := h.NewTaskMessage("send_email", nil)
m4 := h.NewTaskMessage("gen_thumbnail", nil)
m5 := h.NewTaskMessage("gen_thumbnail", nil)
m6 := h.NewTaskMessage("sync", nil)
m7 := h.NewTaskMessage("sync", nil)
t1 := NewTask(m1.Type, m1.Payload) m1 = h.NewTaskMessageWithQueue("task1", nil, "critical")
t2 := NewTask(m2.Type, m2.Payload) m2 = h.NewTaskMessageWithQueue("task2", nil, "critical")
t3 := NewTask(m3.Type, m3.Payload) m3 = h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := NewTask(m4.Type, m4.Payload) m4 = h.NewTaskMessageWithQueue("task4", nil, base.DefaultQueueName)
t5 := NewTask(m5.Type, m5.Payload) m5 = h.NewTaskMessageWithQueue("task5", nil, base.DefaultQueueName)
t6 := NewTask(m6.Type, m6.Payload) m6 = h.NewTaskMessageWithQueue("task6", nil, "low")
t7 := NewTask(m7.Type, m7.Payload) m7 = h.NewTaskMessageWithQueue("task7", nil, "low")
t1 = NewTask(m1.Type, m1.Payload)
t2 = NewTask(m2.Type, m2.Payload)
t3 = NewTask(m3.Type, m3.Payload)
t4 = NewTask(m4.Type, m4.Payload)
t5 = NewTask(m5.Type, m5.Payload)
t6 = NewTask(m6.Type, m6.Payload)
t7 = NewTask(m7.Type, m7.Payload)
)
defer r.Close()
tests := []struct { tests := []struct {
enqueued map[string][]*base.TaskMessage // initial queues state pending map[string][]*base.TaskMessage // initial queues state
queues []string // list of queues to consume tasks from
wait time.Duration // wait duration between starting and stopping processor for this test case wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: map[string][]*base.TaskMessage{ pending: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m4, m5}, base.DefaultQueueName: {m4, m5},
"critical": {m1, m2, m3}, "critical": {m1, m2, m3},
"low": {m6, m7}, "low": {m6, m7},
}, },
queues: []string{base.DefaultQueueName, "critical", "low"},
wait: time.Second, wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4, t5, t6, t7}, wantProcessed: []*Task{t1, t2, t3, t4, t5, t6, t7},
}, },
@@ -280,8 +541,8 @@ func TestProcessorWithStrictPriority(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
for qname, msgs := range tc.enqueued { for qname, msgs := range tc.pending {
h.SeedEnqueuedQueue(t, r, msgs, qname) h.SeedPendingQueue(t, r, msgs, qname)
} }
// instantiate a new processor // instantiate a new processor
@@ -294,32 +555,51 @@ func TestProcessorWithStrictPriority(t *testing.T) {
return nil return nil
} }
queueCfg := map[string]int{ queueCfg := map[string]int{
"critical": 3,
base.DefaultQueueName: 2, base.DefaultQueueName: 2,
"critical": 3,
"low": 1, "low": 1,
} }
// Note: Set concurrency to 1 to make sure tasks are processed one at a time. starting := make(chan *workerInfo)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/) syncCh := make(chan *syncRequest)
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: DefaultRetryDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time.
queues: queueCfg,
strictPriority: true,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
time.Sleep(tc.wait) time.Sleep(tc.wait)
// Make sure no tasks are stuck in active list.
for _, qname := range tc.queues {
if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
}
}
p.terminate() p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" { if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff) t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
} }
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
} }
} }
func TestPerform(t *testing.T) { func TestProcessorPerform(t *testing.T) {
tests := []struct { tests := []struct {
desc string desc string
handler HandlerFunc handler HandlerFunc
@@ -351,9 +631,16 @@ func TestPerform(t *testing.T) {
wantErr: true, wantErr: true,
}, },
} }
// Note: We don't need to fully initialize the processor since we are only testing
// perform method.
p := newProcessor(processorParams{
logger: testLogger,
queues: defaultQueueConfig,
})
for _, tc := range tests { for _, tc := range tests {
got := perform(context.Background(), tc.task, tc.handler) p.handler = tc.handler
got := p.perform(context.Background(), tc.task)
if !tc.wantErr && got != nil { if !tc.wantErr && got != nil {
t.Errorf("%s: perform() = %v, want nil", tc.desc, got) t.Errorf("%s: perform() = %v, want nil", tc.desc, got)
continue continue
@@ -365,84 +652,82 @@ func TestPerform(t *testing.T) {
} }
} }
func TestCreateContextWithTimeRestrictions(t *testing.T) { func TestGCD(t *testing.T) {
var (
noTimeout = time.Duration(0)
noDeadline = time.Time{}
)
tests := []struct { tests := []struct {
desc string input []int
timeout time.Duration want int
deadline time.Time
wantDeadline time.Time
}{ }{
{"only with timeout", 10 * time.Second, noDeadline, time.Now().Add(10 * time.Second)}, {[]int{6, 2, 12}, 2},
{"only with deadline", noTimeout, time.Now().Add(time.Hour), time.Now().Add(time.Hour)}, {[]int{3, 3, 3}, 3},
{"with timeout and deadline (timeout < deadline)", 10 * time.Second, time.Now().Add(time.Hour), time.Now().Add(10 * time.Second)}, {[]int{6, 3, 1}, 1},
{"with timeout and deadline (timeout > deadline)", 10 * time.Minute, time.Now().Add(30 * time.Second), time.Now().Add(30 * time.Second)}, {[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
} }
for _, tc := range tests { for _, tc := range tests {
msg := &base.TaskMessage{ got := gcd(tc.input...)
Type: "something", if got != tc.want {
ID: xid.New(), t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
Timeout: tc.timeout.String(),
Deadline: tc.deadline.Format(time.RFC3339),
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("%s: <-ctx.Done() == %v, want nothing (it should block)", tc.desc, x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("%s: ctx.Deadline() returned false, want deadline to be set", tc.desc)
}
if !cmp.Equal(tc.wantDeadline, got, cmpopts.EquateApproxTime(time.Second)) {
t.Errorf("%s: ctx.Deadline() returned %v, want %v", tc.desc, got, tc.wantDeadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
} }
} }
} }
func TestCreateContextWithoutTimeRestrictions(t *testing.T) { func TestNormalizeQueues(t *testing.T) {
msg := &base.TaskMessage{ tests := []struct {
Type: "something", input map[string]int
ID: xid.New(), want map[string]int
Timeout: time.Duration(0).String(), // zero value to indicate no timeout }{
Deadline: time.Time{}.Format(time.RFC3339), // zero value to indicate no deadline {
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
} }
ctx, cancel := createContext(msg) for _, tc := range tests {
got := normalizeQueues(tc.input)
select { if diff := cmp.Diff(tc.want, got); diff != "" {
case x := <-ctx.Done(): t.Errorf("normalizeQueues(%v) = %v, want %v; (-want, +got):\n%s",
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x) tc.input, got, tc.want, diff)
default:
} }
_, ok := ctx.Deadline()
if ok {
t.Error("ctx.Deadline() returned true, want deadline to not be set")
}
cancel()
select {
case <-ctx.Done():
default:
t.Error("ctx.Done() blocked, want it to be non-blocking")
} }
} }

recoverer.go (new file, 101 lines)

@@ -0,0 +1,101 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"fmt"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
type recoverer struct {
logger *log.Logger
broker base.Broker
retryDelayFunc RetryDelayFunc
// channel to communicate back to the long running "recoverer" goroutine.
done chan struct{}
// list of queues to check for deadline.
queues []string
// poll interval.
interval time.Duration
}
type recovererParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
retryDelayFunc RetryDelayFunc
}
func newRecoverer(params recovererParams) *recoverer {
return &recoverer{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
interval: params.interval,
retryDelayFunc: params.retryDelayFunc,
}
}
func (r *recoverer) terminate() {
r.logger.Debug("Recoverer shutting down...")
// Signal the recoverer goroutine to stop polling.
r.done <- struct{}{}
}
func (r *recoverer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(r.interval)
for {
select {
case <-r.done:
r.logger.Debug("Recoverer done")
timer.Stop()
return
case <-timer.C:
// Get all tasks which have expired 30 seconds ago or earlier.
deadline := time.Now().Add(-30 * time.Second)
msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
if err != nil {
r.logger.Warn("recoverer: could not list deadline exceeded tasks")
continue
}
const errMsg = "deadline exceeded" // TODO: better error message
for _, msg := range msgs {
if msg.Retried >= msg.Retry {
r.archive(msg, errMsg)
} else {
r.retry(msg, errMsg)
}
}
}
}
}()
}
func (r *recoverer) retry(msg *base.TaskMessage, errMsg string) {
delay := r.retryDelayFunc(msg.Retried, fmt.Errorf(errMsg), NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(delay)
if err := r.broker.Retry(msg, retryAt, errMsg); err != nil {
r.logger.Warnf("recoverer: could not retry deadline exceeded task: %v", err)
}
}
func (r *recoverer) archive(msg *base.TaskMessage, errMsg string) {
if err := r.broker.Archive(msg, errMsg); err != nil {
r.logger.Warnf("recoverer: could not move task to archive: %v", err)
}
}

recoverer_test.go (new file, 269 lines)

@@ -0,0 +1,269 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestRecoverer(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
t1 := h.NewTaskMessageWithQueue("task1", nil, "default")
t2 := h.NewTaskMessageWithQueue("task2", nil, "default")
t3 := h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := h.NewTaskMessageWithQueue("task4", nil, "default")
t4.Retried = t4.Retry // t4 has reached its max retry count
now := time.Now()
oneHourFromNow := now.Add(1 * time.Hour)
fiveMinutesFromNow := now.Add(5 * time.Minute)
fiveMinutesAgo := now.Add(-5 * time.Minute)
oneHourAgo := now.Add(-1 * time.Hour)
tests := []struct {
desc string
inProgress map[string][]*base.TaskMessage
deadlines map[string][]base.Z
retry map[string][]base.Z
archived map[string][]base.Z
wantActive map[string][]*base.TaskMessage
wantDeadlines map[string][]base.Z
wantRetry map[string][]*base.TaskMessage
wantArchived map[string][]*base.TaskMessage
}{
{
desc: "with one active task",
inProgress: map[string][]*base.TaskMessage{
"default": {t1},
},
deadlines: map[string][]base.Z{
"default": {{Message: t1, Score: fiveMinutesAgo.Unix()}},
},
retry: map[string][]base.Z{
"default": {},
},
archived: map[string][]base.Z{
"default": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
},
},
{
desc: "with a task with max-retry reached",
inProgress: map[string][]*base.TaskMessage{
"default": {t4},
"critical": {},
},
deadlines: map[string][]base.Z{
"default": {{Message: t4, Score: fiveMinutesAgo.Unix()}},
"critical": {},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {h.TaskMessageWithError(*t4, "deadline exceeded")},
"critical": {},
},
},
{
desc: "with multiple active tasks, and one expired",
inProgress: map[string][]*base.TaskMessage{
"default": {t1, t2},
"critical": {t3},
},
deadlines: map[string][]base.Z{
"default": {
{Message: t1, Score: oneHourAgo.Unix()},
{Message: t2, Score: fiveMinutesFromNow.Unix()},
},
"critical": {
{Message: t3, Score: oneHourFromNow.Unix()},
},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {t3},
},
wantDeadlines: map[string][]base.Z{
"default": {{Message: t2, Score: fiveMinutesFromNow.Unix()}},
"critical": {{Message: t3, Score: oneHourFromNow.Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
{
desc: "with multiple expired active tasks",
inProgress: map[string][]*base.TaskMessage{
"default": {t1, t2},
"critical": {t3},
},
deadlines: map[string][]base.Z{
"default": {
{Message: t1, Score: oneHourAgo.Unix()},
{Message: t2, Score: oneHourFromNow.Unix()},
},
"critical": {
{Message: t3, Score: fiveMinutesAgo.Unix()},
},
},
retry: map[string][]base.Z{
"default": {},
"cricial": {},
},
archived: map[string][]base.Z{
"default": {},
"cricial": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {{Message: t2, Score: oneHourFromNow.Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
"critical": {h.TaskMessageAfterRetry(*t3, "deadline exceeded")},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
{
desc: "with empty active queue",
inProgress: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
deadlines: map[string][]base.Z{
"default": {},
"critical": {},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedAllActiveQueues(t, r, tc.inProgress)
h.SeedAllDeadlines(t, r, tc.deadlines)
h.SeedAllRetryQueues(t, r, tc.retry)
h.SeedAllArchivedQueues(t, r, tc.archived)
recoverer := newRecoverer(recovererParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "critical"},
interval: 1 * time.Second,
retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
})
var wg sync.WaitGroup
recoverer.start(&wg)
time.Sleep(2 * time.Second)
recoverer.terminate()
for qname, want := range tc.wantActive {
gotActive := h.GetActiveMessages(t, r, qname)
if diff := cmp.Diff(want, gotActive, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.ActiveKey(qname), diff)
}
}
for qname, want := range tc.wantDeadlines {
gotDeadlines := h.GetDeadlinesEntries(t, r, qname)
if diff := cmp.Diff(want, gotDeadlines, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.DeadlinesKey(qname), diff)
}
}
for qname, want := range tc.wantRetry {
gotRetry := h.GetRetryMessages(t, r, qname)
if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
}
}
for qname, want := range tc.wantArchived {
gotDead := h.GetArchivedMessages(t, r, qname)
if diff := cmp.Diff(want, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.ArchivedKey(qname), diff)
}
}
}
}


@@ -5,65 +5,268 @@
package asynq
import (
    "fmt"
    "os"
    "sync"
    "time"
    "github.com/go-redis/redis/v7"
    "github.com/google/uuid"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/log"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/robfig/cron/v3"
)
// A Scheduler kicks off tasks at regular intervals based on the user defined schedule.
type Scheduler struct {
    id       string
    status   *base.ServerStatus
    logger   *log.Logger
    client   *Client
    rdb      *rdb.RDB
    cron     *cron.Cron
    location *time.Location
    done     chan struct{}
    wg       sync.WaitGroup
    errHandler func(task *Task, opts []Option, err error)
    // idmap maps Scheduler's entry ID to cron.EntryID
    // to avoid using cron.EntryID as the public API of
    // the Scheduler.
    idmap map[string]cron.EntryID
}
// NewScheduler returns a new Scheduler instance given the redis connection option.
// The parameter opts is optional, defaults will be used if opts is set to nil.
func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
    c, ok := r.MakeRedisClient().(redis.UniversalClient)
    if !ok {
        panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
    }
    if opts == nil {
        opts = &SchedulerOpts{}
    }
    logger := log.NewLogger(opts.Logger)
    loglevel := opts.LogLevel
    if loglevel == level_unspecified {
        loglevel = InfoLevel
    }
    logger.SetLevel(toInternalLogLevel(loglevel))
    loc := opts.Location
    if loc == nil {
        loc = time.UTC
    }
    return &Scheduler{
        id:         generateSchedulerID(),
        status:     base.NewServerStatus(base.StatusIdle),
        logger:     logger,
        client:     NewClient(r),
        rdb:        rdb.NewRDB(c),
        cron:       cron.New(cron.WithLocation(loc)),
        location:   loc,
        done:       make(chan struct{}),
        errHandler: opts.EnqueueErrorHandler,
        idmap:      make(map[string]cron.EntryID),
    }
}
func generateSchedulerID() string {
    host, err := os.Hostname()
    if err != nil {
        host = "unknown-host"
    }
    return fmt.Sprintf("%s:%d:%v", host, os.Getpid(), uuid.New())
}
// SchedulerOpts specifies scheduler options.
type SchedulerOpts struct {
    // Logger specifies the logger used by the scheduler instance.
    //
    // If unset, the default logger is used.
Logger Logger
// LogLevel specifies the minimum log level to enable.
//
// If unset, InfoLevel is used by default.
LogLevel LogLevel
// Location specifies the time zone location.
//
// If unset, the UTC time zone (time.UTC) is used.
Location *time.Location
// EnqueueErrorHandler gets called when scheduler cannot enqueue a registered task
// due to an error.
EnqueueErrorHandler func(task *Task, opts []Option, err error)
}
// enqueueJob encapsulates the job of enqueing a task and recording the event.
type enqueueJob struct {
id uuid.UUID
cronspec string
task *Task
opts []Option
location *time.Location
logger *log.Logger
client *Client
rdb *rdb.RDB
errHandler func(task *Task, opts []Option, err error)
}
func (j *enqueueJob) Run() {
res, err := j.client.Enqueue(j.task, j.opts...)
if err != nil {
j.logger.Errorf("scheduler could not enqueue a task %+v: %v", j.task, err)
if j.errHandler != nil {
j.errHandler(j.task, j.opts, err)
}
return
}
j.logger.Debugf("scheduler enqueued a task: %+v", res)
event := &base.SchedulerEnqueueEvent{
TaskID: res.ID,
EnqueuedAt: res.EnqueuedAt.In(j.location),
}
err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
if err != nil {
j.logger.Errorf("scheduler could not record enqueue event of enqueued task %+v: %v", j.task, err)
}
}
// Register registers a task to be enqueued on the given schedule specified by the cronspec.
// It returns an ID of the newly registered entry.
func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entryID string, err error) {
job := &enqueueJob{
id: uuid.New(),
cronspec: cronspec,
task: task,
opts: opts,
location: s.location,
client: s.client,
rdb: s.rdb,
logger: s.logger,
errHandler: s.errHandler,
}
cronID, err := s.cron.AddJob(cronspec, job)
if err != nil {
return "", err
}
s.idmap[job.id.String()] = cronID
return job.id.String(), nil
}
// Unregister removes a registered entry by entry ID.
// Unregister returns a non-nil error if no entries were found for the given entryID.
func (s *Scheduler) Unregister(entryID string) error {
cronID, ok := s.idmap[entryID]
if !ok {
return fmt.Errorf("asynq: no scheduler entry found")
}
s.cron.Remove(cronID)
return nil
}
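For orientation, a minimal end-to-end sketch of the Scheduler API introduced here might look like the following; the Redis address, cron spec, task type, and queue name are placeholders, not values from this change:

package main

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    scheduler := asynq.NewScheduler(
        asynq.RedisClientOpt{Addr: "localhost:6379"},
        &asynq.SchedulerOpts{}, // nil is also accepted; defaults are used
    )
    // Enqueue a report-generation task every night at 2am.
    task := asynq.NewTask("report:generate", nil)
    entryID, err := scheduler.Register("0 2 * * *", task, asynq.Queue("reports"))
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("registered entry: %s", entryID)
    // Run blocks until an exit signal is received, then stops the scheduler.
    if err := scheduler.Run(); err != nil {
        log.Fatal(err)
    }
}

An entry registered this way can later be removed with scheduler.Unregister(entryID).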
// Run starts the scheduler until an os signal to exit the program is received.
// It returns an error if scheduler is already running or has been stopped.
func (s *Scheduler) Run() error {
if err := s.Start(); err != nil {
return err
}
s.waitForSignals()
return s.Stop()
}
// Start starts the scheduler.
// It returns an error if the scheduler is already running or has been stopped.
func (s *Scheduler) Start() error {
switch s.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the scheduler is already running")
case base.StatusStopped:
return fmt.Errorf("asynq: the scheduler has already been stopped")
}
s.logger.Info("Scheduler starting")
s.logger.Infof("Scheduler timezone is set to %v", s.location)
s.cron.Start()
s.wg.Add(1)
go s.runHeartbeater()
s.status.Set(base.StatusRunning)
return nil
}
// Stop stops the scheduler.
// It returns an error if the scheduler is not currently running.
func (s *Scheduler) Stop() error {
if s.status.Get() != base.StatusRunning {
return fmt.Errorf("asynq: the scheduler is not running")
}
s.logger.Info("Scheduler shutting down")
close(s.done) // signal heartbeater to stop
ctx := s.cron.Stop()
<-ctx.Done()
s.wg.Wait()
s.clearHistory()
s.client.Close()
s.rdb.Close()
s.status.Set(base.StatusStopped)
s.logger.Info("Scheduler stopped")
return nil
}
func (s *Scheduler) runHeartbeater() {
defer s.wg.Done()
ticker := time.NewTicker(5 * time.Second)
    for {
        select {
        case <-s.done:
            s.logger.Debugf("Scheduler heartbeater shutting down")
            s.rdb.ClearSchedulerEntries(s.id)
            return
        case <-ticker.C:
            s.beat()
        }
    }
}
// beat writes a snapshot of entries to redis.
func (s *Scheduler) beat() {
    var entries []*base.SchedulerEntry
for _, entry := range s.cron.Entries() {
job := entry.Job.(*enqueueJob)
e := &base.SchedulerEntry{
ID: job.id.String(),
Spec: job.cronspec,
Type: job.task.Type,
Payload: job.task.Payload.data,
Opts: stringifyOptions(job.opts),
Next: entry.Next,
Prev: entry.Prev,
}
entries = append(entries, e)
}
s.logger.Debugf("Writing entries %v", entries)
if err := s.rdb.WriteSchedulerEntries(s.id, entries, 5*time.Second); err != nil {
s.logger.Warnf("Scheduler could not write heartbeat data: %v", err)
}
}
func stringifyOptions(opts []Option) []string {
var res []string
for _, opt := range opts {
res = append(res, opt.String())
}
return res
}
func (s *Scheduler) clearHistory() {
for _, entry := range s.cron.Entries() {
job := entry.Job.(*enqueueJob)
if err := s.rdb.ClearSchedulerHistory(job.id.String()); err != nil {
s.logger.Warnf("Could not clear scheduler history for entry %q: %v", job.id.String(), err)
}
    }
}


@@ -10,84 +10,153 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest" "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
) )
func TestScheduler(t *testing.T) { func TestSchedulerRegister(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(testLogger, rdbClient, pollInterval, defaultQueueConfig)
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)
t4 := h.NewTaskMessage("sync", nil)
now := time.Now()
tests := []struct { tests := []struct {
initScheduled []h.ZSetEntry // scheduled queue initial state cronspec string
initRetry []h.ZSetEntry // retry queue initial state task *Task
initQueue []*base.TaskMessage // default queue initial state opts []Option
wait time.Duration // wait duration before checking for final state wait time.Duration
wantScheduled []*base.TaskMessage // schedule queue final state queue string
wantRetry []*base.TaskMessage // retry queue final state want []*base.TaskMessage
wantQueue []*base.TaskMessage // default queue final state
}{ }{
{ {
initScheduled: []h.ZSetEntry{ cronspec: "@every 3s",
{Msg: t1, Score: float64(now.Add(time.Hour).Unix())}, task: NewTask("task1", nil),
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())}, opts: []Option{MaxRetry(10)},
}, wait: 10 * time.Second,
initRetry: []h.ZSetEntry{ queue: "default",
{Msg: t3, Score: float64(time.Now().Add(-500 * time.Millisecond).Unix())}, want: []*base.TaskMessage{
}, {
initQueue: []*base.TaskMessage{t4}, Type: "task1",
wait: pollInterval * 2, Payload: nil,
wantScheduled: []*base.TaskMessage{t1}, Retry: 10,
wantRetry: []*base.TaskMessage{}, Timeout: int64(defaultTimeout.Seconds()),
wantQueue: []*base.TaskMessage{t2, t3, t4}, Queue: "default",
}, },
{ {
initScheduled: []h.ZSetEntry{ Type: "task1",
{Msg: t1, Score: float64(now.Unix())}, Payload: nil,
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())}, Retry: 10,
{Msg: t3, Score: float64(now.Add(-500 * time.Millisecond).Unix())}, Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
}, },
initRetry: []h.ZSetEntry{},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
wantScheduled: []*base.TaskMessage{},
wantRetry: []*base.TaskMessage{},
wantQueue: []*base.TaskMessage{t1, t2, t3, t4},
}, },
} }
r := setup(t)
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. scheduler := NewScheduler(getRedisConnOpt(t), nil)
h.SeedScheduledQueue(t, r, tc.initScheduled) // initialize scheduled queue if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
h.SeedRetryQueue(t, r, tc.initRetry) // initialize retry queue t.Fatal(err)
h.SeedEnqueuedQueue(t, r, tc.initQueue) // initialize default queue }
var wg sync.WaitGroup if err := scheduler.Start(); err != nil {
s.start(&wg) t.Fatal(err)
}
time.Sleep(tc.wait) time.Sleep(tc.wait)
s.terminate() if err := scheduler.Stop(); err != nil {
t.Fatal(err)
gotScheduled := h.GetScheduledMessages(t, r)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.ScheduledQueue, diff)
} }
gotRetry := h.GetRetryMessages(t, r) got := asynqtest.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" { if diff := cmp.Diff(tc.want, got, asynqtest.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
} }
}
gotEnqueued := h.GetEnqueuedMessages(t, r) }
if diff := cmp.Diff(tc.wantQueue, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.DefaultQueue, diff) func TestSchedulerWhenRedisDown(t *testing.T) {
var (
mu sync.Mutex
counter int
)
errorHandler := func(task *Task, opts []Option, err error) {
mu.Lock()
counter++
mu.Unlock()
}
// Connect to non-existent redis instance to simulate a redis server being down.
scheduler := NewScheduler(
RedisClientOpt{Addr: ":9876"},
&SchedulerOpts{EnqueueErrorHandler: errorHandler},
)
task := NewTask("test", nil)
if _, err := scheduler.Register("@every 3s", task); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
// Scheduler should attempt to enqueue the task three times (every 3s).
time.Sleep(10 * time.Second)
if err := scheduler.Stop(); err != nil {
t.Fatal(err)
}
mu.Lock()
if counter != 3 {
t.Errorf("EnqueueErrorHandler was called %d times, want 3", counter)
}
mu.Unlock()
}
func TestSchedulerUnregister(t *testing.T) {
tests := []struct {
cronspec string
task *Task
opts []Option
wait time.Duration
queue string
}{
{
cronspec: "@every 3s",
task: NewTask("task1", nil),
opts: []Option{MaxRetry(10)},
wait: 10 * time.Second,
queue: "default",
},
}
r := setup(t)
for _, tc := range tests {
scheduler := NewScheduler(getRedisConnOpt(t), nil)
entryID, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
if err := scheduler.Unregister(entryID); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
time.Sleep(tc.wait)
if err := scheduler.Stop(); err != nil {
t.Fatal(err)
}
got := asynqtest.GetPendingMessages(t, r, tc.queue)
if len(got) != 0 {
t.Errorf("%d tasks were enqueued, want zero", len(got))
} }
} }
} }


@@ -15,7 +15,7 @@ import (
// ServeMux is a multiplexer for asynchronous tasks.
// It matches the type of each task against a list of registered patterns
// and calls the handler for the pattern that most closely matches the
// task's type name.
//
// Longer patterns take precedence over shorter ones, so that if there are
// handlers registered for both "images" and "images:thumbnails",
@@ -26,6 +26,7 @@ type ServeMux struct {
    mu  sync.RWMutex
    m   map[string]muxEntry
    es  []muxEntry // slice of entries sorted from longest to shortest.
    mws []MiddlewareFunc
}
type muxEntry struct {
@@ -33,6 +34,11 @@ type muxEntry struct {
    pattern string
}
// MiddlewareFunc is a function which receives an asynq.Handler and returns another asynq.Handler.
// Typically, the returned handler is a closure which does something with the context and task passed
// to it, and then calls the handler passed as parameter to the MiddlewareFunc.
type MiddlewareFunc func(Handler) Handler
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux {
    return new(ServeMux)
@@ -60,6 +66,9 @@ func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
    if h == nil {
        h, pattern = NotFoundHandler(), ""
    }
    for i := len(mux.mws) - 1; i >= 0; i-- {
        h = mux.mws[i](h)
    }
    return h, pattern
}
@@ -130,6 +139,16 @@ func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *T
    mux.Handle(pattern, HandlerFunc(handler))
}
// Use appends a MiddlewareFunc to the chain.
// Middlewares are executed in the order that they are applied to the ServeMux.
func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
mux.mu.Lock()
defer mux.mu.Unlock()
for _, fn := range mws {
mux.mws = append(mux.mws, fn)
}
}
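To illustrate the new middleware support, here is a hedged sketch of a logging middleware attached with Use; the logging behavior, function names, and task types are invented for this example:

package tasks

import (
    "context"
    "log"
    "time"

    "github.com/hibiken/asynq"
)

// loggingMiddleware is a hypothetical middleware that times each task.
func loggingMiddleware(next asynq.Handler) asynq.Handler {
    return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
        start := time.Now()
        err := next.ProcessTask(ctx, t)
        log.Printf("task type=%s took %v (err=%v)", t.Type, time.Since(start), err)
        return err
    })
}

func newMux() *asynq.ServeMux {
    mux := asynq.NewServeMux()
    mux.Use(loggingMiddleware) // wraps every handler registered on this mux
    return mux
}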
// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
    return fmt.Errorf("handler not found for task %q", task.Type)


@@ -7,9 +7,12 @@ package asynq
import (
    "context"
    "testing"
    "github.com/google/go-cmp/cmp"
)
var called string // identity of the handler that was called.
var invoked []string // list of middlewares in the order they were invoked.
// makeFakeHandler returns a handler that updates the global called variable // makeFakeHandler returns a handler that updates the global called variable
// to the given identity. // to the given identity.
@@ -20,6 +23,17 @@ func makeFakeHandler(identity string) Handler {
    })
}
// makeFakeMiddleware returns a middleware function that appends the given identity
// to the global invoked slice.
func makeFakeMiddleware(identity string) MiddlewareFunc {
return func(next Handler) Handler {
return HandlerFunc(func(ctx context.Context, t *Task) error {
invoked = append(invoked, identity)
return next.ProcessTask(ctx, t)
})
}
}
// A list of pattern, handler pairs that are registered with mux.
var serveMuxRegister = []struct {
    pattern string
@@ -114,3 +128,43 @@ func TestServeMuxNotFound(t *testing.T) {
} }
} }
} }
var middlewareTests = []struct {
typename string // task's type name
middlewares []string // middlewares to use. They should be called in this order.
want string // identifier of the handler that should be called
}{
{"email:signup", []string{"logging", "expiration"}, "signup email handler"},
{"csv:export", []string{}, "csv export handler"},
{"email:daily", []string{"expiration", "logging"}, "default email handler"},
}
func TestServeMuxMiddlewares(t *testing.T) {
for _, tc := range middlewareTests {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
var mws []MiddlewareFunc
for _, s := range tc.middlewares {
mws = append(mws, makeFakeMiddleware(s))
}
mux.Use(mws...)
invoked = []string{} // reset to empty slice
called = "" // reset to zero value
task := NewTask(tc.typename, nil)
if err := mux.ProcessTask(context.Background(), task); err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(invoked, tc.middlewares); diff != "" {
t.Errorf("invoked middlewares were %v, want %v", invoked, tc.middlewares)
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
}
}
}

server.go (new file, 513 lines)

@@ -0,0 +1,513 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"errors"
"fmt"
"math"
"math/rand"
"runtime"
"strings"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
// Server is responsible for managing the task processing.
//
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, the server will schedule it for a retry. // If the processing of a task is unsuccessful, the server will schedule it for a retry.
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// If a task exhausts its retries, it will be moved to the archive and
// will be kept in the archive for some time until a certain condition is met
// (e.g., archive size reaches a certain limit, or the task has been in the
// archive for a certain amount of time).
type Server struct {
logger *log.Logger
broker base.Broker
status *base.ServerStatus
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
forwarder *forwarder
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
recoverer *recoverer
healthchecker *healthchecker
}
// Config specifies the server's background-task processing behavior.
type Config struct {
// Maximum number of tasks to process concurrently.
//
// If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the current process.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
RetryDelayFunc RetryDelayFunc
// List of queues to process with given priority value. Keys are the names of the
// queues and values are the associated priority values.
//
// If set to nil or not specified, the server will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
//
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
//
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority are processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
//
// func reportError(ctx context.Context, task *asynq.Task, err error) {
// retried, _ := asynq.GetRetryCount(ctx)
// maxRetry, _ := asynq.GetMaxRetry(ctx)
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance.
//
// If unset, default logger is used.
Logger Logger
// LogLevel specifies the minimum log level to enable.
//
// If unset, InfoLevel is used by default.
LogLevel LogLevel
// ShutdownTimeout specifies the duration to wait to let workers finish their tasks
// before forcing them to abort when stopping the server.
//
// If unset or zero, default timeout of 8 seconds is used.
ShutdownTimeout time.Duration
// HealthCheckFunc is called periodically with any errors encountered during ping to the
// connected redis server.
HealthCheckFunc func(error)
// HealthCheckInterval specifies the interval between healthchecks.
//
// If unset or zero, the interval is set to 15 seconds.
HealthCheckInterval time.Duration
}
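To make the options above concrete, a server wired up with weighted queues, an error handler, and a longer shutdown timeout could look like the sketch below. The queue names, Redis address, and notifyOps reporting hook are assumptions; only the field names come from the Config struct above.

```go
srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
	Concurrency: 20,
	Queues: map[string]int{
		"critical": 6, // ~60% of processing time when all queues are non-empty
		"default":  3, // ~30%
		"low":      1, // ~10%
	},
	ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) {
		notifyOps(fmt.Sprintf("task %q failed: %v", task.Type, err)) // notifyOps is hypothetical
	}),
	ShutdownTimeout: 30 * time.Second,
	LogLevel:        asynq.InfoLevel,
})
```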
// An ErrorHandler handles an error that occurred during task processing.
type ErrorHandler interface {
HandleError(ctx context.Context, task *Task, err error)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as an ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is an ErrorHandler that calls f.
type ErrorHandlerFunc func(ctx context.Context, task *Task, err error)
// HandleError calls fn(ctx, task, err)
func (fn ErrorHandlerFunc) HandleError(ctx context.Context, task *Task, err error) {
fn(ctx, task, err)
}
// RetryDelayFunc calculates the retry delay duration for a failed task given
// the retry count, error, and the task.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
type RetryDelayFunc func(n int, e error, t *Task) time.Duration
// Logger supports logging at various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(args ...interface{})
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(args ...interface{})
}
// LogLevel represents logging level.
//
// It satisfies flag.Value interface.
type LogLevel int32
const (
// Note: reserving value zero to differentiate unspecified case.
level_unspecified LogLevel = iota
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the flag.Value interface.
func (l *LogLevel) String() string {
switch *l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warn"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", *l))
}
// Set is part of the flag.Value interface.
func (l *LogLevel) Set(val string) error {
switch strings.ToLower(val) {
case "debug":
*l = DebugLevel
case "info":
*l = InfoLevel
case "warn", "warning":
*l = WarnLevel
case "error":
*l = ErrorLevel
case "fatal":
*l = FatalLevel
default:
return fmt.Errorf("asynq: unsupported log level %q", val)
}
return nil
}
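Because LogLevel implements flag.Value, it can be passed directly to the standard flag package. A minimal sketch, assuming the usual "flag" import and that the parsed value feeds into Config:

```go
var level asynq.LogLevel = asynq.InfoLevel
flag.Var(&level, "loglevel", "minimum log level (debug, info, warn, error, fatal)")
flag.Parse()
cfg := asynq.Config{LogLevel: level}
```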
func toInternalLogLevel(l LogLevel) log.Level {
switch l {
case DebugLevel:
return log.DebugLevel
case InfoLevel:
return log.InfoLevel
case WarnLevel:
return log.WarnLevel
case ErrorLevel:
return log.ErrorLevel
case FatalLevel:
return log.FatalLevel
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", l))
}
// DefaultRetryDelayFunc is the default RetryDelayFunc used if one is not specified in Config.
// It uses exponential back-off strategy to calculate the retry delay.
func DefaultRetryDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
// Formula taken from https://github.com/mperham/sidekiq.
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
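For a sense of scale, with this formula the first retry (n=1) waits roughly 16 to 74 seconds and the delay grows polynomially with the retry count. Applications that want different behavior can supply any function matching the RetryDelayFunc signature; a constant back-off sketch:

```go
// Sketch: always wait 30 seconds between retries, regardless of the error or retry count.
func fixedDelay(n int, e error, t *asynq.Task) time.Duration {
	return 30 * time.Second
}

// Wired in via the config: asynq.Config{RetryDelayFunc: fixedDelay}
```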
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
const (
defaultShutdownTimeout = 8 * time.Second
defaultHealthCheckInterval = 15 * time.Second
)
// NewServer returns a new Server given a redis connection option
// and background processing configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server {
c, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
}
n := cfg.Concurrency
if n < 1 {
n = runtime.NumCPU()
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = DefaultRetryDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
var qnames []string
for q := range queues {
qnames = append(qnames, q)
}
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout
}
healthcheckInterval := cfg.HealthCheckInterval
if healthcheckInterval == 0 {
healthcheckInterval = defaultHealthCheckInterval
}
logger := log.NewLogger(cfg.Logger)
loglevel := cfg.LogLevel
if loglevel == level_unspecified {
loglevel = InfoLevel
}
logger.SetLevel(toInternalLogLevel(loglevel))
rdb := rdb.NewRDB(c)
starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
status := base.NewServerStatus(base.StatusIdle)
cancels := base.NewCancelations()
syncer := newSyncer(syncerParams{
logger: logger,
requestsCh: syncCh,
interval: 5 * time.Second,
})
heartbeater := newHeartbeater(heartbeaterParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
status: status,
starting: starting,
finished: finished,
})
forwarder := newForwarder(forwarderParams{
logger: logger,
broker: rdb,
queues: qnames,
interval: 5 * time.Second,
})
subscriber := newSubscriber(subscriberParams{
logger: logger,
broker: rdb,
cancelations: cancels,
})
processor := newProcessor(processorParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
syncCh: syncCh,
cancelations: cancels,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
errHandler: cfg.ErrorHandler,
shutdownTimeout: shutdownTimeout,
starting: starting,
finished: finished,
})
recoverer := newRecoverer(recovererParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
queues: qnames,
interval: 1 * time.Minute,
})
healthchecker := newHealthChecker(healthcheckerParams{
logger: logger,
broker: rdb,
interval: healthcheckInterval,
healthcheckFunc: cfg.HealthCheckFunc,
})
return &Server{
logger: logger,
broker: rdb,
status: status,
forwarder: forwarder,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
recoverer: recoverer,
healthchecker: healthchecker,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay.
// One exception to this rule is when ProcessTask returns SkipRetry error.
// If the returned error is SkipRetry or the error wraps SkipRetry, retry is
// skipped and the task will be archived instead.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
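In practice a handler is usually an ordinary function adapted with HandlerFunc; the sketch below assumes a hypothetical sendWelcomeEmail helper and an "email:welcome" task type.

```go
func handleWelcomeEmail(ctx context.Context, t *asynq.Task) error {
	if err := sendWelcomeEmail(ctx, t); err != nil {
		// A plain error schedules a retry; wrap asynq.SkipRetry to archive instead.
		return err
	}
	return nil
}

// Registration on a mux: mux.HandleFunc("email:welcome", handleWelcomeEmail)
```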
// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Run(handler Handler) error {
if err := srv.Start(handler); err != nil {
return err
}
srv.waitForSignals()
srv.Stop()
return nil
}
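Putting the pieces together, a minimal worker program built on Run might look like this sketch; the Redis address and task type are assumptions.

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		asynq.Config{Concurrency: 10},
	)
	mux := asynq.NewServeMux()
	mux.HandleFunc("email:welcome", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing task %q", t.Type)
		return nil
	})
	// Run blocks until SIGTERM or SIGINT, then shuts down gracefully.
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```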
// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers, up to the concurrency
// level specified at initialization time.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Start(handler Handler) error {
if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler")
}
switch srv.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the server is already running")
case base.StatusStopped:
return ErrServerStopped
}
srv.status.Set(base.StatusRunning)
srv.processor.handler = handler
srv.logger.Info("Starting processing")
srv.heartbeater.start(&srv.wg)
srv.healthchecker.start(&srv.wg)
srv.subscriber.start(&srv.wg)
srv.syncer.start(&srv.wg)
srv.recoverer.start(&srv.wg)
srv.forwarder.start(&srv.wg)
srv.processor.start(&srv.wg)
return nil
}
// Stop stops the worker server.
// It gracefully closes all active workers. The server waits for
// active workers to finish processing tasks for the duration specified in Config.ShutdownTimeout.
// If a worker does not finish processing a task within the timeout, the task is pushed back to Redis.
func (srv *Server) Stop() {
switch srv.status.Get() {
case base.StatusIdle, base.StatusStopped:
// server is not running, do nothing and return.
return
}
srv.logger.Info("Starting graceful shutdown")
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels)
srv.forwarder.terminate()
srv.processor.terminate()
srv.recoverer.terminate()
srv.syncer.terminate()
srv.subscriber.terminate()
srv.healthchecker.terminate()
srv.heartbeater.terminate()
srv.wg.Wait()
srv.broker.Close()
srv.status.Set(base.StatusStopped)
srv.logger.Info("Exiting")
}
// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
srv.logger.Info("Stopping processor")
srv.processor.stop()
srv.status.Set(base.StatusQuiet)
srv.logger.Info("Processor stopped")
}

server_test.go (new file, 239 lines)

@@ -0,0 +1,239 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"syscall"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
"go.uber.org/goleak"
)
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
c := NewClient(redisConnOpt)
defer c.Close()
srv := NewServer(redisConnOpt, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 456}), ProcessIn(1*time.Hour))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
srv.Stop()
}
func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
done := make(chan struct{})
// Make sure server exits when receiving TERM signal.
go func() {
time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{}
}()
go func() {
select {
case <-time.After(10 * time.Second):
panic("server did not stop after receiving TERM signal")
case <-done:
}
}()
mux := NewServeMux()
if err := srv.Run(mux); err != nil {
t.Fatal(err)
}
}
func TestServerErrServerStopped(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
srv.Stop()
err := srv.Start(handler)
if err != ErrServerStopped {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
}
}
func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
err := srv.Start(nil)
if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop()
}
}
func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
err := srv.Start(handler)
if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error")
}
srv.Stop()
}
func TestServerWithRedisDown(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
testBroker.Sleep()
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestServerWithFlakyBroker(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
redisConnOpt := getRedisConnOpt(t)
srv := NewServer(redisConnOpt, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
c := NewClient(redisConnOpt)
h := func(ctx context.Context, task *Task) error {
// force task retry.
if task.Type == "bad_task" {
return fmt.Errorf("could not process %q", task.Type)
}
time.Sleep(2 * time.Second)
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
for i := 0; i < 10; i++ {
_, err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("bad_task", nil))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("scheduled", nil), ProcessIn(time.Duration(i)*time.Second))
if err != nil {
t.Fatal(err)
}
}
// simulate redis going down.
testBroker.Sleep()
time.Sleep(3 * time.Second)
// simulate redis comes back online.
testBroker.Wakeup()
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestLogLevel(t *testing.T) {
tests := []struct {
flagVal string
want LogLevel
wantStr string
}{
{"debug", DebugLevel, "debug"},
{"Info", InfoLevel, "info"},
{"WARN", WarnLevel, "warn"},
{"warning", WarnLevel, "warn"},
{"Error", ErrorLevel, "error"},
{"fatal", FatalLevel, "fatal"},
}
for _, tc := range tests {
level := new(LogLevel)
if err := level.Set(tc.flagVal); err != nil {
t.Fatal(err)
}
if *level != tc.want {
t.Errorf("Set(%q): got %v, want %v", tc.flagVal, level, &tc.want)
continue
}
if got := level.String(); got != tc.wantStr {
t.Errorf("String() returned %q, want %q", got, tc.wantStr)
}
}
}

signals_unix.go (new file, 37 lines)

@@ -0,0 +1,37 @@
// +build linux bsd darwin
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/unix"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM, SIGINT, and SIGTSTP.
// SIGTERM and SIGINT will signal the process to exit.
// SIGTSTP will signal the process to stop processing new tasks.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TSTP to stop processing new tasks")
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
for {
sig := <-sigs
if sig == unix.SIGTSTP {
srv.Quiet()
continue
}
break
}
}
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs
}

signals_windows.go (new file, 29 lines)

@@ -0,0 +1,29 @@
// +build windows
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/windows"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM and SIGINT.
// SIGTERM and SIGINT will signal the process to exit.
//
// Note: Currently SIGTSTP is not supported for windows build.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}


@@ -6,52 +6,78 @@ package asynq
import ( import (
"sync" "sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/log"
) )
type subscriber struct { type subscriber struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
// channel to communicate back to the long running "subscriber" goroutine. // channel to communicate back to the long running "subscriber" goroutine.
done chan struct{} done chan struct{}
// cancelations hold cancel functions for all in-progress tasks. // cancelations hold cancel functions for all active tasks.
cancelations *base.Cancelations
// time to wait before retrying to connect to redis.
retryTimeout time.Duration
}
type subscriberParams struct {
logger *log.Logger
broker base.Broker
cancelations *base.Cancelations cancelations *base.Cancelations
} }
func newSubscriber(l Logger, rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber { func newSubscriber(params subscriberParams) *subscriber {
return &subscriber{ return &subscriber{
logger: l, logger: params.logger,
rdb: rdb, broker: params.broker,
done: make(chan struct{}), done: make(chan struct{}),
cancelations: cancelations, cancelations: params.cancelations,
retryTimeout: 5 * time.Second,
} }
} }
func (s *subscriber) terminate() { func (s *subscriber) terminate() {
s.logger.Info("Subscriber shutting down...") s.logger.Debug("Subscriber shutting down...")
// Signal the subscriber goroutine to stop. // Signal the subscriber goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}
} }
func (s *subscriber) start(wg *sync.WaitGroup) { func (s *subscriber) start(wg *sync.WaitGroup) {
pubsub, err := s.rdb.CancelationPubSub()
cancelCh := pubsub.Channel()
if err != nil {
s.logger.Error("cannot subscribe to cancelation channel: %v", err)
return
}
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
var (
pubsub *redis.PubSub
err error
)
// Try until successfully connect to Redis.
for {
pubsub, err = s.broker.CancelationPubSub()
if err != nil {
s.logger.Errorf("cannot subscribe to cancelation channel: %v", err)
select {
case <-time.After(s.retryTimeout):
continue
case <-s.done:
s.logger.Debug("Subscriber done")
return
}
}
break
}
cancelCh := pubsub.Channel()
for { for {
select { select {
case <-s.done: case <-s.done:
pubsub.Close() pubsub.Close()
s.logger.Info("Subscriber done") s.logger.Debug("Subscriber done")
return return
case msg := <-cancelCh: case msg := <-cancelCh:
cancel, ok := s.cancelations.Get(msg.Payload) cancel, ok := s.cancelations.Get(msg.Payload)


@@ -11,10 +11,12 @@ import (
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
) )
func TestSubscriber(t *testing.T) { func TestSubscriber(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
tests := []struct { tests := []struct {
@@ -37,16 +39,23 @@ func TestSubscriber(t *testing.T) {
cancelations := base.NewCancelations() cancelations := base.NewCancelations()
cancelations.Add(tc.registeredID, fakeCancelFunc) cancelations.Add(tc.registeredID, fakeCancelFunc)
subscriber := newSubscriber(testLogger, rdbClient, cancelations) subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: rdbClient,
cancelations: cancelations,
})
var wg sync.WaitGroup var wg sync.WaitGroup
subscriber.start(&wg) subscriber.start(&wg)
defer subscriber.terminate()
// wait for subscriber to establish connection to pubsub channel
time.Sleep(time.Second)
if err := rdbClient.PublishCancelation(tc.publishID); err != nil { if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
subscriber.terminate()
t.Fatalf("could not publish cancelation message: %v", err) t.Fatalf("could not publish cancelation message: %v", err)
} }
// allow for redis to publish message // wait for redis to publish message
time.Sleep(time.Second) time.Sleep(time.Second)
mu.Lock() mu.Lock()
@@ -58,7 +67,58 @@ func TestSubscriber(t *testing.T) {
} }
} }
mu.Unlock() mu.Unlock()
subscriber.terminate()
} }
} }
func TestSubscriberWithRedisDown(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
cancelations := base.NewCancelations()
subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: testBroker,
cancelations: cancelations,
})
subscriber.retryTimeout = 1 * time.Second // set shorter retry timeout for testing purpose.
testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.
testBroker.Wakeup() // simulate a situation where redis server is back online.
time.Sleep(2 * time.Second) // allow subscriber to establish pubsub channel.
const id = "test"
var (
mu sync.Mutex
called bool
)
cancelations.Add(id, func() {
mu.Lock()
defer mu.Unlock()
called = true
})
if err := r.PublishCancelation(id); err != nil {
t.Fatalf("could not publish cancelation message: %v", err)
}
time.Sleep(time.Second) // wait for redis to publish message.
mu.Lock()
if !called {
t.Errorf("cancel function was not called")
}
mu.Unlock()
}


@@ -7,12 +7,14 @@ package asynq
import ( import (
"sync" "sync"
"time" "time"
"github.com/hibiken/asynq/internal/log"
) )
// syncer is responsible for queuing up failed requests to redis and retry // syncer is responsible for queuing up failed requests to redis and retry
// those requests to sync state between the background process and redis. // those requests to sync state between the background process and redis.
type syncer struct { type syncer struct {
logger Logger logger *log.Logger
requestsCh <-chan *syncRequest requestsCh <-chan *syncRequest
@@ -26,19 +28,26 @@ type syncer struct {
type syncRequest struct { type syncRequest struct {
fn func() error // sync operation fn func() error // sync operation
errMsg string // error message errMsg string // error message
deadline time.Time // request should be dropped if deadline has been exceeded
} }
func newSyncer(l Logger, requestsCh <-chan *syncRequest, interval time.Duration) *syncer { type syncerParams struct {
logger *log.Logger
requestsCh <-chan *syncRequest
interval time.Duration
}
func newSyncer(params syncerParams) *syncer {
return &syncer{ return &syncer{
logger: l, logger: params.logger,
requestsCh: requestsCh, requestsCh: params.requestsCh,
done: make(chan struct{}), done: make(chan struct{}),
interval: interval, interval: params.interval,
} }
} }
func (s *syncer) terminate() { func (s *syncer) terminate() {
s.logger.Info("Syncer shutting down...") s.logger.Debug("Syncer shutting down...")
// Signal the syncer goroutine to stop. // Signal the syncer goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}
} }
@@ -57,13 +66,16 @@ func (s *syncer) start(wg *sync.WaitGroup) {
s.logger.Error(req.errMsg) s.logger.Error(req.errMsg)
} }
} }
s.logger.Info("Syncer done") s.logger.Debug("Syncer done")
return return
case req := <-s.requestsCh: case req := <-s.requestsCh:
requests = append(requests, req) requests = append(requests, req)
case <-time.After(s.interval): case <-time.After(s.interval):
var temp []*syncRequest var temp []*syncRequest
for _, req := range requests { for _, req := range requests {
if req.deadline.Before(time.Now()) {
continue // drop stale request
}
if err := req.fn(); err != nil { if err := req.fn(); err != nil {
temp = append(temp, req) temp = append(temp, req)
} }


@@ -22,12 +22,17 @@ func TestSyncer(t *testing.T) {
h.NewTaskMessage("gen_thumbnail", nil), h.NewTaskMessage("gen_thumbnail", nil),
} }
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
h.SeedInProgressQueue(t, r, inProgress) h.SeedActiveQueue(t, r, inProgress, base.DefaultQueueName)
const interval = time.Second const interval = time.Second
syncRequestCh := make(chan *syncRequest) syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(testLogger, syncRequestCh, interval) syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
defer syncer.terminate() defer syncer.terminate()
@@ -38,21 +43,26 @@ func TestSyncer(t *testing.T) {
fn: func() error { fn: func() error {
return rdbClient.Done(m) return rdbClient.Done(m)
}, },
deadline: time.Now().Add(5 * time.Minute),
} }
} }
time.Sleep(2 * interval) // ensure that syncer runs at least once time.Sleep(2 * interval) // ensure that syncer runs at least once
gotInProgress := h.GetInProgressMessages(t, r) gotActive := h.GetActiveMessages(t, r, base.DefaultQueueName)
if l := len(gotInProgress); l != 0 { if l := len(gotActive); l != 0 {
t.Errorf("%q has length %d; want 0", base.InProgressQueue, l) t.Errorf("%q has length %d; want 0", base.ActiveKey(base.DefaultQueueName), l)
} }
} }
func TestSyncerRetry(t *testing.T) { func TestSyncerRetry(t *testing.T) {
const interval = time.Second const interval = time.Second
syncRequestCh := make(chan *syncRequest) syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(testLogger, syncRequestCh, interval) syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
@@ -79,6 +89,7 @@ func TestSyncerRetry(t *testing.T) {
syncRequestCh <- &syncRequest{ syncRequestCh <- &syncRequest{
fn: requestFunc, fn: requestFunc,
errMsg: "error", errMsg: "error",
deadline: time.Now().Add(5 * time.Minute),
} }
// allow syncer to retry // allow syncer to retry
@@ -90,3 +101,41 @@ func TestSyncerRetry(t *testing.T) {
} }
mu.Unlock() mu.Unlock()
} }
func TestSyncerDropsStaleRequests(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
var (
mu sync.Mutex
n int // number of times request has been processed
)
for i := 0; i < 10; i++ {
syncRequestCh <- &syncRequest{
fn: func() error {
mu.Lock()
n++
mu.Unlock()
return nil
},
deadline: time.Now().Add(time.Duration(-i) * time.Second), // already exceeded deadline
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
syncer.terminate()
mu.Lock()
if n != 0 {
t.Errorf("requests has been processed %d times, want 0", n)
}
mu.Unlock()
}

tools/asynq/README.md (new file, 63 lines)

@@ -0,0 +1,63 @@
# Asynq CLI
Asynq CLI is a command line tool to monitor the queues and tasks managed by the `asynq` package.
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Config File](#config-file)
## Installation
In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynq
This will create the asynq executable under your `$GOPATH/bin` directory.
## Usage
### Commands
To view details on any command, use `asynq help <command> <subcommand>`.
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete archive run delete-all archive-all run-all]`
- `asynq server [ls]`
### Global flags
Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
To connect to a redis cluster, pass `--cluster` and `--cluster_addrs` flags.
By default, CLI will try to connect to a redis server running at `localhost:6379`.
```
--config string config file to set flag default values (default is $HOME/.asynq.yaml)
-n, --db int redis database number (default is 0)
-h, --help help for asynq
-p, --password string password to use when connecting to redis server
-u, --uri string redis server URI (default "127.0.0.1:6379")
--cluster connect to redis cluster
--cluster_addrs string list of comma-separated redis server addresses
```
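For example, to inspect queues on a non-default Redis instance (the address and database number below are placeholders):

```
asynq stats --uri 127.0.0.1:6380 --db 1
asynq queue ls --uri 127.0.0.1:6380 --db 1
```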
## Config File
You can use a config file to set default values for the flags.
By default, `asynq` will try to read config file located in
`$HOME/.asynq.(yml|json)`. You can specify the file location via `--config` flag.
Config file example:
```yaml
uri: 127.0.0.1:6379
db: 2
password: mypassword
```
This will set the default values for `--uri`, `--db`, and `--password` flags.

tools/asynq/cmd/cron.go (new file, 129 lines)

@@ -0,0 +1,129 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"time"
"github.com/hibiken/asynq/inspeq"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(cronCmd)
cronCmd.AddCommand(cronListCmd)
cronCmd.AddCommand(cronHistoryCmd)
cronHistoryCmd.Flags().Int("page", 1, "page number")
cronHistoryCmd.Flags().Int("size", 30, "page size")
}
var cronCmd = &cobra.Command{
Use: "cron",
Short: "Manage cron",
}
var cronListCmd = &cobra.Command{
Use: "ls",
Short: "List cron entries",
Run: cronList,
}
var cronHistoryCmd = &cobra.Command{
Use: "history [ENTRY_ID...]",
Short: "Show history of each cron tasks",
Args: cobra.MinimumNArgs(1),
Run: cronHistory,
}
func cronList(cmd *cobra.Command, args []string) {
inspector := createInspector()
entries, err := inspector.SchedulerEntries()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(entries) == 0 {
fmt.Println("No scheduler entries")
return
}
// Sort entries by spec.
sort.Slice(entries, func(i, j int) bool {
x, y := entries[i], entries[j]
return x.Spec < y.Spec
})
cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range entries {
fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type, e.Task.Payload, e.Opts,
nextEnqueue(e.Next), prevEnqueue(e.Prev))
}
}
printTable(cols, printRows)
}
// Returns a string describing when the next enqueue will happen.
func nextEnqueue(nextEnqueueAt time.Time) string {
d := nextEnqueueAt.Sub(time.Now()).Round(time.Second)
if d < 0 {
return "Now"
}
return fmt.Sprintf("In %v", d)
}
// Returns a string describing when the previous enqueue was.
func prevEnqueue(prevEnqueuedAt time.Time) string {
if prevEnqueuedAt.IsZero() {
return "N/A"
}
return fmt.Sprintf("%v ago", time.Since(prevEnqueuedAt).Round(time.Second))
}
func cronHistory(cmd *cobra.Command, args []string) {
pageNum, err := cmd.Flags().GetInt("page")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageSize, err := cmd.Flags().GetInt("size")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
inspector := createInspector()
for i, entryID := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Println()
fmt.Printf("Entry: %s\n\n", entryID)
events, err := inspector.ListSchedulerEnqueueEvents(
entryID, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
if len(events) == 0 {
fmt.Printf("No scheduler enqueue events found for entry: %s\n", entryID)
continue
}
cols := []string{"TaskID", "EnqueuedAt"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range events {
fmt.Fprintf(w, tmpl, e.TaskID, e.EnqueuedAt)
}
}
printTable(cols, printRows)
}
}

tools/asynq/cmd/migrate.go (new file, 389 lines)

@@ -0,0 +1,389 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cast"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var migrateCmd = &cobra.Command{
Use: "migrate",
Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq v%s", base.Version),
Args: cobra.NoArgs,
Run: migrate,
}
func init() {
rootCmd.AddCommand(migrateCmd)
}
func migrate(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := createRDB()
/*** Migrate from 0.9 to 0.10, 0.11 compatible ***/
lists := []string{"asynq:in_progress"}
allQueues, err := c.SMembers(base.AllQueues).Result()
if err != nil {
printError(fmt.Errorf("could not read all queues: %v", err))
os.Exit(1)
}
lists = append(lists, allQueues...)
for _, key := range lists {
if err := migrateList(c, key); err != nil {
printError(err)
os.Exit(1)
}
}
zsets := []string{"asynq:scheduled", "asynq:retry", "asynq:dead"}
for _, key := range zsets {
if err := migrateZSet(c, key); err != nil {
printError(err)
os.Exit(1)
}
}
/*** Migrate from 0.11 to 0.12 compatible ***/
if err := createBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := c.SAdd(base.AllQueues, qname).Err(); err != nil {
err = fmt.Errorf("could not add queue name %q to %q set: %v\n",
qname, base.AllQueues, err)
printError(err)
os.Exit(1)
}
}
if err := deleteBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if exists := c.Exists(qkey).Val(); exists == 1 {
if err := c.Rename(qkey, base.QueueKey(qname)).Err(); err != nil {
printError(fmt.Errorf("could not rename key %q: %v\n", qkey, err))
os.Exit(1)
}
}
}
if err := partitionZSetMembersByQueue(c, "asynq:scheduled", base.ScheduledKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:retry", base.RetryKey); err != nil {
printError(err)
os.Exit(1)
}
// Note: base.DeadKey function was renamed in v0.14. We define the legacy function here since we need it for this migration script.
deadKeyFunc := func(qname string) string { return fmt.Sprintf("asynq:{%s}:dead", qname) }
if err := partitionZSetMembersByQueue(c, "asynq:dead", deadKeyFunc); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:deadlines", base.DeadlinesKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionListMembersByQueue(c, "asynq:in_progress", base.ActiveKey); err != nil {
printError(err)
os.Exit(1)
}
paused, err := c.SMembers("asynq:paused").Result()
if err != nil {
printError(fmt.Errorf("command SMEMBERS asynq:paused failed: %v", err))
os.Exit(1)
}
for _, qkey := range paused {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := r.Pause(qname); err != nil {
printError(err)
os.Exit(1)
}
}
if err := deleteKey(c, "asynq:paused"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:servers"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:workers"); err != nil {
printError(err)
os.Exit(1)
}
/*** Migrate from 0.13 to 0.14 compatible ***/
// Move all dead tasks to archived ZSET.
for _, qname := range allQueues {
zs, err := c.ZRangeWithScores(deadKeyFunc(qname), 0, -1).Result()
if err != nil {
printError(err)
os.Exit(1)
}
for _, z := range zs {
if err := c.ZAdd(base.ArchivedKey(qname), &z).Err(); err != nil {
printError(err)
os.Exit(1)
}
}
if err := deleteKey(c, deadKeyFunc(qname)); err != nil {
printError(err)
os.Exit(1)
}
}
}
func backupKey(key string) string {
return fmt.Sprintf("%s:backup", key)
}
func createBackup(c *redis.Client, key string) error {
err := c.Rename(key, backupKey(key)).Err()
if err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
return nil
}
func deleteBackup(c *redis.Client, key string) error {
return deleteKey(c, backupKey(key))
}
func deleteKey(c *redis.Client, key string) error {
exists := c.Exists(key).Val()
if exists == 0 {
// key does not exist
return nil
}
err := c.Del(key).Err()
if err != nil {
return fmt.Errorf("could not delete key %q: %v", key, err)
}
return nil
}
func printError(err error) {
fmt.Println(err)
fmt.Println()
fmt.Println("Migrate command error")
fmt.Println("Please file an issue on Github at https://github.com/hibiken/asynq/issues/new/choose")
}
func partitionZSetMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
zs, err := c.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return fmt.Errorf("command ZRANGE %s 0 -1 WITHSCORES failed: %v", key, err)
}
for _, z := range zs {
s := cast.ToString(z.Member)
msg, err := base.DecodeMessage(s)
if err != nil {
return fmt.Errorf("could not decode message from %q: %v", key, err)
}
if err := c.ZAdd(newKeyFunc(msg.Queue), &z).Err(); err != nil {
return fmt.Errorf("could not add %v to %q: %v", z, newKeyFunc(msg.Queue))
}
}
if err := deleteKey(c, key); err != nil {
return err
}
return nil
}
func partitionListMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
data, err := c.LRange(key, 0, -1).Result()
if err != nil {
return fmt.Errorf("command LRANGE %s 0 -1 failed: %v", key, err)
}
for _, s := range data {
msg, err := base.DecodeMessage(s)
if err != nil {
return fmt.Errorf("could not decode message from %q: %v", key, err)
}
if err := c.LPush(newKeyFunc(msg.Queue), s).Err(); err != nil {
return fmt.Errorf("could not add %v to %q: %v", s, newKeyFunc(msg.Queue))
}
}
if err := deleteKey(c, key); err != nil {
return err
}
return nil
}
type oldTaskMessage struct {
// Unchanged
Type string
Payload map[string]interface{}
ID uuid.UUID
Queue string
Retry int
Retried int
ErrorMsg string
UniqueKey string
// Following fields have changed.
// Timeout specifies how long the task may run before it times out.
// The string should be in a format accepted by time.ParseDuration.
//
// A zero duration means no timeout.
Timeout string
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
}
var defaultTimeout = 30 * time.Minute
func convertMessage(old *oldTaskMessage) (*base.TaskMessage, error) {
timeout, err := time.ParseDuration(old.Timeout)
if err != nil {
return nil, fmt.Errorf("could not parse Timeout field of %+v", old)
}
deadline, err := time.Parse(time.RFC3339, old.Deadline)
if err != nil {
return nil, fmt.Errorf("could not parse Deadline field of %+v", old)
}
if timeout == 0 && deadline.IsZero() {
timeout = defaultTimeout
}
if deadline.IsZero() {
// Zero value used to be time.Time{},
// in the new schema zero value is represented by
// zero in Unix time.
deadline = time.Unix(0, 0)
}
return &base.TaskMessage{
Type: old.Type,
Payload: old.Payload,
ID: uuid.New(),
Queue: old.Queue,
Retry: old.Retry,
Retried: old.Retried,
ErrorMsg: old.ErrorMsg,
UniqueKey: old.UniqueKey,
Timeout: int64(timeout.Seconds()),
Deadline: deadline.Unix(),
}, nil
}
func deserialize(s string) (*base.TaskMessage, error) {
// Try deserializing as old message.
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var old *oldTaskMessage
if err := d.Decode(&old); err != nil {
// Try deserializing as new message.
d = json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg *base.TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, fmt.Errorf("could not deserialize %s into task message: %v", s, err)
}
return msg, nil
}
return convertMessage(old)
}
func migrateZSet(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []*redis.Z
for _, z := range res {
s, err := cast.ToStringE(z.Member)
if err != nil {
return fmt.Errorf("could not cast to string: %v", err)
}
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, &redis.Z{Score: z.Score, Member: encoded})
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.ZAdd(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}
func migrateList(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.LRange(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []interface{}
for _, s := range res {
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, encoded)
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.LPush(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}

tools/asynq/cmd/queue.go (new file, 256 lines)

@@ -0,0 +1,256 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"github.com/fatih/color"
"github.com/hibiken/asynq/inspeq"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
)
const separator = "================================================="
func init() {
rootCmd.AddCommand(queueCmd)
queueCmd.AddCommand(queueListCmd)
queueCmd.AddCommand(queueInspectCmd)
queueCmd.AddCommand(queueHistoryCmd)
queueHistoryCmd.Flags().IntP("days", "x", 10, "show data from last x days")
queueCmd.AddCommand(queuePauseCmd)
queueCmd.AddCommand(queueUnpauseCmd)
queueCmd.AddCommand(queueRemoveCmd)
queueRemoveCmd.Flags().BoolP("force", "f", false, "remove the queue regardless of its size")
}
var queueCmd = &cobra.Command{
Use: "queue",
Short: "Manage queues",
}
var queueListCmd = &cobra.Command{
Use: "ls",
Short: "List queues",
// TODO: Use RunE instead?
Run: queueList,
}
var queueInspectCmd = &cobra.Command{
Use: "inspect QUEUE [QUEUE...]",
Short: "Display detailed information on one or more queues",
Args: cobra.MinimumNArgs(1),
// TODO: Use RunE instead?
Run: queueInspect,
}
var queueHistoryCmd = &cobra.Command{
Use: "history QUEUE [QUEUE...]",
Short: "Display historical aggregate data from one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueHistory,
}
var queuePauseCmd = &cobra.Command{
Use: "pause QUEUE [QUEUE...]",
Short: "Pause one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queuePause,
}
var queueUnpauseCmd = &cobra.Command{
Use: "unpause QUEUE [QUEUE...]",
Short: "Unpause one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueUnpause,
}
var queueRemoveCmd = &cobra.Command{
Use: "rm QUEUE [QUEUE...]",
Short: "Remove one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueRemove,
}
func queueList(cmd *cobra.Command, args []string) {
type queueInfo struct {
name string
keyslot int64
nodes []inspeq.ClusterNode
}
inspector := createInspector()
queues, err := inspector.Queues()
if err != nil {
fmt.Printf("error: Could not fetch list of queues: %v\n", err)
os.Exit(1)
}
var qs []queueInfo
for _, qname := range queues {
q := queueInfo{name: qname}
if useRedisCluster {
keyslot, err := inspector.ClusterKeySlot(qname)
if err != nil {
fmt.Errorf("error: Could not get cluster keyslot for %q\n", qname)
continue
}
q.keyslot = keyslot
nodes, err := inspector.ClusterNodes(qname)
if err != nil {
fmt.Errorf("error: Could not get cluster nodes for %q\n", qname)
continue
}
q.nodes = nodes
}
qs = append(qs, q)
}
if useRedisCluster {
printTable(
[]string{"Queue", "Cluster KeySlot", "Cluster Nodes"},
func(w io.Writer, tmpl string) {
for _, q := range qs {
fmt.Fprintf(w, tmpl, q.name, q.keyslot, q.nodes)
}
},
)
} else {
for _, q := range qs {
fmt.Println(q.name)
}
}
}
func queueInspect(cmd *cobra.Command, args []string) {
inspector := createInspector()
for i, qname := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Println()
stats, err := inspector.CurrentStats(qname)
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
printQueueStats(stats)
}
}
func printQueueStats(s *inspeq.QueueStats) {
bold := color.New(color.Bold)
bold.Println("Queue Info")
fmt.Printf("Name: %s\n", s.Queue)
fmt.Printf("Size: %d\n", s.Size)
fmt.Printf("Paused: %t\n\n", s.Paused)
bold.Println("Task Count by State")
printTable(
[]string{"active", "pending", "scheduled", "retry", "archived"},
func(w io.Writer, tmpl string) {
fmt.Fprintf(w, tmpl, s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived)
},
)
fmt.Println()
bold.Printf("Daily Stats %s UTC\n", s.Timestamp.UTC().Format("2006-01-02"))
printTable(
[]string{"processed", "failed", "error rate"},
func(w io.Writer, tmpl string) {
var errRate string
if s.Processed == 0 {
errRate = "N/A"
} else {
errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(w, tmpl, s.Processed, s.Failed, errRate)
},
)
}
func queueHistory(cmd *cobra.Command, args []string) {
days, err := cmd.Flags().GetInt("days")
if err != nil {
fmt.Printf("error: Internal error: %v\n", err)
os.Exit(1)
}
inspector := createInspector()
for i, qname := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Printf("\nQueue: %s\n\n", qname)
stats, err := inspector.History(qname, days)
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
printDailyStats(stats)
}
}
func printDailyStats(stats []*inspeq.DailyStats) {
printTable(
[]string{"date (UTC)", "processed", "failed", "error rate"},
func(w io.Writer, tmpl string) {
for _, s := range stats {
var errRate string
if s.Processed == 0 {
errRate = "N/A"
} else {
errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(w, tmpl, s.Date.Format("2006-01-02"), s.Processed, s.Failed, errRate)
}
},
)
}
func queuePause(cmd *cobra.Command, args []string) {
inspector := createInspector()
for _, qname := range args {
err := inspector.PauseQueue(qname)
if err != nil {
fmt.Println(err)
continue
}
fmt.Printf("Successfully paused queue %q\n", qname)
}
}
func queueUnpause(cmd *cobra.Command, args []string) {
inspector := createInspector()
for _, qname := range args {
err := inspector.UnpauseQueue(qname)
if err != nil {
fmt.Println(err)
continue
}
fmt.Printf("Successfully unpaused queue %q\n", qname)
}
}
func queueRemove(cmd *cobra.Command, args []string) {
// TODO: Use inspector once RemoveQueue become public API.
force, err := cmd.Flags().GetBool("force")
if err != nil {
fmt.Printf("error: Internal error: %v\n", err)
os.Exit(1)
}
r := createRDB()
for _, qname := range args {
err = r.RemoveQueue(qname, force)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq queue rm --force %s'\n", err, qname)
continue
}
fmt.Printf("error: %v\n", err)
continue
}
fmt.Printf("Successfully removed queue %q\n", qname)
}
}

tools/asynq/cmd/root.go (new file, 198 lines)

@@ -0,0 +1,198 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"crypto/tls"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/inspeq"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir"
"github.com/spf13/viper"
)
var cfgFile string
// Global flag variables
var (
uri string
db int
password string
useRedisCluster bool
clusterAddrs string
tlsServerName string
)
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
Use: "asynq",
Short: "A monitoring tool for asynq queues",
Long: `Asynq is a monitoring CLI to inspect tasks and queues managed by asynq.`,
Version: base.Version,
}
var versionOutput = fmt.Sprintf("asynq version %s\n", base.Version)
var versionCmd = &cobra.Command{
Use: "version",
Hidden: true,
Run: func(cmd *cobra.Command, args []string) {
fmt.Print(versionOutput)
},
}
// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func Execute() {
if err := rootCmd.Execute(); err != nil {
fmt.Println(err)
os.Exit(1)
}
}
func init() {
cobra.OnInitialize(initConfig)
rootCmd.AddCommand(versionCmd)
rootCmd.SetVersionTemplate(versionOutput)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag default values (default is $HOME/.asynq.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
rootCmd.PersistentFlags().BoolVar(&useRedisCluster, "cluster", false, "connect to redis cluster")
rootCmd.PersistentFlags().StringVar(&clusterAddrs, "cluster_addrs",
"127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002,127.0.0.1:7003,127.0.0.1:7004,127.0.0.1:7005",
"list of comma-separated redis server addresses")
rootCmd.PersistentFlags().StringVar(&tlsServerName, "tls_server",
"", "server name for TLS validation")
// Bind flags with config.
viper.BindPFlag("uri", rootCmd.PersistentFlags().Lookup("uri"))
viper.BindPFlag("db", rootCmd.PersistentFlags().Lookup("db"))
viper.BindPFlag("password", rootCmd.PersistentFlags().Lookup("password"))
viper.BindPFlag("cluster", rootCmd.PersistentFlags().Lookup("cluster"))
viper.BindPFlag("cluster_addrs", rootCmd.PersistentFlags().Lookup("cluster_addrs"))
viper.BindPFlag("tls_server", rootCmd.PersistentFlags().Lookup("tls_server"))
}
// initConfig reads in config file and ENV variables if set.
func initConfig() {
if cfgFile != "" {
// Use config file from the flag.
viper.SetConfigFile(cfgFile)
} else {
// Find home directory.
home, err := homedir.Dir()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// Search config in home directory with name ".asynq" (without extension).
viper.AddConfigPath(home)
viper.SetConfigName(".asynq")
}
viper.AutomaticEnv() // read in environment variables that match
// If a config file is found, read it in.
if err := viper.ReadInConfig(); err == nil {
fmt.Println("Using config file:", viper.ConfigFileUsed())
}
}
// createRDB creates a RDB instance using flag values and returns it.
func createRDB() *rdb.RDB {
var c redis.UniversalClient
if useRedisCluster {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
c = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
})
} else {
c = redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
})
}
return rdb.NewRDB(c)
}
// createInspector creates an Inspector instance using flag values and returns it.
func createInspector() *inspeq.Inspector {
var connOpt asynq.RedisConnOpt
if useRedisCluster {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
connOpt = asynq.RedisClusterClientOpt{
Addrs: addrs,
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
}
} else {
connOpt = asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
}
}
return inspeq.New(connOpt)
}
func getTLSConfig() *tls.Config {
tlsServer := viper.GetString("tls_server")
if tlsServer == "" {
return nil
}
return &tls.Config{ServerName: tlsServer}
}
// printTable is a helper function to print data in table format.
//
// cols is a list of headers and printRows specifies how to print rows.
//
// Example:
// type User struct {
// Name string
// Addr string
// Age int
// }
// data := []*User{{"user1", "addr1", 24}, {"user2", "addr2", 42}, ...}
// cols := []string{"Name", "Addr", "Age"}
// printRows := func(w io.Writer, tmpl string) {
// for _, u := range data {
// fmt.Fprintf(w, tmpl, u.Name, u.Addr, u.Age)
// }
// }
// printTable(cols, printRows)
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
}
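
For reference, a minimal sketch of how the persistent connection flags above (--uri, --db, --password, --cluster, --cluster_addrs, --tls_server) map onto the RedisConnOpt values that createRDB and createInspector build; the addresses and credentials below are placeholders, and the helper name is hypothetical:

```go
package main

import (
	"crypto/tls"
	"strings"

	"github.com/hibiken/asynq"
	"github.com/hibiken/asynq/inspeq"
)

// connOptFromFlags mirrors createInspector: cluster mode uses
// RedisClusterClientOpt, single-node mode uses RedisClientOpt.
func connOptFromFlags(useCluster bool, uri, clusterAddrs, password, tlsServer string, db int) asynq.RedisConnOpt {
	var tlsCfg *tls.Config
	if tlsServer != "" {
		tlsCfg = &tls.Config{ServerName: tlsServer}
	}
	if useCluster {
		return asynq.RedisClusterClientOpt{
			Addrs:     strings.Split(clusterAddrs, ","),
			Password:  password,
			TLSConfig: tlsCfg,
		}
	}
	return asynq.RedisClientOpt{
		Addr:      uri,
		DB:        db,
		Password:  password,
		TLSConfig: tlsCfg,
	}
}

func main() {
	// Single-node example using the CLI's default flag values.
	opt := connOptFromFlags(false, "127.0.0.1:6379", "", "", "", 0)
	_ = inspeq.New(opt)
}
```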


@@ -12,81 +12,72 @@ import (
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper"
) )
// psCmd represents the ps command func init() {
var psCmd = &cobra.Command{ rootCmd.AddCommand(serverCmd)
Use: "ps", serverCmd.AddCommand(serverListCmd)
Short: "Shows all background worker processes", }
Long: `Ps (asynqmon ps) will show all background worker processes
backed by the specified redis instance.
The command shows the following for each process: var serverCmd = &cobra.Command{
* Host and PID of the process Use: "server",
Short: "Manage servers",
}
var serverListCmd = &cobra.Command{
Use: "ls",
Short: "List servers",
Long: `Server list (asynq server ls) shows all running worker servers
pulling tasks from the given redis instance.
The command shows the following for each server:
* Host and PID of the process in which the server is running
* Number of active workers out of worker pool * Number of active workers out of worker pool
* Queue configuration * Queue configuration
* State of the worker process ("running" | "stopped") * State of the worker server ("running" | "quiet")
* Time the process was started * Time the server was started
A "running" process is processing tasks in queues. A "running" server is pulling tasks from queues and processing them.
A "stopped" process is no longer processing new tasks.`, A "quiet" server is no longer pulling new tasks from queues`,
Args: cobra.NoArgs, Run: serverList,
Run: ps,
} }
func init() { func serverList(cmd *cobra.Command, args []string) {
rootCmd.AddCommand(psCmd) r := createRDB()
}
func ps(cmd *cobra.Command, args []string) { servers, err := r.ListServers()
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
processes, err := r.ListProcesses()
if err != nil { if err != nil {
fmt.Println(err) fmt.Println(err)
os.Exit(1) os.Exit(1)
} }
if len(processes) == 0 { if len(servers) == 0 {
fmt.Println("No processes") fmt.Println("No running servers")
return return
} }
// sort by hostname and pid // sort by hostname and pid
sort.Slice(processes, func(i, j int) bool { sort.Slice(servers, func(i, j int) bool {
x, y := processes[i], processes[j] x, y := servers[i], servers[j]
if x.Host != y.Host { if x.Host != y.Host {
return x.Host < y.Host return x.Host < y.Host
} }
return x.PID < y.PID return x.PID < y.PID
}) })
// print processes // print server info
cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"} cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
printRows := func(w io.Writer, tmpl string) { printRows := func(w io.Writer, tmpl string) {
for _, ps := range processes { for _, info := range servers {
fmt.Fprintf(w, tmpl, fmt.Fprintf(w, tmpl,
ps.Host, ps.PID, ps.Status, info.Host, info.PID, info.Status,
fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency), fmt.Sprintf("%d/%d", info.ActiveWorkerCount, info.Concurrency),
formatQueues(ps.Queues), timeAgo(ps.Started)) formatQueues(info.Queues), timeAgo(info.Started))
} }
} }
printTable(cols, printRows) printTable(cols, printRows)
} }
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}
func formatQueues(qmap map[string]int) string { func formatQueues(qmap map[string]int) string {
// sort queues by priority and name // sort queues by priority and name
type queue struct { type queue struct {
@@ -116,3 +107,9 @@ func formatQueues(qmap map[string]int) string {
} }
return b.String() return b.String()
} }
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}


@@ -6,16 +6,16 @@ package cmd
import (
"fmt"
+"io"
"os"
-"sort"
"strconv"
"strings"
"text/tabwriter"
+"time"
-"github.com/go-redis/redis/v7"
+"github.com/fatih/color"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
-"github.com/spf13/viper"
)
// statsCmd represents the stats command
@@ -33,7 +33,7 @@ Specifically, the command shows the following:
To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command.
-Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`,
+Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
Args: cobra.NoArgs,
Run: stats,
}
@@ -52,72 +52,115 @@ func init() {
// statsCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
+type AggregateStats struct {
+Active int
+Pending int
+Scheduled int
+Retry int
+Archived int
+Processed int
+Failed int
+Timestamp time.Time
+}
func stats(cmd *cobra.Command, args []string) {
-c := redis.NewClient(&redis.Options{
-Addr: viper.GetString("uri"),
-DB: viper.GetInt("db"),
-Password: viper.GetString("password"),
-})
-r := rdb.NewRDB(c)
-stats, err := r.CurrentStats()
+r := createRDB()
+queues, err := r.AllQueues()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
-info, err := r.RedisInfo()
+var aggStats AggregateStats
+var stats []*rdb.Stats
+for _, qname := range queues {
+s, err := r.CurrentStats(qname)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
-fmt.Println("STATES")
-printStates(stats)
+aggStats.Active += s.Active
+aggStats.Pending += s.Pending
+aggStats.Scheduled += s.Scheduled
+aggStats.Retry += s.Retry
+aggStats.Archived += s.Archived
+aggStats.Processed += s.Processed
+aggStats.Failed += s.Failed
+aggStats.Timestamp = s.Timestamp
+stats = append(stats, s)
+}
+var info map[string]string
+if useRedisCluster {
+info, err = r.RedisClusterInfo()
+} else {
+info, err = r.RedisInfo()
+}
+if err != nil {
+fmt.Println(err)
+os.Exit(1)
+}
+bold := color.New(color.Bold)
+bold.Println("Task Count by State")
+printStatsByState(&aggStats)
fmt.Println()
-fmt.Println("QUEUES")
-printQueues(stats.Queues)
+bold.Println("Task Count by Queue")
+printStatsByQueue(stats)
fmt.Println()
-fmt.Printf("STATS FOR %s UTC\n", stats.Timestamp.UTC().Format("2006-01-02"))
-printStats(stats)
+bold.Printf("Daily Stats %s UTC\n", aggStats.Timestamp.UTC().Format("2006-01-02"))
+printSuccessFailureStats(&aggStats)
fmt.Println()
-fmt.Println("REDIS INFO")
-printInfo(info)
+if useRedisCluster {
+bold.Println("Redis Cluster Info")
+printClusterInfo(info)
+} else {
+bold.Println("Redis Info")
+printInfo(info)
+}
fmt.Println()
}
-func printStates(s *rdb.Stats) {
+func printStatsByState(s *AggregateStats) {
format := strings.Repeat("%v\t", 5) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "InProgress", "Enqueued", "Scheduled", "Retry", "Dead")
+fmt.Fprintf(tw, format, "active", "pending", "scheduled", "retry", "archived")
fmt.Fprintf(tw, format, "----------", "--------", "---------", "-----", "----")
-fmt.Fprintf(tw, format, s.InProgress, s.Enqueued, s.Scheduled, s.Retry, s.Dead)
+fmt.Fprintf(tw, format, s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived)
tw.Flush()
}
-func printQueues(queues map[string]int) {
-var qnames, seps, counts []string
-for q := range queues {
-qnames = append(qnames, strings.Title(q))
-}
-sort.Strings(qnames) // sort for stable order
-for _, q := range qnames {
-seps = append(seps, strings.Repeat("-", len(q)))
-counts = append(counts, strconv.Itoa(queues[strings.ToLower(q)]))
-}
-format := strings.Repeat("%v\t", len(qnames)) + "\n"
+func printStatsByQueue(stats []*rdb.Stats) {
+var headers, seps, counts []string
+for _, s := range stats {
+title := queueTitle(s)
+headers = append(headers, title)
+seps = append(seps, strings.Repeat("-", len(title)))
+counts = append(counts, strconv.Itoa(s.Size))
+}
+format := strings.Repeat("%v\t", len(headers)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, toInterfaceSlice(qnames)...)
+fmt.Fprintf(tw, format, toInterfaceSlice(headers)...)
fmt.Fprintf(tw, format, toInterfaceSlice(seps)...)
fmt.Fprintf(tw, format, toInterfaceSlice(counts)...)
tw.Flush()
}
-func printStats(s *rdb.Stats) {
+func queueTitle(s *rdb.Stats) string {
+var b strings.Builder
+b.WriteString(s.Queue)
+if s.Paused {
+b.WriteString(" (paused)")
+}
+return b.String()
+}
+func printSuccessFailureStats(s *AggregateStats) {
format := strings.Repeat("%v\t", 3) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "Processed", "Failed", "Error Rate")
+fmt.Fprintf(tw, format, "processed", "failed", "error rate")
fmt.Fprintf(tw, format, "---------", "------", "----------")
var errrate string
if s.Processed == 0 {
@@ -132,7 +175,7 @@ func printStats(s *rdb.Stats) {
func printInfo(info map[string]string) {
format := strings.Repeat("%v\t", 5) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "Version", "Uptime", "Connections", "Memory Usage", "Peak Memory Usage")
+fmt.Fprintf(tw, format, "version", "uptime", "connections", "memory usage", "peak memory usage")
fmt.Fprintf(tw, format, "-------", "------", "-----------", "------------", "-----------------")
fmt.Fprintf(tw, format,
info["redis_version"],
@@ -144,6 +187,19 @@ func printInfo(info map[string]string) {
tw.Flush()
}
+func printClusterInfo(info map[string]string) {
+printTable(
+[]string{"State", "Known Nodes", "Cluster Size"},
+func(w io.Writer, tmpl string) {
+fmt.Fprintf(w, tmpl,
+strings.ToUpper(info["cluster_state"]),
+info["cluster_known_nodes"],
+info["cluster_size"],
+)
+},
+)
+}
func toInterfaceSlice(strs []string) []interface{} {
var res []interface{}
for _, s := range strs {

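One detail worth noting in the reworked stats output: the error-rate column is guarded against a zero processed count (printDailyStats in the history command further below uses the same pattern). A standalone sketch of that calculation, with placeholder counts:

```go
package main

import "fmt"

// errorRate mirrors the guard used by printSuccessFailureStats and
// printDailyStats: avoid dividing by zero when nothing was processed.
func errorRate(processed, failed int) string {
	if processed == 0 {
		return "N/A"
	}
	return fmt.Sprintf("%.2f%%", float64(failed)/float64(processed)*100)
}

func main() {
	fmt.Println(errorRate(0, 0))    // N/A
	fmt.Println(errorRate(200, 10)) // 5.00%
}
```
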
tools/asynq/cmd/task.go (new file, 467 lines)

@@ -0,0 +1,467 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"time"
"github.com/hibiken/asynq/inspeq"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(taskCmd)
taskCmd.AddCommand(taskListCmd)
taskListCmd.Flags().StringP("queue", "q", "", "queue to inspect")
taskListCmd.Flags().StringP("state", "s", "", "state of the tasks to inspect")
taskListCmd.Flags().Int("page", 1, "page number")
taskListCmd.Flags().Int("size", 30, "page size")
taskListCmd.MarkFlagRequired("queue")
taskListCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskCancelCmd)
taskCmd.AddCommand(taskArchiveCmd)
taskArchiveCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskArchiveCmd.Flags().StringP("key", "k", "", "key of the task")
taskArchiveCmd.MarkFlagRequired("queue")
taskArchiveCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskDeleteCmd)
taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskDeleteCmd.Flags().StringP("key", "k", "", "key of the task")
taskDeleteCmd.MarkFlagRequired("queue")
taskDeleteCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskRunCmd)
taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskRunCmd.Flags().StringP("key", "k", "", "key of the task")
taskRunCmd.MarkFlagRequired("queue")
taskRunCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskArchiveAllCmd)
taskArchiveAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskArchiveAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskArchiveAllCmd.MarkFlagRequired("queue")
taskArchiveAllCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskDeleteAllCmd)
taskDeleteAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskDeleteAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskDeleteAllCmd.MarkFlagRequired("queue")
taskDeleteAllCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskRunAllCmd)
taskRunAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskRunAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskRunAllCmd.MarkFlagRequired("queue")
taskRunAllCmd.MarkFlagRequired("state")
}
var taskCmd = &cobra.Command{
Use: "task",
Short: "Manage tasks",
}
var taskListCmd = &cobra.Command{
Use: "ls --queue=QUEUE --state=STATE",
Short: "List tasks",
Long: `List tasks of the given state from the specified queue.
The value for the state flag should be one of:
- active
- pending
- scheduled
- retry
- archived
List operation paginates the result set.
By default, the command fetches the first 30 tasks.
Use --page and --size flags to specify the page number and size.
Example:
To list pending tasks from "default" queue, run
asynq task ls --queue=default --state=pending
To list the tasks from the second page, run
asynq task ls --queue=default --state=pending --page=2`,
Run: taskList,
}
var taskCancelCmd = &cobra.Command{
Use: "cancel TASK_ID [TASK_ID...]",
Short: "Cancel one or more active tasks",
Args: cobra.MinimumNArgs(1),
Run: taskCancel,
}
var taskArchiveCmd = &cobra.Command{
Use: "archive --queue=QUEUE --key=KEY",
Short: "Archive a task with the given key",
Args: cobra.NoArgs,
Run: taskArchive,
}
var taskDeleteCmd = &cobra.Command{
Use: "delete --queue=QUEUE --key=KEY",
Short: "Delete a task with the given key",
Args: cobra.NoArgs,
Run: taskDelete,
}
var taskRunCmd = &cobra.Command{
Use: "run --queue=QUEUE --key=KEY",
Short: "Run a task with the given key",
Args: cobra.NoArgs,
Run: taskRun,
}
var taskArchiveAllCmd = &cobra.Command{
Use: "archive-all --queue=QUEUE --state=STATE",
Short: "Archive all tasks in the given state",
Args: cobra.NoArgs,
Run: taskArchiveAll,
}
var taskDeleteAllCmd = &cobra.Command{
Use: "delete-all --queue=QUEUE --key=KEY",
Short: "Delete all tasks in the given state",
Args: cobra.NoArgs,
Run: taskDeleteAll,
}
var taskRunAllCmd = &cobra.Command{
Use: "run-all --queue=QUEUE --key=KEY",
Short: "Run all tasks in the given state",
Args: cobra.NoArgs,
Run: taskRunAll,
}
func taskList(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageNum, err := cmd.Flags().GetInt("page")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageSize, err := cmd.Flags().GetInt("size")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
switch state {
case "active":
listActiveTasks(qname, pageNum, pageSize)
case "pending":
listPendingTasks(qname, pageNum, pageSize)
case "scheduled":
listScheduledTasks(qname, pageNum, pageSize)
case "retry":
listRetryTasks(qname, pageNum, pageSize)
case "archived":
listArchivedTasks(qname, pageNum, pageSize)
default:
fmt.Printf("error: state=%q is not supported\n", state)
os.Exit(1)
}
}
func listActiveTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListActiveTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No active tasks in %q queue\n", qname)
return
}
printTable(
[]string{"ID", "Type", "Payload"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
}
},
)
}
func listPendingTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListPendingTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No pending tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload)
}
},
)
}
func listScheduledTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListScheduledTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No scheduled tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Process In"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
processIn := fmt.Sprintf("%.0f seconds",
t.NextProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, processIn)
}
},
)
}
func listRetryTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListRetryTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No retry tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
var nextRetry string
if d := t.NextProcessAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, nextRetry, t.LastError, t.Retried, t.MaxRetry)
}
},
)
}
func listArchivedTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListArchivedTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No archived tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Last Failed", "Last Error"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, t.LastFailedAt, t.LastError)
}
})
}
func taskCancel(cmd *cobra.Command, args []string) {
r := createRDB()
for _, id := range args {
err := r.PublishCancelation(id)
if err != nil {
fmt.Printf("error: could not send cancelation signal: %v\n", err)
continue
}
fmt.Printf("Sent cancelation signal for task %s\n", id)
}
}
func taskArchive(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.ArchiveTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task archived")
}
func taskDelete(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.DeleteTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task deleted")
}
func taskRun(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.RunTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task is now pending")
}
func taskArchiveAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "pending":
n, err = i.ArchiveAllPendingTasks(qname)
case "scheduled":
n, err = i.ArchiveAllScheduledTasks(qname)
case "retry":
n, err = i.ArchiveAllRetryTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks archived\n", n)
}
func taskDeleteAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "pending":
n, err = i.DeleteAllPendingTasks(qname)
case "scheduled":
n, err = i.DeleteAllScheduledTasks(qname)
case "retry":
n, err = i.DeleteAllRetryTasks(qname)
case "archived":
n, err = i.DeleteAllArchivedTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks deleted\n", n)
}
func taskRunAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "scheduled":
n, err = i.RunAllScheduledTasks(qname)
case "retry":
n, err = i.RunAllRetryTasks(qname)
case "archived":
n, err = i.RunAllArchivedTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks are now pending\n", n)
}
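
Every task subcommand above resolves its flags and then calls straight into the inspeq package. A minimal sketch of the same list-then-archive flow driven from Go rather than the CLI; the queue name and page size are placeholders, and errors are simply fatal here:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
	"github.com/hibiken/asynq/inspeq"
)

func main() {
	insp := inspeq.New(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})

	// First page of retry tasks in the "default" queue,
	// as `asynq task ls --queue=default --state=retry` would show.
	tasks, err := insp.ListRetryTasks("default", inspeq.PageSize(30), inspeq.Page(1))
	if err != nil {
		log.Fatal(err)
	}

	// Archive each one by its key, as `asynq task archive --queue --key` does.
	for _, t := range tasks {
		if err := insp.ArchiveTaskByKey("default", t.Key()); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("archived %s\n", t.Key())
	}
}
```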


@@ -4,7 +4,7 @@
package main
-import "github.com/hibiken/asynq/tools/asynqmon/cmd"
+import "github.com/hibiken/asynq/tools/asynq/cmd"
func main() {
cmd.Execute()

@@ -1,165 +0,0 @@
# Asynqmon
Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Process Status](#process-status)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Config File](#config-file)
## Installation
In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynqmon
This will create the asynqmon executable under your `$GOPATH/bin` directory.
## Quickstart
The tool has a few commands to inspect the state of tasks and queues.
Run `asynqmon help` to see all the available commands.
Asynqmon needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
By default, Asynqmon will try to connect to a redis server running at `localhost:6379`.
### Stats
Stats command gives the overview of the current state of tasks and queues. You can run it in conjunction with `watch` command to repeatedly run `stats`.
Example:
watch -n 3 asynqmon stats
This will run `asynqmon stats` command every 3 seconds.
![Gif](/docs/assets/asynqmon_stats.gif)
### History
History command shows the number of processed and failed tasks from the last x days.
By default, it shows the stats from the last 10 days. Use `--days` to specify the number of days.
Example:
asynqmon history --days=30
![Gif](/docs/assets/asynqmon_history.gif)
### Process Status
PS (ProcessStatus) command shows the list of running worker processes.
Example:
asynqmon ps
![Gif](/docs/assets/asynqmon_ps.gif)
### List
List command shows all tasks in the specified state in a table format
Example:
asynqmon ls retry
asynqmon ls scheduled
asynqmon ls dead
asynqmon ls enqueued:default
asynqmon ls inprogress
### Enqueue
There are two commands to enqueue tasks.
Command `enq` takes a task ID and moves the task to **Enqueued** state. You can obtain the task ID by running `ls` command.
Example:
asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example:
asynqmon enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state.
### Delete
There are two commands for task deletion.
Command `del` takes a task ID and deletes the task. You can obtain the task ID by running `ls` command.
Example:
asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynqmon delall retry
Running the above command will delete all **Retry** tasks.
### Kill
There are two commands to kill (i.e. move to dead state) tasks.
Command `kill` takes a task ID and kills the task. You can obtain the task ID by running `ls` command.
Example:
asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynqmon killall retry
Running the above command will move all **Retry** tasks to **Dead** state.
### Cancel
Command `cancel` takes a task ID and sends a cancelation signal to the goroutine processing the specified task.
You can obtain the task ID by running `ls` command.
The task should be in "in-progress" state.
Handler implementation needs to be context aware in order to actually stop processing.
Example:
asynqmon cancel bnogo8gt6toe23vhef0g
## Config File
You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server.
By default, `asynqmon` will try to read config file located in
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag.
Config file example:
```yaml
uri: 127.0.0.1:6379
db: 2
password: mypassword
```
This will set the default values for `--uri`, `--db`, and `--password` flags.


@@ -1,53 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// cancelCmd represents the cancel command
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing.
Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}
func init() {
rootCmd.AddCommand(cancelCmd)
}
func cancel(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
err := r.PublishCancelation(args[0])
if err != nil {
fmt.Printf("could not send cancelation signal: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully sent cancelation siganl for task %s\n", args[0])
}


@@ -1,73 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// delCmd represents the del command
var delCmd = &cobra.Command{
Use: "del [task id]",
Short: "Deletes a task given an identifier",
Long: `Del (asynqmon del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: del,
}
func init() {
rootCmd.AddCommand(delCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func del(cmd *cobra.Command, args []string) {
id, score, qtype, err := parseQueryID(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":
err = r.DeleteScheduledTask(id, score)
case "r":
err = r.DeleteRetryTask(id, score)
case "d":
err = r.DeleteDeadTask(id, score)
default:
fmt.Println("invalid argument")
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully deleted %v\n", args[0])
}


@@ -1,71 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var delallValidArgs = []string{"scheduled", "retry", "dead"}
// delallCmd represents the delall command
var delallCmd = &cobra.Command{
Use: "delall [state]",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
Example: asynqmon delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: delall,
}
func init() {
rootCmd.AddCommand(delallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func delall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var err error
switch args[0] {
case "scheduled":
err = r.DeleteAllScheduledTasks()
case "retry":
err = r.DeleteAllRetryTasks()
case "dead":
err = r.DeleteAllDeadTasks()
default:
fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Deleted all tasks in %q state\n", args[0])
}


@@ -1,76 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// enqCmd represents the enq command
var enqCmd = &cobra.Command{
Use: "enq [task id]",
Short: "Enqueues a task given an identifier",
Long: `Enq (asynqmon enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: enq,
}
func init() {
rootCmd.AddCommand(enqCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enq(cmd *cobra.Command, args []string) {
id, score, qtype, err := parseQueryID(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":
err = r.EnqueueScheduledTask(id, score)
case "r":
err = r.EnqueueRetryTask(id, score)
case "d":
err = r.EnqueueDeadTask(id, score)
default:
fmt.Println("invalid argument")
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully enqueued %v\n", args[0])
}


@@ -1,75 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var enqallValidArgs = []string{"scheduled", "retry", "dead"}
// enqallCmd represents the enqall command
var enqallCmd = &cobra.Command{
Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as it
gets dequeued by a processor.
Example: asynqmon enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: enqall,
}
func init() {
rootCmd.AddCommand(enqallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enqall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var n int64
var err error
switch args[0] {
case "scheduled":
n, err = r.EnqueueAllScheduledTasks()
case "retry":
n, err = r.EnqueueAllRetryTasks()
case "dead":
n, err = r.EnqueueAllDeadTasks()
default:
fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Enqueued %d tasks in %q state\n", n, args[0])
}


@@ -1,71 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"strings"
"text/tabwriter"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var days int
// historyCmd represents the history command
var historyCmd = &cobra.Command{
Use: "history",
Short: "Shows historical aggregate data",
Long: `History (asynqmon history) will show the number of processed and failed tasks
from the last x days.
By default, it will show the data from the last 10 days.
Example: asynqmon history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs,
Run: history,
}
func init() {
rootCmd.AddCommand(historyCmd)
historyCmd.Flags().IntVarP(&days, "days", "x", 10, "show data from last x days")
}
func history(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
stats, err := r.HistoricalStats(days)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
printDailyStats(stats)
}
func printDailyStats(stats []*rdb.DailyStats) {
format := strings.Repeat("%v\t", 4) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, "Date (UTC)", "Processed", "Failed", "Error Rate")
fmt.Fprintf(tw, format, "----------", "---------", "------", "----------")
for _, s := range stats {
var errrate string
if s.Processed == 0 {
errrate = "N/A"
} else {
errrate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(tw, format, s.Time.Format("2006-01-02"), s.Processed, s.Failed, errrate)
}
tw.Flush()
}


@@ -1,72 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// killCmd represents the kill command
var killCmd = &cobra.Command{
Use: "kill [task id]",
Short: "Kills a task given an identifier",
Long: `Kill (asynqmon kill) will put a task in dead state given an identifier.
The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: kill,
}
func init() {
rootCmd.AddCommand(killCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// killCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// killCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func kill(cmd *cobra.Command, args []string) {
id, score, qtype, err := parseQueryID(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":
err = r.KillScheduledTask(id, score)
case "r":
err = r.KillRetryTask(id, score)
default:
fmt.Println("invalid argument")
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully killed %v\n", args[0])
}


@@ -1,70 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var killallValidArgs = []string{"scheduled", "retry"}
// killallCmd represents the killall command
var killallCmd = &cobra.Command{
Use: "killall [state]",
Short: "Kills all tasks in the specified state",
Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state.
The argument should be either "scheduled" or "retry".
Example: asynqmon killall retry -> Update all retry tasks to dead tasks`,
ValidArgs: killallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: killall,
}
func init() {
rootCmd.AddCommand(killallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// killallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// killallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func killall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var n int64
var err error
switch args[0] {
case "scheduled":
n, err = r.KillAllScheduledTasks()
case "retry":
n, err = r.KillAllRetryTasks()
default:
fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully updated %d tasks to \"dead\" state\n", n)
}


@@ -1,229 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"strconv"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead"}
// lsCmd represents the ls command
var lsCmd = &cobra.Command{
Use: "ls [state]",
Short: "Lists tasks in the specified state",
Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format.
The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead".
Example:
asynqmon ls dead -> Lists all tasks in dead state
Enqueued tasks requires a queue name after ":"
Example:
asynqmon ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue
`,
Args: cobra.ExactValidArgs(1),
Run: ls,
}
// Flags
var pageSize int
var pageNum int
func init() {
rootCmd.AddCommand(lsCmd)
lsCmd.Flags().IntVar(&pageSize, "size", 30, "page size")
lsCmd.Flags().IntVar(&pageNum, "page", 0, "page number - zero indexed (default 0)")
}
func ls(cmd *cobra.Command, args []string) {
if pageSize < 0 {
fmt.Println("page size cannot be negative.")
os.Exit(1)
}
if pageNum < 0 {
fmt.Println("page number cannot be negative.")
os.Exit(1)
}
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
parts := strings.Split(args[0], ":")
switch parts[0] {
case "enqueued":
if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
os.Exit(1)
}
listEnqueued(r, parts[1])
case "inprogress":
listInProgress(r)
case "scheduled":
listScheduled(r)
case "retry":
listRetry(r)
case "dead":
listDead(r)
default:
fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1)
}
}
// queryID returns an identifier used for "enq" command.
// score is the zset score and queryType should be one
// of "s", "r" or "d" (scheduled, retry, dead respectively).
func queryID(id xid.ID, score int64, qtype string) string {
const format = "%v:%v:%v"
return fmt.Sprintf(format, qtype, score, id)
}
// parseQueryID is a reverse operation of queryID function.
// It takes a queryID and return each part of id with proper
// type if valid, otherwise it reports an error.
func parseQueryID(queryID string) (id xid.ID, score int64, qtype string, err error) {
parts := strings.Split(queryID, ":")
if len(parts) != 3 {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
}
id, err = xid.FromString(parts[2])
if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
}
score, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
}
qtype = parts[0]
if len(qtype) != 1 || !strings.Contains("srd", qtype) {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
}
return id, score, qtype, nil
}
func listEnqueued(r *rdb.RDB, qname string) {
tasks, err := r.ListEnqueued(qname, rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No enqueued tasks in %q queue\n", qname)
return
}
cols := []string{"ID", "Type", "Payload", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listInProgress(r *rdb.RDB) {
tasks, err := r.ListInProgress(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No in-progress tasks")
return
}
cols := []string{"ID", "Type", "Payload"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listScheduled(r *rdb.RDB) {
tasks, err := r.ListScheduled(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No scheduled tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Process In", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
processIn := fmt.Sprintf("%.0f seconds", t.ProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "s"), t.Type, t.Payload, processIn, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listRetry(r *rdb.RDB) {
tasks, err := r.ListRetry(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No retry tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
var nextRetry string
if d := t.ProcessAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "r"), t.Type, t.Payload, nextRetry, t.ErrorMsg, t.Retried, t.Retry, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listDead(r *rdb.RDB) {
tasks, err := r.ListDead(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No dead tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Last Failed", "Last Error", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "d"), t.Type, t.Payload, t.LastFailedAt, t.ErrorMsg, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}


@@ -1,54 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// rmqCmd represents the rmq command
var rmqCmd = &cobra.Command{
Use: "rmq [queue name]",
Short: "Removes the specified queue",
Long: `Rmq (asynqmon rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty.
Use --force option to override this behavior.
Example: asynqmon rmq low -> Removes "low" queue`,
Args: cobra.ExactValidArgs(1),
Run: rmq,
}
var rmqForce bool
func init() {
rootCmd.AddCommand(rmqCmd)
rmqCmd.Flags().BoolVarP(&rmqForce, "force", "f", false, "remove the queue regardless of its size")
}
func rmq(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.RemoveQueue(args[0], rmqForce)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0])
os.Exit(1)
}
fmt.Printf("error: %v", err)
os.Exit(1)
}
fmt.Printf("Successfully removed queue %q\n", args[0])
}


@@ -1,112 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir"
"github.com/spf13/viper"
)
var cfgFile string
// Flags
var uri string
var db int
var password string
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
Use: "asynqmon",
Short: "A monitoring tool for asynq queues",
Long: `Asynqmon is a montoring CLI to inspect tasks and queues managed by asynq.`,
}
// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func Execute() {
if err := rootCmd.Execute(); err != nil {
fmt.Println(err)
os.Exit(1)
}
}
func init() {
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynqmon.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
viper.BindPFlag("uri", rootCmd.PersistentFlags().Lookup("uri"))
viper.BindPFlag("db", rootCmd.PersistentFlags().Lookup("db"))
viper.BindPFlag("password", rootCmd.PersistentFlags().Lookup("password"))
}
// initConfig reads in config file and ENV variables if set.
func initConfig() {
if cfgFile != "" {
// Use config file from the flag.
viper.SetConfigFile(cfgFile)
} else {
// Find home directory.
home, err := homedir.Dir()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// Search config in home directory with name ".asynqmon" (without extension).
viper.AddConfigPath(home)
viper.SetConfigName(".asynqmon")
}
viper.AutomaticEnv() // read in environment variables that match
// If a config file is found, read it in.
if err := viper.ReadInConfig(); err == nil {
fmt.Println("Using config file:", viper.ConfigFileUsed())
}
}
// printTable is a helper function to print data in table format.
//
// cols is a list of headers and printRow specifies how to print rows.
//
// Example:
// type User struct {
// Name string
// Addr string
// Age int
// }
// data := []*User{{"user1", "addr1", 24}, {"user2", "addr2", 42}, ...}
// cols := []string{"Name", "Addr", "Age"}
// printRows := func(w io.Writer, tmpl string) {
// for _, u := range data {
// fmt.Fprintf(w, tmpl, u.Name, u.Addr, u.Age)
// }
// }
// printTable(cols, printRows)
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
}


@@ -1,75 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// workersCmd represents the workers command
var workersCmd = &cobra.Command{
Use: "workers",
Short: "Shows all running workers information",
Long: `Workers (asynqmon workers) will show all running workers information.
The command shows the follwoing for each worker:
* Process in which the worker is running
* ID of the task worker is processing
* Type of the task worker is processing
* Payload of the task worker is processing
* Queue that the task was pulled from.
* Time the worker started processing the task`,
Args: cobra.NoArgs,
Run: workers,
}

func init() {
	rootCmd.AddCommand(workersCmd)
}

func workers(cmd *cobra.Command, args []string) {
	r := rdb.NewRDB(redis.NewClient(&redis.Options{
		Addr:     viper.GetString("uri"),
		DB:       viper.GetInt("db"),
		Password: viper.GetString("password"),
	}))

	workers, err := r.ListWorkers()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	if len(workers) == 0 {
		fmt.Println("No workers")
		return
	}

	// sort by started timestamp or ID.
	sort.Slice(workers, func(i, j int) bool {
		x, y := workers[i], workers[j]
		if x.Started != y.Started {
			return x.Started.Before(y.Started)
		}
		return x.ID.String() < y.ID.String()
	})

	cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}
	printRows := func(w io.Writer, tmpl string) {
		for _, wk := range workers {
			fmt.Fprintf(w, tmpl,
				fmt.Sprintf("%s:%d", wk.Host, wk.PID), wk.ID, wk.Type, wk.Payload, wk.Queue, timeAgo(wk.Started))
		}
	}
	printTable(cols, printRows)
}
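
The workers command above feeds its rows into printTable, which aligns columns with text/tabwriter using a format string of repeated "%v\t" verbs plus a dashed separator row. A self-contained sketch of that mechanism follows; the helper body is copied from root.go above, while the sample columns and rows are made-up worker-like data, not output from a real server:

package main

import (
	"fmt"
	"io"
	"os"
	"strings"
	"text/tabwriter"
)

// printTable mirrors the helper shown above: one "%v\t" per column,
// a header row, a dashed separator row, then the caller-supplied rows.
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
	format := strings.Repeat("%v\t", len(cols)) + "\n"
	tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
	var headers []interface{}
	var seps []interface{}
	for _, name := range cols {
		headers = append(headers, name)
		seps = append(seps, strings.Repeat("-", len(name)))
	}
	fmt.Fprintf(tw, format, headers...)
	fmt.Fprintf(tw, format, seps...)
	printRows(tw, format)
	tw.Flush()
}

func main() {
	// Hypothetical worker rows used only to demonstrate column alignment.
	cols := []string{"Process", "ID", "Type", "Queue"}
	rows := [][]interface{}{
		{"host1:1234", "abc123", "email:welcome", "default"},
		{"host2:5678", "def456", "image:resize", "critical"},
	}
	printTable(cols, func(w io.Writer, tmpl string) {
		for _, r := range rows {
			fmt.Fprintf(w, tmpl, r...)
		}
	})
}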

View File

@@ -3,12 +3,20 @@ module github.com/hibiken/asynq/tools
 go 1.13

 require (
-	github.com/go-redis/redis/v7 v7.2.0
-	github.com/hibiken/asynq v0.4.0
+	github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6 // indirect
+	github.com/coreos/go-etcd v2.0.0+incompatible // indirect
+	github.com/cpuguy83/go-md2man v1.0.10 // indirect
+	github.com/fatih/color v1.9.0
+	github.com/go-redis/redis/v7 v7.4.0
+	github.com/google/uuid v1.1.1
+	github.com/hibiken/asynq v0.14.0
 	github.com/mitchellh/go-homedir v1.1.0
-	github.com/rs/xid v1.2.1
-	github.com/spf13/cobra v0.0.5
-	github.com/spf13/viper v1.6.2
+	github.com/spf13/cast v1.3.1
+	github.com/spf13/cobra v1.1.1
+	github.com/spf13/viper v1.7.0
+	github.com/ugorji/go v1.1.4 // indirect
+	github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8 // indirect
+	github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77 // indirect
 )

 replace github.com/hibiken/asynq => ./..

View File

@@ -1,89 +1,173 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis v6.15.7+incompatible h1:3skhDh95XQMpnqeqNftPkQD9jL9e5e36z/1SUm6dy1U=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.1.0 h1:I4C4a8UGbFejiVjtYVTRVOiMIJ5pm5Yru6ibvDX/OS0=
github.com/go-redis/redis/v7 v7.1.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4= github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hibiken/asynq v0.4.0 h1:NvAfYX0DRe04WgGMKRg5oX7bs6ktv2fu9YwB6O356FI= github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hibiken/asynq v0.4.0/go.mod h1:dtrVkxCsGPVhVNHMDXAH7lFq64kbj43+G6lt4FQZfW4= github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4= github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.11 h1:FxPOTFNqGkuDUGi3H/qkUbQO4ZiBa2brKq5r0l8TGeM=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE= github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc= github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -93,38 +177,50 @@ github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA= github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM= github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI= github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s= github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.1 h1:KfztREH0tPxJJ+geloSLaAkaPkr4ki2Er5quFV1TDo4=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk= github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg= github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k= github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E= github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k= github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s= github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -133,58 +229,143 @@ github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljT
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI= go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno= gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -192,4 +373,10 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo= gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=