mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 09:16:12 +08:00

Compare commits


108 Commits

Author SHA1 Message Date
Ken Hibino
04702ddfd2 Change ErrorHandler function signature 2020-07-04 05:53:50 -07:00
Ken Hibino
6705f7c27a Return Result struct to caller of Enqueue 2020-07-03 21:49:53 -07:00
Ken Hibino
e27ae0d33a Replace github.com/rs/xid with github.com/google/uuid 2020-07-02 06:38:13 -07:00
Ken Hibino
6cd0ab65a3 Add version command to CLI 2020-06-29 20:59:15 -07:00
Ken Hibino
83c9d5ae94 Add migrate command to CLI
The command converts all messages in redis to be compatible for asynq
v0.10.0
2020-06-29 06:11:47 -07:00
Ken Hibino
7eebbf181e Update docs 2020-06-29 06:11:47 -07:00
Ken Hibino
7b1770da96 Minor code cleanup 2020-06-29 06:11:47 -07:00
Ken Hibino
e2c5882368 Use int64 type for Timeout and Deadline in TaskMessage 2020-06-29 06:11:47 -07:00
Ken Hibino
50df107ace Clean up processor test 2020-06-29 06:11:47 -07:00
Ken Hibino
9699d196e5 Add recoverer 2020-06-29 06:11:47 -07:00
Ken Hibino
1c5f7a791b Add RDB.ListDeadlineExceeded 2020-06-29 06:11:47 -07:00
Ken Hibino
232efe8279 Fix processor 2020-06-29 06:11:47 -07:00
Ken Hibino
ef4a4a8334 Add deadline to syncRequest
- syncer will drop a request if its deadline has been exceeded
2020-06-29 06:11:47 -07:00
Ken Hibino
65e17a3469 Update processor to adapt for deadlines set change
- Processor dequeues tasks only when it's available to process
- Processor retries a task when its context's Done channel is closed
2020-06-29 06:11:47 -07:00
Ken Hibino
88d94a2a9d Update RDB.Requeue to remove message from deadlines set 2020-06-29 06:11:47 -07:00
Ken Hibino
7433b94aac Update RDB.Dequeue to return deadline as time.Time 2020-06-29 06:11:47 -07:00
Ken Hibino
08ac7793ab Update RDB.Kill to remove message from deadlines set 2020-06-29 06:11:47 -07:00
Ken Hibino
02b653df72 Update RDB.Retry to remove message from deadlines set 2020-06-29 06:11:47 -07:00
Ken Hibino
bee784c052 Update RDB.Done to remove message from deadlines set 2020-06-29 06:11:47 -07:00
Ken Hibino
4ea58052f8 Update RDB.Dequeue to return message and deadline 2020-06-29 06:11:47 -07:00
Ken Hibino
5afb4861a5 Add task message to deadlines set on dequeue
Updated dequeueCmd to decode the message and compute its deadline and add
the message to the Deadline set.
2020-06-29 06:11:47 -07:00
Ken Hibino
68e6b379fc Use default timeout of 30mins if both timeout and deadline are not
provided
2020-06-29 06:11:47 -07:00
Ken Hibino
0e70a14899 Change TaskMessage Timeout and Deadline to int
* This change breaks existing tasks in Redis
2020-06-29 06:11:47 -07:00
Ken Hibino
f01c7b8e66 Add redis key for deadlines in base package 2020-06-29 06:11:47 -07:00
Ken Hibino
4e5f596910 Fix Client.Enqueue to always call enqueue
Closes https://github.com/hibiken/asynq/issues/158
2020-06-14 05:54:18 -07:00
Ken Hibino
8bf5917cd9 v0.9.4 2020-06-13 06:27:28 -07:00
Ken Hibino
7f30fa2bb6 Fix requeue logic in processor 2020-06-13 06:22:32 -07:00
Ken Hibino
ade6e61f51 v0.9.3 2020-06-12 06:31:42 -07:00
Ken Hibino
a2abeedaa0 Fix JSON number ovewflow issue 2020-06-12 06:29:36 -07:00
lion.zhao
81bb52b08c processor: log detail err in markAsDone func 2020-06-10 05:57:31 -07:00
Ken Hibino
bc2a7635a0 v0.9.2 2020-06-08 06:23:02 -07:00
Ken Hibino
f65d408bf9 Update docs for pause feature 2020-06-08 06:22:14 -07:00
Ken Hibino
4749b4bbfc Add benchmark test to verify client enqueue performance while server is
running
2020-06-08 06:06:18 -07:00
Ken Hibino
06c4a1c7f8 Limit the number of tasks moved by CheckAndEnqueue to prevent a long
running script
2020-06-08 06:06:18 -07:00
Ken Hibino
8af4cbad51 Fix data race in test 2020-06-08 06:06:18 -07:00
Ken Hibino
4e800a7f68 Update stats command to show queue paused status 2020-06-08 06:06:18 -07:00
Ken Hibino
d6a5c84dc6 Add pause and unpause command to CLI 2020-06-08 06:06:18 -07:00
Ken Hibino
363cfedb49 Update Dequeue operation to skip paused queues 2020-06-08 06:06:18 -07:00
Ken Hibino
4595bd41c3 Add Pause and Unpause methods to rdb 2020-06-08 06:06:18 -07:00
Ken Hibino
e236d55477 Fix cli build 2020-06-04 06:35:50 -07:00
Ken Hibino
a38f628f3b Refactor server state management 2020-05-31 06:41:19 -07:00
Ken Hibino
69ad583278 v0.9.1 2020-05-29 05:42:40 -07:00
Ken Hibino
23f46dde52 Add helper functions to extract task metadata from context 2020-05-29 05:40:42 -07:00
lihe
39188fe930 remove typo and redundant code 2020-05-22 05:11:54 -07:00
Ken Hibino
4492ed9255 Change internal constructor signatures.
Created "params" type to avoid positional arguments.
Personally it feels more explicit and reads better.
2020-05-17 13:25:24 -07:00
Ken Hibino
4e3e053989 Update readme 2020-05-16 11:00:44 -07:00
Ken Hibino
aef0775c05 v0.9.0 2020-05-16 08:02:57 -07:00
Ken Hibino
de146993d2 Add log messages around Server.Quiet 2020-05-16 08:01:39 -07:00
Ken Hibino
60cbf8dc5a Minor code cleanup 2020-05-16 08:00:35 -07:00
Ken Hibino
fb38086590 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-16 08:00:35 -07:00
Ken Hibino
cfcd19a222 Change default log level to info 2020-05-16 08:00:35 -07:00
Ken Hibino
24ee4b9693 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-16 08:00:35 -07:00
Ken Hibino
7849b395bd Update changelog 2020-05-16 08:00:35 -07:00
Ken Hibino
fa3082e5bb Change LogLevel to satisfy flag.Value interface 2020-05-16 08:00:35 -07:00
Ken Hibino
d13f7e900f Allow setting minimum log level for logger 2020-05-16 08:00:35 -07:00
Ken Hibino
b63476ddc8 Simplify Logger interface 2020-05-16 08:00:35 -07:00
Ken Hibino
210b026b01 Add log messages around Server.Quiet 2020-05-16 07:12:08 -07:00
Ken Hibino
556b2103fe Minor code cleanup 2020-05-15 08:19:35 -07:00
Ken Hibino
0289bc7a10 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-11 20:28:49 -07:00
Ken Hibino
ae942c93e5 Change default log level to info 2020-05-11 20:28:49 -07:00
Ken Hibino
0faf97f146 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-11 06:22:43 -07:00
Ken Hibino
711bfa371f Update changelog 2020-05-11 06:22:43 -07:00
Ken Hibino
73d62844e6 Change LogLevel to satisfy flag.Value interface 2020-05-11 06:22:43 -07:00
Ken Hibino
00b82904c6 Allow setting minimum log level for logger 2020-05-11 06:22:43 -07:00
Ken Hibino
a866369866 Simplify Logger interface 2020-05-11 06:22:43 -07:00
Ken Hibino
26b78136ba v0.8.3 2020-05-08 06:21:01 -07:00
t-asaka
44aad7f037 Add redis conn close func to client 2020-05-08 06:15:14 -07:00
Ken Hibino
9884d5f2fa v0.8.2 2020-05-03 16:55:34 -07:00
Ken Hibino
826f1ecff4 Update docs 2020-05-03 16:54:39 -07:00
Ken Hibino
24f2b64c6c Make sure to invoke CancelFunc in all cases 2020-05-03 15:58:23 -07:00
Ken Hibino
1c1474c55c Add tests to simulate cases where server cannot talk to redis 2020-05-02 07:05:26 -07:00
Ken Hibino
5161b9368a Clean up tests 2020-05-02 07:05:26 -07:00
Ken Hibino
0c998a8e17 Add test for signal handling 2020-04-28 06:56:05 -07:00
Ken Hibino
49160f2536 v0.8.1 2020-04-27 06:49:12 -07:00
Ken Hibino
e33d297d8e Add SetDefaultOptions method to Client 2020-04-27 06:45:13 -07:00
Ken Hibino
eb8ced6bdd Add ParseRedisURI helper function 2020-04-25 13:06:20 -07:00
Ken Hibino
789a9fd711 Update readme 2020-04-20 07:52:26 -07:00
Ken Hibino
5924cdac33 Add example tests 2020-04-19 11:36:43 -07:00
Ken Hibino
442c9275a0 v0.8.0 2020-04-19 09:08:20 -07:00
Ken Hibino
a0865df33c Change default concurrency to the number of CPUs 2020-04-19 08:51:17 -07:00
Ken Hibino
431a96a1f7 Update changelog 2020-04-19 08:51:17 -07:00
Ken Hibino
74e5582cfc Update readme 2020-04-19 08:51:17 -07:00
Ken Hibino
bf542a781c Add failure test for heartbeater 2020-04-19 08:51:17 -07:00
Ken Hibino
7c7f8e5f30 Move Broker interface to base package 2020-04-19 08:51:17 -07:00
Ken Hibino
46ab4417dd Add test to simulate situation where redis is down 2020-04-19 08:51:17 -07:00
Ken Hibino
f8a94fb839 Define broker interface 2020-04-19 08:51:17 -07:00
Ken Hibino
42453280f4 Fix subscriber to not panic when it cannot establish pubsub channel on
startup
2020-04-19 08:51:17 -07:00
Ken Hibino
4ec2dc9e47 Minor reorganization in tests 2020-04-19 08:51:17 -07:00
Ken Hibino
45933eb6b0 Reword doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
4df372b369 Allow user to configure shutdown timeout 2020-04-19 08:51:17 -07:00
Ken Hibino
c688b8f4f9 Fix test for base package 2020-04-19 08:51:17 -07:00
Ken Hibino
eef2f5f3cb Add test cases for server error 2020-04-19 08:51:17 -07:00
Ken Hibino
239ef27a6e Update doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
24da281aa7 Update docs with new APIs 2020-04-19 08:51:17 -07:00
Ken Hibino
b086e88a47 Rename ps command to servers 2020-04-19 08:51:17 -07:00
Ken Hibino
cf61911a49 Update all reference to asynqmon to Asynq CLI 2020-04-19 08:51:17 -07:00
Ken Hibino
aafd8a5b74 Rename internal ProcessState to ServerState 2020-04-19 08:51:17 -07:00
Ken Hibino
4f11e52558 Rename CLI to asynq 2020-04-19 08:51:17 -07:00
Ken Hibino
b14c73809e Refactor server state 2020-04-19 08:51:17 -07:00
Ken Hibino
779065c269 Export Start, Stop and Quiet method on Server type 2020-04-19 08:51:17 -07:00
Ken Hibino
f9842ba914 Rename Background to Server 2020-04-19 08:51:17 -07:00
Ken Hibino
022dc29701 Add overview section in readme 2020-04-11 17:08:31 -07:00
Ken Hibino
40d1889ba0 Highlight stability and compatibility section in readme 2020-04-11 09:30:00 -07:00
Ken Hibino
7e96e893fe (fix): Change log messages depending on signals being handled 2020-04-10 08:56:01 -07:00
Ken Hibino
84b0c76c8b v0.7.1 2020-04-05 14:56:06 -07:00
Ken Hibino
60b887b8e3 Fix singnal handling for different systems 2020-04-05 14:37:23 -07:00
Ken Hibino
7864bea55c Update readme
Add features section
2020-03-28 08:44:06 -07:00
Apos Spanos
47220554ca Correct typo 2020-03-23 13:47:05 -07:00
72 changed files with 5653 additions and 2035 deletions

.gitignore

@@ -15,7 +15,7 @@
 /examples
 # Ignore command binary
-/tools/asynqmon/asynqmon
+/tools/asynq/asynq
-# Ignore asynqmon config file
+# Ignore asynq config file
-.asynqmon.*
+.asynq.*


@@ -5,6 +5,7 @@ git:
 go: [1.13.x, 1.14.x]
 script:
 - go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
+- go test -run=XXX -bench=. -loglevel=debug ./...
 services:
 - redis-server
 after_success:


@@ -3,13 +3,16 @@ if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
 cd ${TRAVIS_BUILD_DIR}/.. && \
 git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
 cd "${TRAVIS_REPO_SLUG}-bench" && \
 # Benchmark master
 git checkout master && \
 go test -run=XXX -bench=. ./... > master.txt && \
 # Benchmark feature branch
 git checkout ${TRAVIS_COMMIT} && \
 go test -run=XXX -bench=. ./... > feature.txt && \
+go get -u golang.org/x/tools/cmd/benchcmp && \
 # compare two benchmarks
-go get -u golang.org/x/tools/cmd/benchcmp && \
 benchcmp master.txt feature.txt;
 fi


@@ -7,6 +7,92 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- All tasks now require a timeout or deadline. By default, the timeout is set to 30 minutes.
- Tasks that exceed their deadline are automatically retried.
- The encoding schema for task messages has changed. Please install the latest CLI and run the `migrate` command if
  you have tasks enqueued with a previous version of asynq.
- The API of `(*Client).Enqueue`, `(*Client).EnqueueIn`, and `(*Client).EnqueueAt` has changed to return a `*Result` (see the sketch below).
- The API of `ErrorHandler` has changed. It now takes a context as the first argument, and `retried` and `maxRetry` were removed from the argument list.
  Use `GetRetryCount` and/or `GetMaxRetry` to get those values.
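To make these two API changes concrete, here is a minimal sketch. It assumes the `*Result` fields and the `GetRetryCount`/`GetMaxRetry` helpers shown elsewhere in this diff, and an `ErrorHandler` that now receives `(ctx, task, err)`; the exact released signature may differ slightly, and `notifyOnCall` is a hypothetical reporting hook.

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	r := asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}

	// Enqueue now returns a *Result carrying the task ID and the effective
	// retry, timeout, and deadline values.
	client := asynq.NewClient(r)
	defer client.Close()
	res, err := client.Enqueue(asynq.NewTask("email:deliver", map[string]interface{}{"user_id": 42}))
	if err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	log.Printf("enqueued: id=%s queue=%s timeout=%v", res.ID, res.Queue, res.Timeout)

	// ErrorHandler now takes a context; retry counts come from the context
	// helpers instead of extra arguments (signature assumed from the notes above).
	srv := asynq.NewServer(r, asynq.Config{
		Concurrency: 10,
		ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) {
			retried, _ := asynq.GetRetryCount(ctx)
			maxRetry, _ := asynq.GetMaxRetry(ctx)
			if retried >= maxRetry {
				notifyOnCall(task.Type, err) // hypothetical reporting hook
			}
		}),
	})
	_ = srv // start with srv.Run(mux) in a real program
}

// notifyOnCall stands in for whatever error-reporting service you use.
func notifyOnCall(taskType string, err error) {
	log.Printf("task %q failed permanently: %v", taskType, err)
}
```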
## [0.9.4] - 2020-06-13
### Fixed
- Fixes issue of same tasks processed by more than one worker (https://github.com/hibiken/asynq/issues/90).
## [0.9.3] - 2020-06-12
### Fixed
- Fixes the JSON number overflow issue (https://github.com/hibiken/asynq/issues/166).
## [0.9.2] - 2020-06-08
### Added
- The `pause` and `unpause` commands were added to the CLI. See README for the CLI for details.
## [0.9.1] - 2020-05-29
### Added
- `GetTaskID`, `GetRetryCount`, and `GetMaxRetry` functions were added to extract task metadata from context.
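For illustration, a handler can read this metadata as in the sketch below; it assumes each helper returns a value plus an `ok` flag, matching the `GetRetryCount` usage in the benchmark changes later in this diff.

```go
package tasks

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// HandleEmailDeliveryTask logs which attempt is running by reading task
// metadata from the context before doing the actual work.
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
	id, _ := asynq.GetTaskID(ctx)
	retried, _ := asynq.GetRetryCount(ctx)
	maxRetry, _ := asynq.GetMaxRetry(ctx)
	log.Printf("processing task id=%s attempt %d of %d", id, retried+1, maxRetry+1)
	// ... email delivery logic ...
	return nil
}
```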
## [0.9.0] - 2020-05-16
### Changed
- `Logger` interface has changed. Please see the godoc for the new interface.
### Added
- `LogLevel` type is added. Server's log level can be specified through `LogLevel` field in `Config`.
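A minimal sketch of configuring the level; `DebugLevel` is assumed here by analogy with the `FatalLevel` constant used in this diff's test helpers.

```go
package worker

import "github.com/hibiken/asynq"

// NewVerboseServer returns a server that logs at debug level, handy during
// local development.
func NewVerboseServer() *asynq.Server {
	return asynq.NewServer(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, asynq.Config{
		Concurrency: 10,
		LogLevel:    asynq.DebugLevel, // assumed constant; FatalLevel appears in the tests below
	})
}
```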
## [0.8.3] - 2020-05-08
### Added
- `Close` method is added to `Client`.
## [0.8.2] - 2020-05-03
### Fixed
- [Fixed cancelfunc leak](https://github.com/hibiken/asynq/pull/145)
## [0.8.1] - 2020-04-27
### Added
- `ParseRedisURI` helper function is added to create a `RedisConnOpt` from a URI string.
- `SetDefaultOptions` method is added to `Client`.
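A short usage sketch of the two additions; both signatures appear in full later in this diff, so only the wiring is shown here.

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	// Build a RedisConnOpt from a URI instead of filling in the struct by hand.
	opt, err := asynq.ParseRedisURI("redis://:mypassword@localhost:6379/2")
	if err != nil {
		log.Fatalf("could not parse redis uri: %v", err)
	}

	client := asynq.NewClient(opt)
	defer client.Close()

	// Register default options once per task type; options passed at enqueue
	// time still override them.
	client.SetDefaultOptions("image:process", asynq.MaxRetry(10), asynq.Timeout(time.Minute))
}
```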
## [0.8.0] - 2020-04-19
### Changed
- `Background` type is renamed to `Server`.
  - To upgrade from the previous version, update `NewBackground` to `NewServer` and pass `Config` by value (see the sketch below).
- CLI is renamed to `asynq`.
  - To upgrade the CLI to the latest version, run `go get -u github.com/hibiken/asynq/tools/asynq`
  - The `ps` command in CLI is renamed to `servers`
- `Concurrency` defaults to the number of CPUs when unset or set to a negative value.
### Added
- `ShutdownTimeout` field is added to `Config` to specify timeout duration used during graceful shutdown.
- New `Server` type exposes `Start`, `Stop`, and `Quiet` as well as `Run`.
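As referenced above, a minimal before/after sketch of the upgrade; the new form matches the README changes later in this diff.

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	r := asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}

	// Before v0.8.0 (roughly):
	//   bg := asynq.NewBackground(r, &asynq.Config{Concurrency: 10})
	//   bg.Run(mux)

	// From v0.8.0 on, Config is passed by value and Run returns an error.
	srv := asynq.NewServer(r, asynq.Config{Concurrency: 10})

	mux := asynq.NewServeMux()
	// mux.HandleFunc("email:deliver", tasks.HandleEmailDeliveryTask)

	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```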
## [0.7.1] - 2020-04-05
### Fixed
- Fixed signal handling for Windows.
## [0.7.0] - 2020-03-22
### Changed

README.md

@@ -7,12 +7,42 @@
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community) [![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq) [![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)
-Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
-It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
-**Important Note**: Current major version is zero (v0.x.x) to accomodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before v1.0.0 release.
-![Task Queue Diagram](/docs/assets/task-queue.png)
+## Overview
+Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
+High-level overview of how Asynq works:
- Client puts task on a queue
- Server pulls task off queues and starts a worker goroutine for each task
- Tasks are processed concurrently by multiple workers
Task queues are used as a mechanism to distribute work across multiple machines.
A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
![Task Queue Diagram](/docs/assets/overview.png)
## Stability and Compatibility
**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users (feedback on APIs is appreciated!). The public API could change without a major version update before the v1.0.0 release.
**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
## Features
- Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
- Scheduling of tasks
- Durability since tasks are written to Redis
- [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
- Low latency to add a task since writes are fast in Redis
- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Quickstart
@@ -22,7 +52,7 @@ First, make sure you are running a Redis server locally.
$ redis-server
```
-Next, write a package that encapslates task creation and task handling.
+Next, write a package that encapsulates task creation and task handling.
```go ```go
package tasks package tasks
@@ -33,13 +63,16 @@ import (
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
) )
// A list of background task types. // A list of task types.
const ( const (
EmailDelivery = "email:deliver" EmailDelivery = "email:deliver"
ImageProcessing = "image:process" ImageProcessing = "image:process"
) )
// Write function NewXXXTask to create a task. //----------------------------------------------
// Write a function NewXXXTask to create a task.
// A task consists of a type and a payload.
//----------------------------------------------
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task { func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
payload := map[string]interface{}{"user_id": userID, "template_id": tmplID} payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
@@ -51,8 +84,13 @@ func NewImageProcessingTask(src, dst string) *asynq.Task {
return asynq.NewTask(ImageProcessing, payload) return asynq.NewTask(ImageProcessing, payload)
} }
// Write function HandleXXXTask to handle the given task. //---------------------------------------------------------------
// NOTE: It satisfies the asynq.HandlerFunc interface. // Write a function HandleXXXTask to handle the input task.
// Note that it satisfies the asynq.HandlerFunc interface.
//
// Handler doesn't need to be a function. You can define a type
// that satisfies asynq.Handler interface. See examples below.
//---------------------------------------------------------------
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error { func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
userID, err := t.Payload.GetInt("user_id") userID, err := t.Payload.GetInt("user_id")
@@ -68,7 +106,12 @@ func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
return nil return nil
} }
-func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
+// ImageProcessor implements asynq.Handler interface.
+type ImageProcessor struct {
// ... fields for struct
}
func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
src, err := t.Payload.GetString("src") src, err := t.Payload.GetString("src")
if err != nil { if err != nil {
return err return err
@@ -81,10 +124,14 @@ func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
// Image processing logic ... // Image processing logic ...
return nil return nil
} }
func NewImageProcessor() *ImageProcessor {
// ... return an instance
}
``` ```
-In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to enqueue tasks to the task queue.
-A task will be processed by a background worker as soon as the task gets enqueued.
+In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
+A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
```go ```go
@@ -100,40 +147,66 @@ import (
const redisAddr = "127.0.0.1:6379" const redisAddr = "127.0.0.1:6379"
func main() { func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr} r := asynq.RedisClientOpt{Addr: redisAddr}
c := asynq.NewClient(r) c := asynq.NewClient(r)
defer c.Close()
// ------------------------------------------------------
// Example 1: Enqueue task to be processed immediately. // Example 1: Enqueue task to be processed immediately.
// Use (*Client).Enqueue method.
// ------------------------------------------------------
t := tasks.NewEmailDeliveryTask(42, "some:template:id") t := tasks.NewEmailDeliveryTask(42, "some:template:id")
err := c.Enqueue(t) res, err := c.Enqueue(t)
if err != nil { if err != nil {
log.Fatal("could not enqueue task: %v", err) log.Fatal("could not enqueue task: %v", err)
} }
fmt.Printf("Enqueued Result: %+v\n", res)
// ------------------------------------------------------------
// Example 2: Schedule task to be processed in the future. // Example 2: Schedule task to be processed in the future.
// Use (*Client).EnqueueIn or (*Client).EnqueueAt.
// ------------------------------------------------------------
t = tasks.NewEmailDeliveryTask(42, "other:template:id") t = tasks.NewEmailDeliveryTask(42, "other:template:id")
err = c.EnqueueIn(24*time.Hour, t) res, err = c.EnqueueIn(24*time.Hour, t)
if err != nil { if err != nil {
log.Fatal("could not schedule task: %v", err) log.Fatal("could not schedule task: %v", err)
} }
fmt.Printf("Enqueued Result: %+v\n", res)
// Example 3: Pass options to tune task processing behavior. // ----------------------------------------------------------------------------
// Options include MaxRetry, Queue, Timeout, Deadline, etc. // Example 3: Set options to tune task processing behavior.
// Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
// ----------------------------------------------------------------------------
c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url") t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute)) res, err = c.Enqueue(t)
if err != nil { if err != nil {
log.Fatal("could not enqueue task: %v", err) log.Fatal("could not enqueue task: %v", err)
} }
fmt.Printf("Enqueued Result: %+v\n", res)
// ---------------------------------------------------------------------------
// Example 4: Pass options to tune task processing behavior at enqueue time.
// Options passed at enqueue time override default ones, if any.
// ---------------------------------------------------------------------------
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
fmt.Printf("Enqueued Result: %+v\n", res)
} }
``` ```
-Next, create a binary to process these tasks in the background.
-To start the background workers, use [`Background`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Background) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
+Next, create a worker server to process these tasks in the background.
+To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
@@ -141,6 +214,8 @@ You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?
package main package main
import ( import (
"log"
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
"your/app/package/tasks" "your/app/package/tasks"
) )
@@ -148,9 +223,9 @@ import (
const redisAddr = "127.0.0.1:6379" const redisAddr = "127.0.0.1:6379"
func main() { func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr} r := asynq.RedisClientOpt{Addr: redisAddr}
bg := asynq.NewBackground(r, &asynq.Config{ srv := asynq.NewServer(r, asynq.Config{
// Specify how many concurrent workers to use // Specify how many concurrent workers to use
Concurrency: 10, Concurrency: 10,
// Optionally specify multiple queues with different priority. // Optionally specify multiple queues with different priority.
@@ -165,10 +240,12 @@ func main() {
// mux maps a type to a handler // mux maps a type to a handler
mux := asynq.NewServeMux() mux := asynq.NewServeMux()
mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask) mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
mux.HandleFunc(tasks.ImageProcessing, tasks.HandleImageProcessingTask) mux.Handle(tasks.ImageProcessing, tasks.NewImageProcessor())
// ...register other handlers... // ...register other handlers...
bg.Run(mux) if err := srv.Run(mux); err != nil {
log.Fatalf("could not run server: %v", err)
}
} }
``` ```
@@ -184,7 +261,7 @@ Here's an example of running the `stats` command.
![Gif](/docs/assets/demo.gif)
-For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
+For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
## Installation
@@ -197,7 +274,7 @@ go get -u github.com/hibiken/asynq
To install the CLI tool, run the following command:
```sh
-go get -u github.com/hibiken/asynq/tools/asynqmon
+go get -u github.com/hibiken/asynq/tools/asynq
```
## Requirements ## Requirements
@@ -216,7 +293,7 @@ Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
-- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
+- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
## License


@@ -7,6 +7,9 @@ package asynq
 import (
 	"crypto/tls"
 	"fmt"
+	"net/url"
+	"strconv"
+	"strings"
 	"github.com/go-redis/redis/v7"
 )
@@ -94,6 +97,79 @@ type RedisFailoverClientOpt struct {
 	TLSConfig *tls.Config
 }
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed.
//
// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
// Supported formats are:
// redis://[:password@]host[:port][/dbnumber]
// redis-socket://[:password@]path[?db=dbnumber]
// redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
u, err := url.Parse(uri)
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
}
switch u.Scheme {
case "redis":
return parseRedisURI(u)
case "redis-socket":
return parseRedisSocketURI(u)
case "redis-sentinel":
return parseRedisSentinelURI(u)
default:
return nil, fmt.Errorf("asynq: unsupported uri scheme: %q", u.Scheme)
}
}
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
var db int
var err error
if len(u.Path) > 0 {
xs := strings.Split(strings.Trim(u.Path, "/"), "/")
db, err = strconv.Atoi(xs[0])
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: database number should be the first segment of the path")
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Addr: u.Host, DB: db, Password: password}, nil
}
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
const errPrefix = "asynq: could not parse redis socket uri"
if len(u.Path) == 0 {
return nil, fmt.Errorf("%s: path does not exist", errPrefix)
}
q := u.Query()
var db int
var err error
if n := q.Get("db"); n != "" {
db, err = strconv.Atoi(n)
if err != nil {
return nil, fmt.Errorf("%s: query param `db` should be a number", errPrefix)
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Network: "unix", Addr: u.Path, DB: db, Password: password}, nil
}
func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
addrs := strings.Split(u.Host, ",")
master := u.Query().Get("master")
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, Password: password}, nil
}
// createRedisClient returns a redis client given a redis connection configuration. // createRedisClient returns a redis client given a redis connection configuration.
// //
// Passing an unexpected type as a RedisConnOpt argument will cause panic. // Passing an unexpected type as a RedisConnOpt argument will cause panic.


@@ -5,7 +5,7 @@
package asynq package asynq
import ( import (
"os" "flag"
"sort" "sort"
"testing" "testing"
@@ -15,16 +15,28 @@ import (
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
) )
// This file defines test helper functions used by //============================================================================
// other test files. // This file defines helper functions and variables used in other test files.
//============================================================================
// redis used for package testing. // variables used for package testing.
const ( var (
redisAddr = "localhost:6379" redisAddr string
redisDB = 14 redisDB int
testLogLevel = FatalLevel
) )
var testLogger = log.NewLogger(os.Stderr) var testLogger *log.Logger
func init() {
flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing")
flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing")
flag.Var(&testLogLevel, "loglevel", "log level to use in testing")
testLogger = log.NewLogger(nil)
testLogger.SetLevel(toInternalLogLevel(testLogLevel))
}
func setup(tb testing.TB) *redis.Client { func setup(tb testing.TB) *redis.Client {
tb.Helper() tb.Helper()
@@ -44,3 +56,106 @@ var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
}) })
return out return out
}) })
func TestParseRedisURI(t *testing.T) {
tests := []struct {
uri string
want RedisConnOpt
}{
{
"redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"},
},
{
"redis://localhost:6379/3",
RedisClientOpt{Addr: "localhost:6379", DB: 3},
},
{
"redis://:mypassword@localhost:6379",
RedisClientOpt{Addr: "localhost:6379", Password: "mypassword"},
},
{
"redis://:mypassword@127.0.0.1:6379/11",
RedisClientOpt{Addr: "127.0.0.1:6379", Password: "mypassword", DB: 11},
},
{
"redis-socket:///var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock"},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword"},
},
{
"redis-socket:///var/run/redis/redis.sock?db=7",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", DB: 7},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock?db=12",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword", DB: 12},
},
{
"redis-sentinel://localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
},
},
{
"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
Password: "mypassword",
},
},
}
for _, tc := range tests {
got, err := ParseRedisURI(tc.uri)
if err != nil {
t.Errorf("ParseRedisURI(%q) returned an error: %v", tc.uri, err)
continue
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
}
}
}
func TestParseRedisURIErrors(t *testing.T) {
tests := []struct {
desc string
uri string
}{
{
"unsupported scheme",
"rdb://localhost:6379",
},
{
"missing scheme",
"localhost:6379",
},
{
"multiple db numbers",
"redis://localhost:6379/1,2,3",
},
{
"missing path for socket connection",
"redis-socket://?db=one",
},
{
"non integer for db numbers for socket",
"redis-socket:///some/path/to/redis?db=one",
},
}
for _, tc := range tests {
_, err := ParseRedisURI(tc.uri)
if err == nil {
t.Errorf("%s: ParseRedisURI(%q) succeeded for malformed input, want error",
tc.desc, tc.uri)
}
}
}


@@ -1,313 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"math"
"math/rand"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
// Background is responsible for managing the background-task processing.
//
// Background manages task queues to process tasks.
// If the processing of a task is unsuccessful, background will
// schedule it for a retry until either the task gets processed successfully
// or it exhausts its max retry count.
//
// Once a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Background struct {
mu sync.Mutex
running bool
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
logger Logger
rdb *rdb.RDB
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
}
// Config specifies the background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewBackground will overwrite the value to one.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the background will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority is processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the background instance.
//
// If unset, default logger is used.
Logger Logger
}
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as a ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is a ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
}
// Logger implements logging with various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(format string, args ...interface{})
// Info logs a message at Info level.
Info(format string, args ...interface{})
// Warn logs a message at Warning level.
Warn(format string, args ...interface{})
// Error logs a message at Error level.
Error(format string, args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(format string, args ...interface{})
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
// NewBackground returns a new Background given a redis connection option
// and background processing configuration.
func NewBackground(r RedisConnOpt, cfg *Config) *Background {
n := cfg.Concurrency
if n < 1 {
n = 1
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
logger := cfg.Logger
if logger == nil {
logger = log.NewLogger(os.Stderr)
}
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
pid := os.Getpid()
rdb := rdb.NewRDB(createRedisClient(r))
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest)
cancels := base.NewCancelations()
syncer := newSyncer(logger, syncCh, 5*time.Second)
heartbeater := newHeartbeater(logger, rdb, ps, 5*time.Second)
scheduler := newScheduler(logger, rdb, 5*time.Second, queues)
processor := newProcessor(logger, rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(logger, rdb, cancels)
return &Background{
logger: logger,
rdb: rdb,
ps: ps,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all pending workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
type prefixLogger interface {
SetPrefix(prefix string)
}
// If logger supports setting prefix, then set prefix for log output.
if l, ok := bg.logger.(prefixLogger); ok {
l.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
}
bg.logger.Info("Starting processing")
bg.start(handler)
defer bg.stop()
bg.logger.Info("Send signal TSTP to stop processing new tasks")
bg.logger.Info("Send signal TERM or INT to terminate the process")
// Wait for a signal to terminate.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
for {
sig := <-sigs
if sig == syscall.SIGTSTP {
bg.processor.stop()
bg.ps.SetStatus(base.StatusStopped)
continue
}
break
}
fmt.Println()
bg.logger.Info("Starting graceful shutdown")
}
// starts the background-task processing.
func (bg *Background) start(handler Handler) {
bg.mu.Lock()
defer bg.mu.Unlock()
if bg.running {
return
}
bg.running = true
bg.processor.handler = handler
bg.heartbeater.start(&bg.wg)
bg.subscriber.start(&bg.wg)
bg.syncer.start(&bg.wg)
bg.scheduler.start(&bg.wg)
bg.processor.start(&bg.wg)
}
// stops the background-task processing.
func (bg *Background) stop() {
bg.mu.Lock()
defer bg.mu.Unlock()
if !bg.running {
return
}
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
//
// processor -> syncer (via syncCh)
bg.scheduler.terminate()
bg.processor.terminate()
bg.syncer.terminate()
bg.subscriber.terminate()
bg.heartbeater.terminate()
bg.wg.Wait()
bg.rdb.Close()
bg.running = false
bg.logger.Info("Bye!")
}


@@ -1,128 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/goleak"
)
func TestBackground(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
client := NewClient(r)
bg := NewBackground(r, &Config{
Concurrency: 10,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}


@@ -7,7 +7,6 @@ package asynq
import ( import (
"context" "context"
"fmt" "fmt"
"math/rand"
"sync" "sync"
"testing" "testing"
"time" "time"
@@ -24,16 +23,17 @@ func BenchmarkEndToEndSimple(b *testing.B) {
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
@@ -46,11 +46,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -60,29 +60,29 @@ func BenchmarkEndToEnd(b *testing.B) {
const count = 100000 const count = 100000
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
rand.Seed(time.Now().UnixNano())
setup(b) setup(b)
redis := &RedisClientOpt{ redis := &RedisClientOpt{
Addr: redisAddr, Addr: redisAddr,
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil { if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
@@ -90,8 +90,16 @@ func BenchmarkEndToEnd(b *testing.B) {
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(count * 2) wg.Add(count * 2)
handler := func(ctx context.Context, t *Task) error { handler := func(ctx context.Context, t *Task) error {
// randomly fail 1% of tasks n, err := t.Payload.GetInt("data")
if rand.Intn(100) == 1 { if err != nil {
b.Logf("internal error: %v", err)
}
retried, ok := GetRetryCount(ctx)
if !ok {
b.Logf("internal error: %v", err)
}
// Fail 1% of tasks for the first attempt.
if retried == 0 && n%100 == 0 {
return fmt.Errorf(":(") return fmt.Errorf(":(")
} }
wg.Done() wg.Done()
@@ -99,11 +107,11 @@ func BenchmarkEndToEnd(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -124,30 +132,31 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
Queues: map[string]int{ Queues: map[string]int{
"high": 6, "high": 6,
"default": 3, "default": 3,
"low": 1, "low": 1,
}, },
LogLevel: testLogLevel,
}) })
// Create a bunch of tasks // Create a bunch of tasks
for i := 0; i < highCount; i++ { for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil { if _, err := client.Enqueue(t, Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < defaultCount; i++ { for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil { if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
for i := 0; i < lowCount; i++ { for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil { if _, err := client.Enqueue(t, Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
@@ -160,11 +169,70 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown
}
}
// E2E benchmark to check client enqueue operation performs correctly,
// while server is busy processing tasks.
func BenchmarkClientWhileServerRunning(b *testing.B) {
const count = 10000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
},
LogLevel: testLogLevel,
})
// Enqueue 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
// Schedule 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
handler := func(ctx context.Context, t *Task) error {
return nil
}
srv.Start(HandlerFunc(handler))
b.StartTimer() // end setup
b.Log("Starting enqueueing")
enqueued := 0
for enqueued < 100000 {
t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
if _, err := client.Enqueue(t); err != nil {
b.Logf("could not enqueue task %d: %v", enqueued, err)
continue
}
enqueued++
}
b.Logf("Finished enqueueing %d tasks", enqueued)
b.StopTimer() // begin teardown
srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }

client.go

@@ -9,11 +9,12 @@ import (
"fmt" "fmt"
"sort" "sort"
"strings" "strings"
"sync"
"time" "time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
) )
// A Client is responsible for scheduling tasks. // A Client is responsible for scheduling tasks.
@@ -23,13 +24,18 @@ import (
// //
// Clients are safe for concurrent use by multiple goroutines. // Clients are safe for concurrent use by multiple goroutines.
type Client struct { type Client struct {
rdb *rdb.RDB mu sync.Mutex
opts map[string][]Option
rdb *rdb.RDB
} }
// NewClient and returns a new Client given a redis connection option. // NewClient and returns a new Client given a redis connection option.
func NewClient(r RedisConnOpt) *Client { func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r)) rdb := rdb.NewRDB(createRedisClient(r))
return &Client{rdb} return &Client{
opts: make(map[string][]Option),
rdb: rdb,
}
} }
// Option specifies the task processing behavior. // Option specifies the task processing behavior.
@@ -63,13 +69,23 @@ func Queue(name string) Option {
} }
// Timeout returns an option to specify how long a task may run. // Timeout returns an option to specify how long a task may run.
// If the timeout elapses before the Handler returns, then the task
// will be retried.
// //
// Zero duration means no limit. // Zero duration means no limit.
//
// If there's a conflicting Deadline option, whichever comes earliest
// will be used.
func Timeout(d time.Duration) Option { func Timeout(d time.Duration) Option {
return timeoutOption(d) return timeoutOption(d)
} }
// Deadline returns an option to specify the deadline for the given task. // Deadline returns an option to specify the deadline for the given task.
// If it reaches the deadline before the Handler returns, then the task
// will be retried.
//
// If there's a conflicting Timeout option, whichever comes earliest
// will be used.
func Deadline(t time.Time) Option { func Deadline(t time.Time) Option {
return deadlineOption(t) return deadlineOption(t)
} }
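To illustrate the interplay described in the two option comments above, a small sketch (assuming the post-change `Enqueue` that returns a `*Result`, as elsewhere in this diff): the effective limit is whichever of `now+timeout` and `deadline` comes first.

```go
package tasks

import (
	"fmt"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// enqueueWithBounds sets both options; whichever yields the earlier effective
// deadline wins, i.e. min(now+timeout, deadline) bounds the run.
func enqueueWithBounds(client *asynq.Client, task *asynq.Task) {
	res, err := client.Enqueue(task,
		asynq.Timeout(10*time.Minute),
		asynq.Deadline(time.Now().Add(30*time.Second)), // earlier than the timeout, so this wins
	)
	if err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	fmt.Printf("enqueued with timeout=%v deadline=%v\n", res.Timeout, res.Deadline)
}
```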
@@ -104,7 +120,7 @@ func composeOptions(opts ...Option) option {
res := option{ res := option{
retry: defaultMaxRetry, retry: defaultMaxRetry,
queue: base.DefaultQueueName, queue: base.DefaultQueueName,
-timeout: 0,
+timeout: 0, // do not set to defaultTimeout here
deadline: time.Time{}, deadline: time.Time{},
} }
for _, opt := range opts { for _, opt := range opts {
@@ -160,38 +176,68 @@ func serializePayload(payload map[string]interface{}) string {
} }
const ( const (
// Max retry count by default // Default max retry count used if nothing is specified.
defaultMaxRetry = 25 defaultMaxRetry = 25
// Default timeout used if both timeout and deadline are not specified.
defaultTimeout = 30 * time.Minute
) )
// Value zero indicates no timeout and no deadline.
var (
noTimeout time.Duration = 0
noDeadline time.Time = time.Unix(0, 0)
)
// SetDefaultOptions sets options to be used for a given task type.
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
//
// Default options can be overridden by options passed at enqueue time.
func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
c.mu.Lock()
defer c.mu.Unlock()
c.opts[taskType] = opts
}
// A Result holds enqueued task's metadata.
type Result struct {
// ID is a unique identifier for the task.
ID string
// Retry is the maximum number of retry for the task.
Retry int
// Queue is a name of the queue the task is enqueued to.
Queue string
// Timeout is the timeout value for the task.
// Counting for timeout starts when a worker starts processing the task.
// If task processing doesn't complete within the timeout, the task will be retried.
// The value zero means no timeout.
//
// If deadline is set, min(now+timeout, deadline) is used, where the now is the time when
// a worker starts processing the task.
Timeout time.Duration
// Deadline is the deadline value for the task.
// If task processing doesn't complete before the deadline, the task will be retried.
// The value time.Unix(0, 0) means no deadline.
//
// If timeout is set, min(now+timeout, deadline) is used, where now is the time when
// a worker starts processing the task.
Deadline time.Time
}
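For illustration only, a caller might log the returned metadata like this (the log format is a sketch, not part of the API; a task t and a client are assumed to be in scope):

    res, err := client.Enqueue(t)
    if err != nil {
        log.Fatalf("could not enqueue task: %v", err)
    }
    log.Printf("enqueued: id=%s queue=%s retry=%d timeout=%v deadline=%v",
        res.ID, res.Queue, res.Retry, res.Timeout, res.Deadline)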
// EnqueueAt schedules task to be enqueued at the specified time. // EnqueueAt schedules task to be enqueued at the specified time.
// //
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error. // EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
// //
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error { // By default, max retry is set to 25 and timeout is set to 30 minutes.
opt := composeOptions(opts...) func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
msg := &base.TaskMessage{ return c.enqueueAt(t, task, opts...)
ID: xid.New(),
Type: task.Type,
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Timeout: opt.timeout.String(),
Deadline: opt.deadline.Format(time.RFC3339),
UniqueKey: uniqueKey(task, opt.uniqueTTL, opt.queue),
}
var err error
if time.Now().After(t) {
err = c.enqueue(msg, opt.uniqueTTL)
} else {
err = c.schedule(msg, t, opt.uniqueTTL)
}
if err == rdb.ErrDuplicateTask {
return fmt.Errorf("%w", ErrDuplicateTask)
}
return err
} }
// Enqueue enqueues task to be processed immediately. // Enqueue enqueues task to be processed immediately.
@@ -200,8 +246,9 @@ func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
// //
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error { // By default, max retry is set to 25 and timeout is set to 30 minutes.
return c.EnqueueAt(time.Now(), task, opts...) func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now(), task, opts...)
} }
// EnqueueIn schedules task to be enqueued after the specified delay. // EnqueueIn schedules task to be enqueued after the specified delay.
@@ -210,8 +257,65 @@ func (c *Client) Enqueue(task *Task, opts ...Option) error {
// //
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error { // By default, max retry is set to 25 and timeout is set to 30 minutes.
return c.EnqueueAt(time.Now().Add(d), task, opts...) func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now().Add(d), task, opts...)
}
// Close closes the connection with redis server.
func (c *Client) Close() error {
return c.rdb.Close()
}
func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
c.mu.Lock()
defer c.mu.Unlock()
if defaults, ok := c.opts[task.Type]; ok {
opts = append(defaults, opts...)
}
opt := composeOptions(opts...)
deadline := noDeadline
if !opt.deadline.IsZero() {
deadline = opt.deadline
}
timeout := noTimeout
if opt.timeout != 0 {
timeout = opt.timeout
}
if deadline.Equal(noDeadline) && timeout == noTimeout {
// If neither deadline nor timeout are set, use default timeout.
timeout = defaultTimeout
}
msg := &base.TaskMessage{
ID: uuid.New(),
Type: task.Type,
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Deadline: deadline.Unix(),
Timeout: int64(timeout.Seconds()),
UniqueKey: uniqueKey(task, opt.uniqueTTL, opt.queue),
}
var err error
now := time.Now()
if t.Before(now) || t.Equal(now) {
err = c.enqueue(msg, opt.uniqueTTL)
} else {
err = c.schedule(msg, t, opt.uniqueTTL)
}
switch {
case err == rdb.ErrDuplicateTask:
return nil, fmt.Errorf("%w", ErrDuplicateTask)
case err != nil:
return nil, err
}
return &Result{
ID: msg.ID.String(),
Queue: msg.Queue,
Retry: msg.Retry,
Timeout: timeout,
Deadline: deadline,
}, nil
} }
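To make the default resolution above concrete, a sketch of the values a caller observes when neither Timeout nor Deadline is supplied (constants as defined in this file; the task type and a client in scope are assumptions):

    res, err := client.Enqueue(asynq.NewTask("example:noop", nil))
    if err != nil {
        log.Fatal(err)
    }
    // res.Timeout == 30 * time.Minute              (defaultTimeout)
    // res.Deadline.Equal(time.Unix(0, 0)) == true  (noDeadline)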
func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error { func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {


@@ -27,9 +27,6 @@ func TestClientEnqueueAt(t *testing.T) {
var ( var (
now = time.Now() now = time.Now()
oneHourLater = now.Add(time.Hour) oneHourLater = now.Add(time.Hour)
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
) )
tests := []struct { tests := []struct {
@@ -37,6 +34,7 @@ func TestClientEnqueueAt(t *testing.T) {
task *Task task *Task
processAt time.Time processAt time.Time
opts []Option opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry wantScheduled []h.ZSetEntry
}{ }{
@@ -45,6 +43,12 @@ func TestClientEnqueueAt(t *testing.T) {
task: task, task: task,
processAt: now, processAt: now,
opts: []Option{}, opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -52,18 +56,24 @@ func TestClientEnqueueAt(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
}, },
{ {
desc: "Schedule task to be processed in the future", desc: "Schedule task to be processed in the future",
task: task, task: task,
processAt: oneHourLater, processAt: oneHourLater,
opts: []Option{}, opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{ wantScheduled: []h.ZSetEntry{
{ {
@@ -72,8 +82,8 @@ func TestClientEnqueueAt(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
Score: float64(oneHourLater.Unix()), Score: float64(oneHourLater.Unix()),
}, },
@@ -84,11 +94,15 @@ func TestClientEnqueueAt(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...) gotRes, err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueueAt(processAt, task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
@@ -113,15 +127,11 @@ func TestClientEnqueue(t *testing.T) {
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
opts []Option opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
}{ }{
{ {
@@ -130,6 +140,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
MaxRetry(3), MaxRetry(3),
}, },
wantRes: &Result{
Queue: "default",
Retry: 3,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -137,8 +153,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 3, Retry: 3,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -149,6 +165,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
MaxRetry(-2), MaxRetry(-2),
}, },
wantRes: &Result{
Queue: "default",
Retry: 0,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -156,8 +178,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero Retry: 0, // Retry count should be set to zero
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -169,6 +191,12 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(2), MaxRetry(2),
MaxRetry(10), MaxRetry(10),
}, },
wantRes: &Result{
Queue: "default",
Retry: 10,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -176,8 +204,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: 10, // Last option takes precedence Retry: 10, // Last option takes precedence
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -188,6 +216,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Queue("custom"), Queue("custom"),
}, },
wantRes: &Result{
Queue: "custom",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"custom": { "custom": {
{ {
@@ -195,8 +229,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "custom", Queue: "custom",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -207,6 +241,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Queue("HIGH"), Queue("HIGH"),
}, },
wantRes: &Result{
Queue: "high",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"high": { "high": {
{ {
@@ -214,8 +254,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "high", Queue: "high",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -226,6 +266,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Timeout(20 * time.Second), Timeout(20 * time.Second),
}, },
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -233,8 +279,8 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: (20 * time.Second).String(), Timeout: 20,
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -245,6 +291,12 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{ opts: []Option{
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)), Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
}, },
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: noTimeout,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -252,8 +304,34 @@ func TestClientEnqueue(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(noTimeout.Seconds()),
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Format(time.RFC3339), Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
},
},
},
},
{
desc: "With both deadline and timeout options",
task: task,
opts: []Option{
Timeout(20 * time.Second),
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: 20,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
}, },
}, },
}, },
@@ -263,11 +341,15 @@ func TestClientEnqueue(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.Enqueue(tc.task, tc.opts...) gotRes, err := client.Enqueue(tc.task, tc.opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantEnqueued {
got := h.GetEnqueuedMessages(t, r, qname) got := h.GetEnqueuedMessages(t, r, qname)
@@ -287,24 +369,26 @@ func TestClientEnqueueIn(t *testing.T) {
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
delay time.Duration delay time.Duration
opts []Option opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry wantScheduled []h.ZSetEntry
}{ }{
{ {
desc: "schedule a task to be enqueued in one hour", desc: "schedule a task to be enqueued in one hour",
task: task, task: task,
delay: time.Hour, delay: time.Hour,
opts: []Option{}, opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{ wantScheduled: []h.ZSetEntry{
{ {
@@ -313,8 +397,8 @@ func TestClientEnqueueIn(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
Score: float64(time.Now().Add(time.Hour).Unix()), Score: float64(time.Now().Add(time.Hour).Unix()),
}, },
@@ -325,6 +409,12 @@ func TestClientEnqueueIn(t *testing.T) {
task: task, task: task,
delay: 0, delay: 0,
opts: []Option{}, opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{ wantEnqueued: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
@@ -332,8 +422,8 @@ func TestClientEnqueueIn(t *testing.T) {
Payload: task.Payload.data, Payload: task.Payload.data,
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: noTimeout, Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline, Deadline: noDeadline.Unix(),
}, },
}, },
}, },
@@ -344,11 +434,15 @@ func TestClientEnqueueIn(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueIn(tc.delay, tc.task, tc.opts...) gotRes, err := client.EnqueueIn(tc.delay, tc.task, tc.opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueueIn(delay, task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
@@ -364,6 +458,109 @@ func TestClientEnqueueIn(t *testing.T) {
} }
} }
func TestClientDefaultOptions(t *testing.T) {
r := setup(t)
tests := []struct {
desc string
defaultOpts []Option // options set at the client level.
opts []Option // options used at enqueue time.
task *Task
wantRes *Result
queue string // queue that the message should go into.
want *base.TaskMessage
}{
{
desc: "With queue routing option",
defaultOpts: []Option{Queue("feed")},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "feed",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: defaultMaxRetry,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With multiple options",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "feed",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With overriding options at enqueue time",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "critical",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "critical",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "critical",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
gotRes, err := c.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
enqueued := h.GetEnqueuedMessages(t, r, tc.queue)
if len(enqueued) != 1 {
t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
tc.desc, tc.queue, len(enqueued))
continue
}
got := enqueued[0]
if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in enqueued task message; (-want,+got)\n%s",
tc.desc, diff)
}
}
}
func TestUniqueKey(t *testing.T) { func TestUniqueKey(t *testing.T) {
tests := []struct { tests := []struct {
desc string desc string
@@ -451,7 +648,7 @@ func TestEnqueueUnique(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed. // Enqueue the task first. It should succeed.
err := c.Enqueue(tc.task, Unique(tc.ttl)) _, err := c.Enqueue(tc.task, Unique(tc.ttl))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -463,7 +660,7 @@ func TestEnqueueUnique(t *testing.T) {
} }
// Enqueue the task again. It should fail. // Enqueue the task again. It should fail.
err = c.Enqueue(tc.task, Unique(tc.ttl)) _, err = c.Enqueue(tc.task, Unique(tc.ttl))
if err == nil { if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task) t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue continue
@@ -498,7 +695,7 @@ func TestEnqueueInUnique(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed. // Enqueue the task first. It should succeed.
err := c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl)) _, err := c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -511,7 +708,7 @@ func TestEnqueueInUnique(t *testing.T) {
} }
// Enqueue the task again. It should fail. // Enqueue the task again. It should fail.
err = c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl)) _, err = c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err == nil { if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task) t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue continue
@@ -546,7 +743,7 @@ func TestEnqueueAtUnique(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed. // Enqueue the task first. It should succeed.
err := c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl)) _, err := c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -559,7 +756,7 @@ func TestEnqueueAtUnique(t *testing.T) {
} }
// Enqueue the task again. It should fail. // Enqueue the task again. It should fail.
err = c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl)) _, err = c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err == nil { if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task) t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue continue

context.go Normal file

@@ -0,0 +1,74 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"time"
"github.com/hibiken/asynq/internal/base"
)
// A taskMetadata holds task scoped data to put in context.
type taskMetadata struct {
id string
maxRetry int
retryCount int
}
// ctxKey type is unexported to prevent collisions with context keys defined in
// other packages.
type ctxKey int
// metadataCtxKey is the context key for the task metadata.
// Its value of zero is arbitrary.
const metadataCtxKey ctxKey = 0
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
metadata := taskMetadata{
id: msg.ID.String(),
maxRetry: msg.Retry,
retryCount: msg.Retried,
}
ctx := context.WithValue(context.Background(), metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline)
}
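createContext is internal, but handlers see its effect: the context passed to ProcessTask carries the computed deadline. A hedged sketch of a handler that checks it (the handler name and the work loop are illustrative):

    func handleLongTask(ctx context.Context, t *asynq.Task) error {
        for i := 0; i < 100; i++ {
            select {
            case <-ctx.Done():
                // Deadline exceeded or task canceled; return so the task can be retried.
                return ctx.Err()
            default:
                // do one unit of work (illustrative)
            }
        }
        return nil
    }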
// GetTaskID extracts a task ID from a context, if any.
//
// ID of a task is guaranteed to be unique.
// ID of a task doesn't change if the task is being retried.
func GetTaskID(ctx context.Context) (id string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.id, true
}
// GetRetryCount extracts retry count from a context, if any.
//
// Return value n indicates the number of times the associated task has been
// retried so far.
func GetRetryCount(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.retryCount, true
}
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.maxRetry, true
}
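A small sketch of reading this metadata inside a handler (the handler name, log line, and last-attempt check are illustrative):

    func handleWithMetadata(ctx context.Context, t *asynq.Task) error {
        id, _ := asynq.GetTaskID(ctx)
        retried, _ := asynq.GetRetryCount(ctx)
        maxRetry, _ := asynq.GetMaxRetry(ctx)
        log.Printf("processing task %s (retried %d of max %d)", id, retried, maxRetry)
        if retried == maxRetry {
            // Final attempt; a handler might degrade gracefully here.
        }
        return nil
    }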

context_test.go Normal file

@@ -0,0 +1,148 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func TestCreateContextWithFutureDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithPastDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(-2 * time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
defer cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
}
}
func TestGetTaskMetadataFromContext(t *testing.T) {
tests := []struct {
desc string
msg *base.TaskMessage
}{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 10, Retried: 5, Timeout: 1800}},
}
for _, tc := range tests {
ctx, cancel := createContext(tc.msg, time.Now().Add(30*time.Minute))
defer cancel()
id, ok := GetTaskID(ctx)
if !ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == false", tc.desc)
}
if ok && id != tc.msg.ID.String() {
t.Errorf("%s: GetTaskID(ctx) returned id == %q, want %q", tc.desc, id, tc.msg.ID.String())
}
retried, ok := GetRetryCount(ctx)
if !ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == false", tc.desc)
}
if ok && retried != tc.msg.Retried {
t.Errorf("%s: GetRetryCount(ctx) returned n == %d want %d", tc.desc, retried, tc.msg.Retried)
}
maxRetry, ok := GetMaxRetry(ctx)
if !ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == false", tc.desc)
}
if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
}
}
}
func TestGetTaskMetadataFromContextError(t *testing.T) {
tests := []struct {
desc string
ctx context.Context
}{
{"with background context", context.Background()},
}
for _, tc := range tests {
if _, ok := GetTaskID(tc.ctx); ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == true", tc.desc)
}
if _, ok := GetRetryCount(tc.ctx); ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == true", tc.desc)
}
if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
}
}
}

doc.go

@@ -14,7 +14,7 @@ specify the options using one of RedisConnOpt types.
DB: 3, DB: 3,
} }
The Client is used to register a task to be processed at the specified time. The Client is used to enqueue a task to be processed at the specified time.
Task is created with two parameters: its type and payload. Task is created with two parameters: its type and payload.
@@ -25,20 +25,20 @@ Task is created with two parameters: its type and payload.
map[string]interface{}{"user_id": 42}) map[string]interface{}{"user_id": 42})
// Enqueue the task to be processed immediately. // Enqueue the task to be processed immediately.
err := client.Enqueue(t) res, err := client.Enqueue(t)
// Schedule the task to be processed in one minute. // Schedule the task to be processed after one minute.
err = client.EnqueueIn(time.Minute, t) res, err = client.EnqueueIn(time.Minute, t)
The Background is used to run the background task processing with a given The Server is used to run the background task processing with a given
handler. handler.
bg := asynq.NewBackground(redis, &asynq.Config{ srv := asynq.NewServer(redis, asynq.Config{
Concurrency: 10, Concurrency: 10,
}) })
bg.Run(handler) srv.Run(handler)
Handler is an interface with one method ProcessTask which Handler is an interface type with a method which
takes a task and returns an error. Handler should return nil if takes a task and returns an error. Handler should return nil if
the processing is successful, otherwise return a non-nil error. the processing is successful, otherwise return a non-nil error.
If handler panics or returns a non-nil error, the task will be retried in the future. If handler panics or returns a non-nil error, the task will be retried in the future.
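A hedged sketch of a handler function and its registration with a ServeMux (the task type, payload key, and function names are illustrative; srv is the Server created above):

    func handleWelcomeEmail(ctx context.Context, t *asynq.Task) error {
        id, err := t.Payload.GetInt("user_id")
        if err != nil {
            return err
        }
        fmt.Printf("sending welcome email to user %d\n", id)
        return nil
    }

    // ServeMux satisfies the Handler interface and dispatches by task type.
    mux := asynq.NewServeMux()
    mux.HandleFunc("email:welcome_email", handleWelcomeEmail)
    srv.Run(mux)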

[Binary image diffs omitted: three existing docs image assets changed (approx. 1.5 MiB, 582 KiB, and 1.5 MiB); new file docs/assets/overview.png added (63 KiB, not shown).]

example_test.go Normal file

@@ -0,0 +1,95 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq_test
import (
"fmt"
"log"
"os"
"os/signal"
"github.com/hibiken/asynq"
"golang.org/x/sys/unix"
)
func ExampleServer_Run() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
// Run blocks and waits for os signal to terminate the program.
if err := srv.Run(h); err != nil {
log.Fatal(err)
}
}
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal
srv.Stop()
}
func ExampleServer_Quiet() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
// Handle SIGTERM, SIGINT to exit the program.
// Handle SIGTSTP to stop processing new tasks.
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks
continue
}
break
}
srv.Stop()
}
func ExampleParseRedisURI() {
rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
if err != nil {
log.Fatal(err)
}
r, ok := rconn.(asynq.RedisClientOpt)
if !ok {
log.Fatal("unexpected type")
}
fmt.Println(r.Addr)
fmt.Println(r.DB)
// Output:
// localhost:6379
// 10
}

go.mod

@@ -5,10 +5,10 @@ go 1.13
require ( require (
github.com/go-redis/redis/v7 v7.2.0 github.com/go-redis/redis/v7 v7.2.0
github.com/google/go-cmp v0.4.0 github.com/google/go-cmp v0.4.0
github.com/rs/xid v1.2.1 github.com/google/uuid v1.1.1
github.com/spf13/cast v1.3.1 github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0 go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect gopkg.in/yaml.v2 v2.2.7 // indirect
) )

go.sum

@@ -2,16 +2,15 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.0.0-beta.4 h1:p6z7Pde69EGRWvlC++y8aFcaWegyrKHzOBGo0zUACTQ=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
@@ -20,16 +19,12 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w= github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo= github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
@@ -39,10 +34,8 @@ go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco= golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -60,8 +53,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=


@@ -5,69 +5,161 @@
package asynq package asynq
import ( import (
"os"
"sync" "sync"
"time" "time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/log"
) )
// heartbeater is responsible for writing process info to redis periodically to // heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up. // indicate that the background worker process is up.
type heartbeater struct { type heartbeater struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
ps *base.ProcessState
// channel to communicate back to the long running "heartbeater" goroutine. // channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{} done chan struct{}
// interval between heartbeats. // interval between heartbeats.
interval time.Duration interval time.Duration
// following fields are initialized at construction time and are immutable.
host string
pid int
serverID string
concurrency int
queues map[string]int
strictPriority bool
// following fields are mutable and should be accessed only by the
// heartbeater goroutine. In other words, confine these variables
// to this goroutine only.
started time.Time
workers map[string]workerStat
// status is shared with other goroutines but is concurrency safe.
status *base.ServerStatus
// channels to receive updates on active workers.
starting <-chan *base.TaskMessage
finished <-chan *base.TaskMessage
} }
func newHeartbeater(l Logger, rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater { type heartbeaterParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
concurrency int
queues map[string]int
strictPriority bool
status *base.ServerStatus
starting <-chan *base.TaskMessage
finished <-chan *base.TaskMessage
}
func newHeartbeater(params heartbeaterParams) *heartbeater {
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
return &heartbeater{ return &heartbeater{
logger: l, logger: params.logger,
rdb: rdb, broker: params.broker,
ps: ps,
done: make(chan struct{}), done: make(chan struct{}),
interval: interval, interval: params.interval,
host: host,
pid: os.Getpid(),
serverID: uuid.New().String(),
concurrency: params.concurrency,
queues: params.queues,
strictPriority: params.strictPriority,
status: params.status,
workers: make(map[string]workerStat),
starting: params.starting,
finished: params.finished,
} }
} }
func (h *heartbeater) terminate() { func (h *heartbeater) terminate() {
h.logger.Info("Heartbeater shutting down...") h.logger.Debug("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop. // Signal the heartbeater goroutine to stop.
h.done <- struct{}{} h.done <- struct{}{}
} }
// A workerStat records the message a worker is working on
// and the time the worker has started processing the message.
type workerStat struct {
started time.Time
msg *base.TaskMessage
}
func (h *heartbeater) start(wg *sync.WaitGroup) { func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning)
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
h.started = time.Now()
h.beat() h.beat()
timer := time.NewTimer(h.interval)
for { for {
select { select {
case <-h.done: case <-h.done:
h.rdb.ClearProcessState(h.ps) h.broker.ClearServerState(h.host, h.pid, h.serverID)
h.logger.Info("Heartbeater done") h.logger.Debug("Heartbeater done")
timer.Stop()
return return
case <-time.After(h.interval):
case <-timer.C:
h.beat() h.beat()
timer.Reset(h.interval)
case msg := <-h.starting:
h.workers[msg.ID.String()] = workerStat{time.Now(), msg}
case msg := <-h.finished:
delete(h.workers, msg.ID.String())
} }
} }
}() }()
} }
func (h *heartbeater) beat() { func (h *heartbeater) beat() {
info := base.ServerInfo{
Host: h.host,
PID: h.pid,
ServerID: h.serverID,
Concurrency: h.concurrency,
Queues: h.queues,
StrictPriority: h.strictPriority,
Status: h.status.String(),
Started: h.started,
ActiveWorkerCount: len(h.workers),
}
var ws []*base.WorkerInfo
for id, stat := range h.workers {
ws = append(ws, &base.WorkerInfo{
Host: h.host,
PID: h.pid,
ID: id,
Type: stat.msg.Type,
Queue: stat.msg.Queue,
Payload: stat.msg.Payload,
Started: stat.started,
})
}
// Note: Set TTL to be long enough so that it won't expire before we write again // Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed. // and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2) if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil {
if err != nil { h.logger.Errorf("could not write server state data: %v", err)
h.logger.Error("could not write heartbeat data: %v", err)
} }
} }


@@ -14,6 +14,7 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
) )
func TestHeartbeater(t *testing.T) { func TestHeartbeater(t *testing.T) {
@@ -31,17 +32,33 @@ func TestHeartbeater(t *testing.T) {
} }
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond) timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{}) ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false) status := base.NewServerStatus(base.StatusIdle)
hb := newHeartbeater(testLogger, rdbClient, state, tc.interval) hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: rdbClient,
interval: tc.interval,
concurrency: tc.concurrency,
queues: tc.queues,
strictPriority: false,
status: status,
starting: make(chan *base.TaskMessage),
finished: make(chan *base.TaskMessage),
})
// Change host and pid fields for testing purpose.
hb.host = tc.host
hb.pid = tc.pid
status.Set(base.StatusRunning)
var wg sync.WaitGroup var wg sync.WaitGroup
hb.start(&wg) hb.start(&wg)
want := &base.ProcessInfo{ want := &base.ServerInfo{
Host: tc.host, Host: tc.host,
PID: tc.pid, PID: tc.pid,
Queues: tc.queues, Queues: tc.queues,
@@ -53,47 +70,47 @@ func TestHeartbeater(t *testing.T) {
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
ps, err := rdbClient.ListProcesses() ss, err := rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read server info from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
// status change // status change
state.SetStatus(base.StatusStopped) status.Set(base.StatusStopped)
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
want.Status = "stopped" want.Status = "stopped"
ps, err = rdbClient.ListProcesses() ss, err = rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read process status from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
@@ -101,3 +118,35 @@ func TestHeartbeater(t *testing.T) {
hb.terminate() hb.terminate()
} }
} }
func TestHeartbeaterWithRedisDown(t *testing.T) {
// Make sure that heartbeater goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: testBroker,
interval: time.Second,
concurrency: 10,
queues: map[string]int{"default": 1},
strictPriority: false,
status: base.NewServerStatus(base.StatusRunning),
starting: make(chan *base.TaskMessage),
finished: make(chan *base.TaskMessage),
})
testBroker.Sleep()
var wg sync.WaitGroup
hb.start(&wg)
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)
hb.terminate()
}


@@ -13,8 +13,8 @@ import (
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
) )
// ZSetEntry is an entry in redis sorted set. // ZSetEntry is an entry in redis sorted set.
@@ -41,9 +41,9 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
return out return out
}) })
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info. // SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo { var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host { if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host return out[i].Host < out[j].Host
@@ -57,7 +57,7 @@ var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.Proc
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo { var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String() return out[i].ID < out[j].ID
}) })
return out return out
}) })
@@ -75,11 +75,13 @@ var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload. // NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage { func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
return &base.TaskMessage{ return &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: taskType, Type: taskType,
Queue: base.DefaultQueueName, Queue: base.DefaultQueueName,
Retry: 25, Retry: 25,
Payload: payload, Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
} }
} }
@@ -87,7 +89,7 @@ func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskM
// task type, payload and queue name. // task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage { func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{ return &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: taskType, Type: taskType,
Queue: qname, Queue: qname,
Retry: 25, Retry: 25,
@@ -95,6 +97,20 @@ func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qn
} }
} }
// TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments retry count and sets the error message.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.Retried = t.Retried + 1
t.ErrorMsg = errMsg
return &t
}
// TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.ErrorMsg = errMsg
return &t
}
// MustMarshal marshals given task message and returns a json string. // MustMarshal marshals given task message and returns a json string.
// Calling test will fail if marshaling errors out. // Calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string { func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
@@ -185,6 +201,12 @@ func SeedDeadQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
seedRedisZSet(tb, r, base.DeadQueue, entries) seedRedisZSet(tb, r, base.DeadQueue, entries)
} }
// SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
tb.Helper()
seedRedisZSet(tb, r, base.KeyDeadlines, entries)
}
func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) { func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs) data := MustMarshalSlice(tb, msgs)
for _, s := range data { for _, s := range data {
@@ -257,6 +279,12 @@ func GetDeadEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
return getZSetEntries(tb, r, base.DeadQueue) return getZSetEntries(tb, r, base.DeadQueue)
} }
// GetDeadlinesEntries returns all task messages and their scores in the deadlines set.
func GetDeadlinesEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
tb.Helper()
return getZSetEntries(tb, r, base.KeyDeadlines)
}
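As a hedged illustration of the two new deadline helpers, a test in the rdb package might seed and then read back the deadlines set like this (the variable names and the one-hour deadline are assumptions, not from the diff):

// Hypothetical usage inside an rdb test, mirroring the other Seed*/Get* helpers.
msg := h.NewTaskMessage("export_csv", nil)
deadline := time.Now().Add(time.Hour)
h.SeedDeadlines(t, r.client, []h.ZSetEntry{
	{Msg: msg, Score: float64(deadline.Unix())},
})
entries := h.GetDeadlinesEntries(t, r.client)
// entries should contain one item whose Score equals float64(deadline.Unix()).
_ = entries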
func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage { func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val() data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data) return MustUnmarshalSlice(tb, data)


@@ -7,23 +7,28 @@ package base
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/rs/xid" "github.com/go-redis/redis/v7"
"github.com/google/uuid"
) )
// Version of asynq library and CLI.
const Version = "0.10.0"
// DefaultQueueName is the queue name used if none are specified by user. // DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default" const DefaultQueueName = "default"
// Redis keys // Redis keys
const ( const (
AllProcesses = "asynq:ps" // ZSET AllServers = "asynq:servers" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid> serversPrefix = "asynq:servers:" // STRING - asynq:servers:<host>:<pid>:<serverid>
AllWorkers = "asynq:workers" // ZSET AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host>:<pid> workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host>:<pid>:<serverid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd> processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd> failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname> QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
@@ -33,6 +38,8 @@ const (
RetryQueue = "asynq:retry" // ZSET RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST InProgressQueue = "asynq:in_progress" // LIST
KeyDeadlines = "asynq:deadlines" // ZSET
PausedQueues = "asynq:paused" // SET
CancelChannel = "asynq:cancel" // PubSub channel CancelChannel = "asynq:cancel" // PubSub channel
) )
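To make the new asynq:deadlines key concrete: on dequeue, each in-progress message is added to this ZSET scored by its deadline in Unix seconds. The real logic lives inside a Lua script; the following go-redis v7 sketch is only meant to show the data shape, and the function name and arguments are invented.

// Sketch: record msg's deadline in the deadlines ZSET (illustrative only).
func trackDeadline(c *redis.Client, msg *TaskMessage, deadline time.Time) error {
	encoded, err := EncodeMessage(msg) // EncodeMessage is added later in this diff
	if err != nil {
		return err
	}
	z := &redis.Z{Score: float64(deadline.Unix()), Member: encoded}
	return c.ZAdd(KeyDeadlines, z).Err()
}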
@@ -51,14 +58,14 @@ func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02") return failurePrefix + t.UTC().Format("2006-01-02")
} }
// ProcessInfoKey returns a redis key for process info. // ServerInfoKey returns a redis key for server info.
func ProcessInfoKey(hostname string, pid int) string { func ServerInfoKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid) return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid)
} }
// WorkersKey returns a redis key for the workers given hostname and pid. // WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int) string { func WorkersKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid) return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid)
} }
// TaskMessage is the internal representation of a task with additional metadata fields. // TaskMessage is the internal representation of a task with additional metadata fields.
@@ -71,7 +78,7 @@ type TaskMessage struct {
Payload map[string]interface{} Payload map[string]interface{}
// ID is a unique identifier for each task. // ID is a unique identifier for each task.
ID xid.ID ID uuid.UUID
// Queue is a name this message should be enqueued to. // Queue is a name this message should be enqueued to.
Queue string Queue string
@@ -85,18 +92,20 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure. // ErrorMsg holds the error message from the last failure.
ErrorMsg string ErrorMsg string
// Timeout specifies how long a task may run. // Timeout specifies timeout in seconds.
// The string value should be compatible with time.Duration.ParseDuration. // If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the dead queue.
// //
// Zero means no limit. // Use zero to indicate no timeout.
Timeout string Timeout int64
// Deadline specifies the deadline for the task. // Deadline specifies the deadline for the task in Unix time,
// Task won't be processed if it exceeded its deadline. // the number of seconds elapsed since January 1, 1970 UTC.
// The string shoulbe be in RFC3339 format. // If task processing doesn't complete before the deadline, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the dead queue.
// //
// time.Time's zero value means no deadline. // Use zero to indicate no deadline.
Deadline string Deadline int64
// UniqueKey holds the redis key used for uniqueness lock for this task. // UniqueKey holds the redis key used for uniqueness lock for this task.
// //
@@ -104,149 +113,90 @@ type TaskMessage struct {
UniqueKey string UniqueKey string
} }
// ProcessState holds process level information. // EncodeMessage marshals the given task message in JSON and returns an encoded string.
// func EncodeMessage(msg *TaskMessage) (string, error) {
// ProcessStates are safe for concurrent use by multiple goroutines. b, err := json.Marshal(msg)
type ProcessState struct { if err != nil {
mu sync.Mutex // guards all data fields return "", err
concurrency int }
queues map[string]int return string(b), nil
strictPriority bool
pid int
host string
status PStatus
started time.Time
workers map[string]*workerStats
} }
// PStatus represents status of a process. // DecodeMessage unmarshals the given encoded string and returns a decoded task message.
type PStatus int func DecodeMessage(s string) (*TaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, err
}
return &msg, nil
}
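A short round-trip sketch of the new EncodeMessage/DecodeMessage pair. Because DecodeMessage decodes with UseNumber, numeric payload values come back as json.Number rather than float64; the task below is made up for the example.

// Hypothetical round trip inside package base.
func exampleRoundTrip() (*TaskMessage, error) {
	msg := &TaskMessage{
		ID:      uuid.New(),
		Type:    "image:resize",
		Payload: map[string]interface{}{"width": 1280},
		Queue:   DefaultQueueName,
		Retry:   25,
		Timeout: 1800, // seconds
	}
	encoded, err := EncodeMessage(msg)
	if err != nil {
		return nil, err
	}
	// decoded.Payload["width"] will be json.Number("1280"), not float64(1280).
	return DecodeMessage(encoded)
}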
// ServerStatus represents status of a server.
// ServerStatus methods are concurrency safe.
type ServerStatus struct {
mu sync.Mutex
val ServerStatusValue
}
// NewServerStatus returns a new status instance given an initial value.
func NewServerStatus(v ServerStatusValue) *ServerStatus {
return &ServerStatus{val: v}
}
type ServerStatusValue int
const ( const (
// StatusIdle indicates process is in idle state. // StatusIdle indicates the server is in idle state.
StatusIdle PStatus = iota StatusIdle ServerStatusValue = iota
// StatusRunning indicates process is up and processing tasks. // StatusRunning indicates the server is up and processing tasks.
StatusRunning StatusRunning
// StatusStopped indicates process is up but not processing new tasks. // StatusQuiet indicates the server is up but not processing new tasks.
StatusQuiet
// StatusStopped indicates the server has been stopped.
StatusStopped StatusStopped
) )
var statuses = []string{ var statuses = []string{
"idle", "idle",
"running", "running",
"quiet",
"stopped", "stopped",
} }
func (s PStatus) String() string { func (s *ServerStatus) String() string {
if StatusIdle <= s && s <= StatusStopped { s.mu.Lock()
return statuses[s] defer s.mu.Unlock()
if StatusIdle <= s.val && s.val <= StatusStopped {
return statuses[s.val]
} }
return "unknown status" return "unknown status"
} }
type workerStats struct { // Get returns the status value.
msg *TaskMessage func (s *ServerStatus) Get() ServerStatusValue {
started time.Time s.mu.Lock()
v := s.val
s.mu.Unlock()
return v
} }
// NewProcessState returns a new instance of ProcessState. // Set sets the status value.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState { func (s *ServerStatus) Set(v ServerStatusValue) {
return &ProcessState{ s.mu.Lock()
host: host, s.val = v
pid: pid, s.mu.Unlock()
concurrency: concurrency,
queues: cloneQueueConfig(queues),
strictPriority: strict,
status: StatusIdle,
workers: make(map[string]*workerStats),
}
} }
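A minimal sketch of how ServerStatus is meant to be shared across goroutines; the scenario is illustrative, since the server and heartbeater wire this up internally.

// Hypothetical concurrent use of ServerStatus.
func exampleStatus() {
	status := NewServerStatus(StatusIdle)
	go func() { status.Set(StatusRunning) }() // e.g. server startup
	if status.Get() == StatusQuiet {
		// a processor would skip dequeueing new tasks here
	}
	_ = status.String() // Get, Set, and String all lock internally, so this is race-free
}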
// SetStatus updates the state of process. // ServerInfo holds information about a running server.
func (ps *ProcessState) SetStatus(status PStatus) { type ServerInfo struct {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.status = status
}
// SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.started = t
}
// AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started}
}
// DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.workers, msg.ID.String())
}
// Get returns current state of process as a ProcessInfo.
func (ps *ProcessState) Get() *ProcessInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
return &ProcessInfo{
Host: ps.host,
PID: ps.pid,
Concurrency: ps.concurrency,
Queues: cloneQueueConfig(ps.queues),
StrictPriority: ps.strictPriority,
Status: ps.status.String(),
Started: ps.started,
ActiveWorkerCount: len(ps.workers),
}
}
// GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
var res []*WorkerInfo
for _, w := range ps.workers {
res = append(res, &WorkerInfo{
Host: ps.host,
PID: ps.pid,
ID: w.msg.ID,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: clonePayload(w.msg.Payload),
Started: w.started,
})
}
return res
}
func cloneQueueConfig(qcfg map[string]int) map[string]int {
res := make(map[string]int)
for qname, n := range qcfg {
res[qname] = n
}
return res
}
func clonePayload(payload map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{})
for k, v := range payload {
res[k] = v
}
return res
}
// ProcessInfo holds information about a running background worker process.
type ProcessInfo struct {
Host string Host string
PID int PID int
ServerID string
Concurrency int Concurrency int
Queues map[string]int Queues map[string]int
StrictPriority bool StrictPriority bool
@@ -259,7 +209,7 @@ type ProcessInfo struct {
type WorkerInfo struct { type WorkerInfo struct {
Host string Host string
PID int PID int
ID xid.ID ID string
Type string Type string
Queue string Queue string
Payload map[string]interface{} Payload map[string]interface{}
@@ -303,13 +253,24 @@ func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
return fn, ok return fn, ok
} }
// GetAll returns all cancel funcs. // Broker is a message broker that supports operations to manage task queues.
func (c *Cancelations) GetAll() []context.CancelFunc { //
c.mu.Lock() // See rdb.RDB as a reference implementation.
defer c.mu.Unlock() type Broker interface {
var res []context.CancelFunc Enqueue(msg *TaskMessage) error
for _, fn := range c.cancelFuncs { EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
res = append(res, fn) Dequeue(qnames ...string) (*TaskMessage, time.Time, error)
} Done(msg *TaskMessage) error
return res Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Kill(msg *TaskMessage, errMsg string) error
CheckAndEnqueue() error
ListDeadlineExceeded(deadline time.Time) ([]*TaskMessage, error)
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
PublishCancelation(id string) error
Close() error
} }
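One practical consequence of introducing the Broker interface (a hedged reading, not stated in the diff) is that components can be written against it rather than against *rdb.RDB directly; for example:

// Hypothetical helper written against Broker instead of a concrete RDB.
func enqueue(b Broker, msg *TaskMessage, uniqueTTL time.Duration) error {
	if uniqueTTL > 0 {
		return b.EnqueueUnique(msg, uniqueTTL)
	}
	return b.Enqueue(msg)
}

Any type implementing the methods above (rdb.RDB being the reference) satisfies the interface, which also makes fakes straightforward in tests.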


@@ -6,13 +6,13 @@ package base
import ( import (
"context" "context"
"math/rand" "encoding/json"
"sync" "sync"
"testing" "testing"
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/rs/xid" "github.com/google/uuid"
) )
func TestQueueKey(t *testing.T) { func TestQueueKey(t *testing.T) {
@@ -67,20 +67,22 @@ func TestFailureKey(t *testing.T) {
} }
} }
func TestProcessInfoKey(t *testing.T) { func TestServerInfoKey(t *testing.T) {
tests := []struct { tests := []struct {
hostname string hostname string
pid int pid int
sid string
want string want string
}{ }{
{"localhost", 9876, "asynq:ps:localhost:9876"}, {"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"},
{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"}, {"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"},
} }
for _, tc := range tests { for _, tc := range tests {
got := ProcessInfoKey(tc.hostname, tc.pid) got := ServerInfoKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want { if got != tc.want {
t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want) t.Errorf("ServerInfoKey(%q, %d, %q) = %q, want %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
} }
} }
} }
@@ -89,80 +91,92 @@ func TestWorkersKey(t *testing.T) {
tests := []struct { tests := []struct {
hostname string hostname string
pid int pid int
sid string
want string want string
}{ }{
{"localhost", 9876, "asynq:workers:localhost:9876"}, {"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"},
{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"}, {"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"},
} }
for _, tc := range tests { for _, tc := range tests {
got := WorkersKey(tc.hostname, tc.pid) got := WorkersKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want { if got != tc.want {
t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want) t.Errorf("WorkersKey(%q, %d, %q) = %q, want = %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
} }
} }
} }
// Test for process state being accessed by multiple goroutines. func TestMessageEncoding(t *testing.T) {
// Run with -race flag to check for data race. id := uuid.New()
func TestProcessStateConcurrentAccess(t *testing.T) { tests := []struct {
ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false) in *TaskMessage
var wg sync.WaitGroup out *TaskMessage
started := time.Now() }{
msgs := []*TaskMessage{ {
{ID: xid.New(), Type: "type1", Payload: map[string]interface{}{"user_id": 42}}, in: &TaskMessage{
{ID: xid.New(), Type: "type2"}, Type: "task1",
{ID: xid.New(), Type: "type3"}, Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
out: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
},
} }
// Simulate hearbeater calling SetStatus and SetStarted. for _, tc := range tests {
encoded, err := EncodeMessage(tc.in)
if err != nil {
t.Errorf("EncodeMessage(msg) returned error: %v", err)
continue
}
decoded, err := DecodeMessage(encoded)
if err != nil {
t.Errorf("DecodeMessage(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(tc.out, decoded); diff != "" {
t.Errorf("Decoded message == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.out, diff)
}
}
}
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
status := NewServerStatus(StatusIdle)
var wg sync.WaitGroup
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
ps.SetStarted(started) status.Get()
ps.SetStatus(StatusRunning) status.String()
}() }()
// Simulate processor starting worker goroutines.
for _, msg := range msgs {
wg.Add(1)
ps.AddWorkerStats(msg, time.Now())
go func(msg *TaskMessage) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
ps.DeleteWorkerStats(msg)
}(msg)
}
// Simulate hearbeater calling Get and GetWorkers
wg.Add(1) wg.Add(1)
go func() { go func() {
wg.Done() defer wg.Done()
for i := 0; i < 5; i++ { status.Set(StatusStopped)
ps.Get() status.String()
ps.GetWorkers()
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
}
}() }()
wg.Wait() wg.Wait()
want := &ProcessInfo{
Host: "127.0.0.1",
PID: 1234,
Concurrency: 10,
Queues: map[string]int{"default": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
got := ps.Get()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
got, want, diff)
}
} }
// Test for cancelations being accessed by multiple goroutines. // Test for cancelations being accessed by multiple goroutines.
@@ -208,9 +222,4 @@ func TestCancelationsConcurrentAccess(t *testing.T) {
if ok { if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2) t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
} }
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
} }


@@ -6,52 +6,210 @@
package log package log
import ( import (
"fmt"
"io" "io"
stdlog "log" stdlog "log"
"os" "os"
"sync"
) )
// NewLogger creates and returns a new instance of Logger. // Base supports logging at various log levels.
func NewLogger(out io.Writer) *Logger { type Base interface {
return &Logger{ // Debug logs a message at Debug level.
stdlog.New(out, "", stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC), Debug(args ...interface{})
}
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(args ...interface{})
} }
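Because Base is an interface, any logger exposing these five methods can be plugged into asynq's Logger. A hedged sketch of an adapter over *testing.T (purely illustrative, assuming an import of the standard testing package; not part of the package):

// Hypothetical Base implementation backed by a *testing.T.
type testLogger struct{ t *testing.T }

func (l testLogger) Debug(args ...interface{}) { l.t.Log(append([]interface{}{"DEBUG:"}, args...)...) }
func (l testLogger) Info(args ...interface{})  { l.t.Log(append([]interface{}{"INFO:"}, args...)...) }
func (l testLogger) Warn(args ...interface{})  { l.t.Log(append([]interface{}{"WARN:"}, args...)...) }
func (l testLogger) Error(args ...interface{}) { l.t.Log(append([]interface{}{"ERROR:"}, args...)...) }
func (l testLogger) Fatal(args ...interface{}) { l.t.Fatal(args...) }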
// Logger is a wrapper object around log.Logger from the standard library. // baseLogger is a wrapper object around log.Logger from the standard library.
// It supports logging at various log levels. // It supports logging at various log levels.
type Logger struct { type baseLogger struct {
*stdlog.Logger *stdlog.Logger
} }
// Debug logs a message at Debug level. // Debug logs a message at Debug level.
func (l *Logger) Debug(format string, args ...interface{}) { func (l *baseLogger) Debug(args ...interface{}) {
format = "DEBUG: " + format l.prefixPrint("DEBUG: ", args...)
l.Printf(format, args...)
} }
// Info logs a message at Info level. // Info logs a message at Info level.
func (l *Logger) Info(format string, args ...interface{}) { func (l *baseLogger) Info(args ...interface{}) {
format = "INFO: " + format l.prefixPrint("INFO: ", args...)
l.Printf(format, args...)
} }
// Warn logs a message at Warning level. // Warn logs a message at Warning level.
func (l *Logger) Warn(format string, args ...interface{}) { func (l *baseLogger) Warn(args ...interface{}) {
format = "WARN: " + format l.prefixPrint("WARN: ", args...)
l.Printf(format, args...)
} }
// Error logs a message at Error level. // Error logs a message at Error level.
func (l *Logger) Error(format string, args ...interface{}) { func (l *baseLogger) Error(args ...interface{}) {
format = "ERROR: " + format l.prefixPrint("ERROR: ", args...)
l.Printf(format, args...)
} }
// Fatal logs a message at Fatal level // Fatal logs a message at Fatal level
// and process will exit with status set to 1. // and process will exit with status set to 1.
func (l *Logger) Fatal(format string, args ...interface{}) { func (l *baseLogger) Fatal(args ...interface{}) {
format = "FATAL: " + format l.prefixPrint("FATAL: ", args...)
l.Printf(format, args...)
os.Exit(1) os.Exit(1)
} }
func (l *baseLogger) prefixPrint(prefix string, args ...interface{}) {
args = append([]interface{}{prefix}, args...)
l.Print(args...)
}
// newBase creates and returns a new instance of baseLogger.
func newBase(out io.Writer) *baseLogger {
prefix := fmt.Sprintf("asynq: pid=%d ", os.Getpid())
return &baseLogger{
stdlog.New(out, prefix, stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC),
}
}
// NewLogger creates and returns a new instance of Logger.
// Log level is set to DebugLevel by default.
func NewLogger(base Base) *Logger {
if base == nil {
base = newBase(os.Stderr)
}
return &Logger{base: base, level: DebugLevel}
}
// Logger logs message to io.Writer at various log levels.
type Logger struct {
base Base
mu sync.Mutex
// Minimum log level for this logger.
// Message with level lower than this level won't be outputted.
level Level
}
// Level represents a log level.
type Level int32
const (
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel Level = iota
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the fmt.Stringer interface.
//
// Used for testing and debugging purposes.
func (l Level) String() string {
switch l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warning"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
default:
return "unknown"
}
}
// canLogAt reports whether logger can log at level v.
func (l *Logger) canLogAt(v Level) bool {
l.mu.Lock()
defer l.mu.Unlock()
return v >= l.level
}
func (l *Logger) Debug(args ...interface{}) {
if !l.canLogAt(DebugLevel) {
return
}
l.base.Debug(args...)
}
func (l *Logger) Info(args ...interface{}) {
if !l.canLogAt(InfoLevel) {
return
}
l.base.Info(args...)
}
func (l *Logger) Warn(args ...interface{}) {
if !l.canLogAt(WarnLevel) {
return
}
l.base.Warn(args...)
}
func (l *Logger) Error(args ...interface{}) {
if !l.canLogAt(ErrorLevel) {
return
}
l.base.Error(args...)
}
func (l *Logger) Fatal(args ...interface{}) {
if !l.canLogAt(FatalLevel) {
return
}
l.base.Fatal(args...)
}
func (l *Logger) Debugf(format string, args ...interface{}) {
l.Debug(fmt.Sprintf(format, args...))
}
func (l *Logger) Infof(format string, args ...interface{}) {
l.Info(fmt.Sprintf(format, args...))
}
func (l *Logger) Warnf(format string, args ...interface{}) {
l.Warn(fmt.Sprintf(format, args...))
}
func (l *Logger) Errorf(format string, args ...interface{}) {
l.Error(fmt.Sprintf(format, args...))
}
func (l *Logger) Fatalf(format string, args ...interface{}) {
l.Fatal(fmt.Sprintf(format, args...))
}
// SetLevel sets the logger level.
// It panics if v is less than DebugLevel or greater than FatalLevel.
func (l *Logger) SetLevel(v Level) {
l.mu.Lock()
defer l.mu.Unlock()
if v < DebugLevel || v > FatalLevel {
panic("log: invalid log level")
}
l.level = v
}
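Putting the pieces together, a hedged usage sketch of the leveled logger; the nil argument falls back to the stderr base shown above, and the messages are invented.

// Hypothetical usage within the log package.
func exampleLeveledLogging() {
	logger := NewLogger(nil) // nil base defaults to stderr with the "asynq: pid=<pid>" prefix
	logger.SetLevel(InfoLevel)
	logger.Debug("suppressed: Debug is below the configured Info level")
	logger.Infof("processing %d tasks", 3) // this one is written
}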


@@ -13,6 +13,7 @@ import (
// regexp for timestamps // regexp for timestamps
const ( const (
rgxPID = `[0-9]+`
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]` rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]` rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]` rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
@@ -27,20 +28,22 @@ type tester struct {
func TestLoggerDebug(t *testing.T) { func TestLoggerDebug(t *testing.T) {
tests := []tester{ tests := []tester{
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Debug(tc.message) logger.Debug(tc.message)
@@ -50,7 +53,7 @@ func TestLoggerDebug(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Debug(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -59,20 +62,22 @@ func TestLoggerDebug(t *testing.T) {
func TestLoggerInfo(t *testing.T) { func TestLoggerInfo(t *testing.T) {
tests := []tester{ tests := []tester{
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Info(tc.message) logger.Info(tc.message)
@@ -82,7 +87,7 @@ func TestLoggerInfo(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -91,20 +96,22 @@ func TestLoggerInfo(t *testing.T) {
func TestLoggerWarn(t *testing.T) { func TestLoggerWarn(t *testing.T) {
tests := []tester{ tests := []tester{
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Warn(tc.message) logger.Warn(tc.message)
@@ -114,7 +121,7 @@ func TestLoggerWarn(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Warn(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
@@ -123,20 +130,22 @@ func TestLoggerWarn(t *testing.T) {
func TestLoggerError(t *testing.T) { func TestLoggerError(t *testing.T) {
tests := []tester{ tests := []tester{
{ {
desc: "without trailing newline, logger adds newline", desc: "without trailing newline, logger adds newline",
message: "hello, world!", message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
{ {
desc: "with trailing newline, logger preserves newline", desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n", message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds), wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
}, },
} }
for _, tc := range tests { for _, tc := range tests {
var buf bytes.Buffer var buf bytes.Buffer
logger := NewLogger(&buf) logger := NewLogger(newBase(&buf))
logger.Error(tc.message) logger.Error(tc.message)
@@ -146,8 +155,234 @@ func TestLoggerError(t *testing.T) {
t.Fatal("pattern did not compile:", err) t.Fatal("pattern did not compile:", err)
} }
if !matched { if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q", t.Errorf("logger.Error(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern) tc.message, got, tc.wantPattern)
} }
} }
} }
type formatTester struct {
desc string
format string
args []interface{}
wantPattern string // regexp that log output must match
}
func TestLoggerDebugf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with DEBUG prefix",
format: "hello, %s!",
args: []interface{}{"Gopher"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, Gopher!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Debugf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Debugf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerInfof(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with INFO prefix",
format: "%d,%d,%d",
args: []interface{}{1, 2, 3},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: 1,2,3\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Infof(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Infof(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWarnf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with WARN prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Warnf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Warnf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerErrorf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with ERROR prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Errorf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Errorf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWithLowerLevels(t *testing.T) {
// Logger should not log messages at a level
// lower than the specified level.
tests := []struct {
level Level
op string
}{
// with level one above
{InfoLevel, "Debug"},
{InfoLevel, "Debugf"},
{WarnLevel, "Info"},
{WarnLevel, "Infof"},
{ErrorLevel, "Warn"},
{ErrorLevel, "Warnf"},
{FatalLevel, "Error"},
{FatalLevel, "Errorf"},
// with skip level
{WarnLevel, "Debug"},
{ErrorLevel, "Infof"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() != "" {
t.Errorf("logger.%s outputted log message when level is set to %v", tc.op, tc.level)
}
}
}
func TestLoggerWithSameOrHigherLevels(t *testing.T) {
// Logger should log messages at a level
// same as or higher than the specified level.
tests := []struct {
level Level
op string
}{
// same level
{DebugLevel, "Debug"},
{InfoLevel, "Infof"},
{WarnLevel, "Warn"},
{ErrorLevel, "Errorf"},
// higher level
{DebugLevel, "Info"},
{InfoLevel, "Warnf"},
{WarnLevel, "Error"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() == "" {
t.Errorf("logger.%s did not output log message when level is set to %v", tc.op, tc.level)
}
}
}


@@ -7,12 +7,13 @@ package rdb
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"sort"
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
"github.com/spf13/cast" "github.com/spf13/cast"
) )
@@ -25,10 +26,24 @@ type Stats struct {
Dead int Dead int
Processed int Processed int
Failed int Failed int
Queues map[string]int // map of queue name to number of tasks in the queue (e.g., "default": 100, "critical": 20) Queues []*Queue
Timestamp time.Time Timestamp time.Time
} }
// Queue represents a task queue.
type Queue struct {
// Name of the queue (e.g. "default", "critical").
// Note: It doesn't include the prefix "asynq:queues:".
Name string
// Paused indicates whether the queue is paused.
// If true, tasks in the queue should not be processed.
Paused bool
// Size is the number of tasks in the queue.
Size int
}
// DailyStats holds aggregate data for a given day. // DailyStats holds aggregate data for a given day.
type DailyStats struct { type DailyStats struct {
Processed int Processed int
@@ -38,7 +53,7 @@ type DailyStats struct {
// EnqueuedTask is a task in a queue and is ready to be processed. // EnqueuedTask is a task in a queue and is ready to be processed.
type EnqueuedTask struct { type EnqueuedTask struct {
ID xid.ID ID uuid.UUID
Type string Type string
Payload map[string]interface{} Payload map[string]interface{}
Queue string Queue string
@@ -46,14 +61,14 @@ type EnqueuedTask struct {
// InProgressTask is a task that's currently being processed. // InProgressTask is a task that's currently being processed.
type InProgressTask struct { type InProgressTask struct {
ID xid.ID ID uuid.UUID
Type string Type string
Payload map[string]interface{} Payload map[string]interface{}
} }
// ScheduledTask is a task that's scheduled to be processed in the future. // ScheduledTask is a task that's scheduled to be processed in the future.
type ScheduledTask struct { type ScheduledTask struct {
ID xid.ID ID uuid.UUID
Type string Type string
Payload map[string]interface{} Payload map[string]interface{}
ProcessAt time.Time ProcessAt time.Time
@@ -63,7 +78,7 @@ type ScheduledTask struct {
// RetryTask is a task that's in retry queue because worker failed to process the task. // RetryTask is a task that's in retry queue because worker failed to process the task.
type RetryTask struct { type RetryTask struct {
ID xid.ID ID uuid.UUID
Type string Type string
Payload map[string]interface{} Payload map[string]interface{}
// TODO(hibiken): add LastFailedAt time.Time // TODO(hibiken): add LastFailedAt time.Time
@@ -77,7 +92,7 @@ type RetryTask struct {
// DeadTask is a task that has exhausted all retries. // DeadTask is a task that has exhausted all retries.
type DeadTask struct { type DeadTask struct {
ID xid.ID ID uuid.UUID
Type string Type string
Payload map[string]interface{} Payload map[string]interface{}
LastFailedAt time.Time LastFailedAt time.Time
@@ -143,8 +158,12 @@ func (r *RDB) CurrentStats() (*Stats, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
paused, err := r.client.SMembersMap(base.PausedQueues).Result()
if err != nil {
return nil, err
}
stats := &Stats{ stats := &Stats{
Queues: make(map[string]int), Queues: make([]*Queue, 0),
Timestamp: now, Timestamp: now,
} }
for i := 0; i < len(data); i += 2 { for i := 0; i < len(data); i += 2 {
@@ -154,7 +173,14 @@ func (r *RDB) CurrentStats() (*Stats, error) {
switch { switch {
case strings.HasPrefix(key, base.QueuePrefix): case strings.HasPrefix(key, base.QueuePrefix):
stats.Enqueued += val stats.Enqueued += val
stats.Queues[strings.TrimPrefix(key, base.QueuePrefix)] = val q := Queue{
Name: strings.TrimPrefix(key, base.QueuePrefix),
Size: val,
}
if _, exist := paused[key]; exist {
q.Paused = true
}
stats.Queues = append(stats.Queues, &q)
case key == base.InProgressQueue: case key == base.InProgressQueue:
stats.InProgress = val stats.InProgress = val
case key == base.ScheduledQueue: case key == base.ScheduledQueue:
@@ -169,6 +195,9 @@ func (r *RDB) CurrentStats() (*Stats, error) {
stats.Failed = val stats.Failed = val
} }
} }
sort.Slice(stats.Queues, func(i, j int) bool {
return stats.Queues[i].Name < stats.Queues[j].Name
})
return stats, nil return stats, nil
} }
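Since Stats.Queues is now a name-sorted slice of *Queue rather than a map, callers iterate it in a stable order and can read the new Paused flag. A hedged sketch of a consumer (the output format is invented):

// Hypothetical consumer of RDB.CurrentStats.
func printQueueStats(r *RDB) error {
	stats, err := r.CurrentStats()
	if err != nil {
		return err
	}
	for _, q := range stats.Queues { // already sorted by queue name
		state := "active"
		if q.Paused {
			state = "paused"
		}
		fmt.Printf("%-10s size=%d %s\n", q.Name, q.Size, state)
	}
	return nil
}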
@@ -417,7 +446,7 @@ func (r *RDB) ListDead(pgn Pagination) ([]*DeadTask, error) {
// EnqueueDeadTask finds a task that matches the given id and score from dead queue // EnqueueDeadTask finds a task that matches the given id and score from dead queue
// and enqueues it for processing. If a task that matches the id and score // and enqueues it for processing. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound. // does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueDeadTask(id xid.ID, score int64) error { func (r *RDB) EnqueueDeadTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.DeadQueue, id.String(), float64(score)) n, err := r.removeAndEnqueue(base.DeadQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
@@ -431,7 +460,7 @@ func (r *RDB) EnqueueDeadTask(id xid.ID, score int64) error {
// EnqueueRetryTask finds a task that matches the given id and score from retry queue // EnqueueRetryTask finds a task that matches the given id and score from retry queue
// and enqueues it for processing. If a task that matches the id and score // and enqueues it for processing. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound. // does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueRetryTask(id xid.ID, score int64) error { func (r *RDB) EnqueueRetryTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.RetryQueue, id.String(), float64(score)) n, err := r.removeAndEnqueue(base.RetryQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
@@ -445,7 +474,7 @@ func (r *RDB) EnqueueRetryTask(id xid.ID, score int64) error {
// EnqueueScheduledTask finds a task that matches the given id and score from scheduled queue // EnqueueScheduledTask finds a task that matches the given id and score from scheduled queue
// and enqueues it for processing. If a task that matches the id and score does not // and enqueues it for processing. If a task that matches the id and score does not
// exist, it returns ErrTaskNotFound. // exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueScheduledTask(id xid.ID, score int64) error { func (r *RDB) EnqueueScheduledTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.ScheduledQueue, id.String(), float64(score)) n, err := r.removeAndEnqueue(base.ScheduledQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
@@ -524,7 +553,7 @@ func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
// KillRetryTask finds a task that matches the given id and score from retry queue // KillRetryTask finds a task that matches the given id and score from retry queue
// and moves it to dead queue. If a task that matches the id and score does not exist, // and moves it to dead queue. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound. // it returns ErrTaskNotFound.
func (r *RDB) KillRetryTask(id xid.ID, score int64) error { func (r *RDB) KillRetryTask(id uuid.UUID, score int64) error {
n, err := r.removeAndKill(base.RetryQueue, id.String(), float64(score)) n, err := r.removeAndKill(base.RetryQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
@@ -538,7 +567,7 @@ func (r *RDB) KillRetryTask(id xid.ID, score int64) error {
// KillScheduledTask finds a task that matches the given id and score from scheduled queue // KillScheduledTask finds a task that matches the given id and score from scheduled queue
// and moves it to dead queue. If a task that matches the id and score does not exist, // and moves it to dead queue. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound. // it returns ErrTaskNotFound.
func (r *RDB) KillScheduledTask(id xid.ID, score int64) error { func (r *RDB) KillScheduledTask(id uuid.UUID, score int64) error {
n, err := r.removeAndKill(base.ScheduledQueue, id.String(), float64(score)) n, err := r.removeAndKill(base.ScheduledQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
@@ -631,21 +660,21 @@ func (r *RDB) removeAndKillAll(zset string) (int64, error) {
// DeleteDeadTask finds a task that matches the given id and score from dead queue // DeleteDeadTask finds a task that matches the given id and score from dead queue
// and deletes it. If a task that matches the id and score does not exist, // and deletes it. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound. // it returns ErrTaskNotFound.
func (r *RDB) DeleteDeadTask(id xid.ID, score int64) error { func (r *RDB) DeleteDeadTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.DeadQueue, id.String(), float64(score)) return r.deleteTask(base.DeadQueue, id.String(), float64(score))
} }
// DeleteRetryTask finds a task that matches the given id and score from retry queue // DeleteRetryTask finds a task that matches the given id and score from retry queue
// and deletes it. If a task that matches the id and score does not exist, // and deletes it. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound. // it returns ErrTaskNotFound.
func (r *RDB) DeleteRetryTask(id xid.ID, score int64) error { func (r *RDB) DeleteRetryTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.RetryQueue, id.String(), float64(score)) return r.deleteTask(base.RetryQueue, id.String(), float64(score))
} }
// DeleteScheduledTask finds a task that matches the given id and score from // DeleteScheduledTask finds a task that matches the given id and score from
// scheduled queue and deletes it. If a task that matches the id and score // scheduled queue and deletes it. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound. // does not exist, it returns ErrTaskNotFound.
func (r *RDB) DeleteScheduledTask(id xid.ID, score int64) error { func (r *RDB) DeleteScheduledTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.ScheduledQueue, id.String(), float64(score)) return r.deleteTask(base.ScheduledQueue, id.String(), float64(score))
} }
@@ -759,23 +788,23 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
} }
// Note: Script also removes stale keys. // Note: Script also removes stale keys.
var listProcessesCmd = redis.NewScript(` var listServersCmd = redis.NewScript(`
local res = {} local res = {}
local now = tonumber(ARGV[1]) local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf") local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do for _, key in ipairs(keys) do
local ps = redis.call("GET", key) local s = redis.call("GET", key)
if ps then if s then
table.insert(res, ps) table.insert(res, s)
end end
end end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1) redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`) return res`)
// ListProcesses returns the list of process statuses. // ListServers returns the list of server info.
func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) { func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
res, err := listProcessesCmd.Run(r.client, res, err := listServersCmd.Run(r.client,
[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result() []string{base.AllServers}, time.Now().UTC().Unix()).Result()
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -783,16 +812,16 @@ func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
var processes []*base.ProcessInfo var servers []*base.ServerInfo
for _, s := range data { for _, s := range data {
var ps base.ProcessInfo var info base.ServerInfo
err := json.Unmarshal([]byte(s), &ps) err := json.Unmarshal([]byte(s), &info)
if err != nil { if err != nil {
continue // skip bad data continue // skip bad data
} }
processes = append(processes, &ps) servers = append(servers, &info)
} }
return processes, nil return servers, nil
} }
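A small sketch of calling the renamed ListServers; the fields used match the ServerInfo struct earlier in this diff, while the formatting is invented.

// Hypothetical consumer of RDB.ListServers.
func printServers(r *RDB) error {
	servers, err := r.ListServers()
	if err != nil {
		return err
	}
	for _, s := range servers {
		fmt.Printf("%s:%d id=%s status=%s active_workers=%d\n",
			s.Host, s.PID, s.ServerID, s.Status, s.ActiveWorkerCount)
	}
	return nil
}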
// Note: Script also removes stale keys. // Note: Script also removes stale keys.
@@ -830,3 +859,33 @@ func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
} }
return workers, nil return workers, nil
} }
// KEYS[1] -> asynq:paused
// ARGV[1] -> asynq:queues:<qname> - queue to pause
var pauseCmd = redis.NewScript(`
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1])
if ismem == 1 then
return redis.error_reply("queue is already paused")
end
return redis.call("SADD", KEYS[1], ARGV[1])`)
// Pause pauses processing of tasks from the given queue.
func (r *RDB) Pause(qname string) error {
qkey := base.QueueKey(qname)
return pauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err()
}
// KEYS[1] -> asynq:paused
// ARGV[1] -> asynq:queues:<qname> - queue to unpause
var unpauseCmd = redis.NewScript(`
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1])
if ismem == 0 then
return redis.error_reply("queue is not paused")
end
return redis.call("SREM", KEYS[1], ARGV[1])`)
// Unpause resumes processing of tasks from the given queue.
func (r *RDB) Unpause(qname string) error {
qkey := base.QueueKey(qname)
return unpauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err()
}
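A hedged usage sketch of the new pause controls; the Lua scripts above return an error reply when the queue is already in the requested state, which surfaces to the caller as an ordinary Go error. The maintenance scenario below is invented.

// Hypothetical maintenance flow for the "critical" queue.
func maintainCritical(r *RDB) error {
	if err := r.Pause("critical"); err != nil {
		return err // e.g. "queue is already paused"
	}
	// ... do maintenance while processors skip the paused queue ...
	return r.Unpause("critical")
}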


@@ -12,9 +12,9 @@ import (
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
) )
func TestCurrentStats(t *testing.T) { func TestCurrentStats(t *testing.T) {
@@ -38,6 +38,7 @@ func TestCurrentStats(t *testing.T) {
processed int processed int
failed int failed int
allQueues []interface{} allQueues []interface{}
paused []string
want *Stats want *Stats
}{ }{
{ {
@@ -55,6 +56,7 @@ func TestCurrentStats(t *testing.T) {
processed: 120, processed: 120,
failed: 2, failed: 2,
allQueues: []interface{}{base.DefaultQueue, base.QueueKey("critical"), base.QueueKey("low")}, allQueues: []interface{}{base.DefaultQueue, base.QueueKey("critical"), base.QueueKey("low")},
paused: []string{},
want: &Stats{ want: &Stats{
Enqueued: 3, Enqueued: 3,
InProgress: 1, InProgress: 1,
@@ -64,7 +66,12 @@ func TestCurrentStats(t *testing.T) {
Processed: 120, Processed: 120,
Failed: 2, Failed: 2,
Timestamp: now, Timestamp: now,
Queues: map[string]int{base.DefaultQueueName: 1, "critical": 1, "low": 1}, // Queues should be sorted by name.
Queues: []*Queue{
{Name: "critical", Paused: false, Size: 1},
{Name: "default", Paused: false, Size: 1},
{Name: "low", Paused: false, Size: 1},
},
}, },
}, },
{ {
@@ -82,6 +89,7 @@ func TestCurrentStats(t *testing.T) {
processed: 90, processed: 90,
failed: 10, failed: 10,
allQueues: []interface{}{base.DefaultQueue}, allQueues: []interface{}{base.DefaultQueue},
paused: []string{},
want: &Stats{ want: &Stats{
Enqueued: 0, Enqueued: 0,
InProgress: 0, InProgress: 0,
@@ -91,13 +99,52 @@ func TestCurrentStats(t *testing.T) {
Processed: 90, Processed: 90,
Failed: 10, Failed: 10,
Timestamp: now, Timestamp: now,
Queues: map[string]int{base.DefaultQueueName: 0}, Queues: []*Queue{
{Name: base.DefaultQueueName, Paused: false, Size: 0},
},
},
},
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m1},
"critical": {m5},
"low": {m6},
},
inProgress: []*base.TaskMessage{m2},
scheduled: []h.ZSetEntry{
{Msg: m3, Score: float64(now.Add(time.Hour).Unix())},
{Msg: m4, Score: float64(now.Unix())}},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
processed: 120,
failed: 2,
allQueues: []interface{}{base.DefaultQueue, base.QueueKey("critical"), base.QueueKey("low")},
paused: []string{"critical", "low"},
want: &Stats{
Enqueued: 3,
InProgress: 1,
Scheduled: 2,
Retry: 0,
Dead: 0,
Processed: 120,
Failed: 2,
Timestamp: now,
Queues: []*Queue{
{Name: "critical", Paused: true, Size: 1},
{Name: "default", Paused: false, Size: 1},
{Name: "low", Paused: true, Size: 1},
},
}, },
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case h.FlushDB(t, r.client) // clean up db before each test case
for _, qname := range tc.paused {
if err := r.Pause(qname); err != nil {
t.Fatal(err)
}
}
for qname, msgs := range tc.enqueued { for qname, msgs := range tc.enqueued {
h.SeedEnqueuedQueue(t, r.client, msgs, qname) h.SeedEnqueuedQueue(t, r.client, msgs, qname)
} }
@@ -136,7 +183,7 @@ func TestCurrentStatsWithoutData(t *testing.T) {
Processed: 0, Processed: 0,
Failed: 0, Failed: 0,
Timestamp: time.Now(), Timestamp: time.Now(),
Queues: map[string]int{}, Queues: make([]*Queue, 0),
} }
got, err := r.CurrentStats() got, err := r.CurrentStats()
@@ -571,7 +618,7 @@ func TestListScheduledPagination(t *testing.T) {
func TestListRetry(t *testing.T) { func TestListRetry(t *testing.T) {
r := setup(t) r := setup(t)
m1 := &base.TaskMessage{ m1 := &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: "send_email", Type: "send_email",
Queue: "default", Queue: "default",
Payload: map[string]interface{}{"subject": "hello"}, Payload: map[string]interface{}{"subject": "hello"},
@@ -580,7 +627,7 @@ func TestListRetry(t *testing.T) {
Retried: 10, Retried: 10,
} }
m2 := &base.TaskMessage{ m2 := &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: "reindex", Type: "reindex",
Queue: "default", Queue: "default",
Payload: nil, Payload: nil,
@@ -658,12 +705,14 @@ func TestListRetry(t *testing.T) {
func TestListRetryPagination(t *testing.T) { func TestListRetryPagination(t *testing.T) {
r := setup(t) r := setup(t)
// create 100 tasks with an increasing number of wait time. // create 100 tasks with an increasing number of wait time.
now := time.Now()
var seed []h.ZSetEntry
for i := 0; i < 100; i++ { for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil) msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
if err := r.Retry(msg, time.Now().Add(time.Duration(i)*time.Second), "error"); err != nil { processAt := now.Add(time.Duration(i) * time.Second)
t.Fatal(err) seed = append(seed, h.ZSetEntry{Msg: msg, Score: float64(processAt.Unix())})
}
} }
h.SeedRetryQueue(t, r.client, seed)
tests := []struct { tests := []struct {
desc string desc string
@@ -714,14 +763,14 @@ func TestListRetryPagination(t *testing.T) {
func TestListDead(t *testing.T) { func TestListDead(t *testing.T) {
r := setup(t) r := setup(t)
m1 := &base.TaskMessage{ m1 := &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: "send_email", Type: "send_email",
Queue: "default", Queue: "default",
Payload: map[string]interface{}{"subject": "hello"}, Payload: map[string]interface{}{"subject": "hello"},
ErrorMsg: "email server not responding", ErrorMsg: "email server not responding",
} }
m2 := &base.TaskMessage{ m2 := &base.TaskMessage{
ID: xid.New(), ID: uuid.New(),
Type: "reindex", Type: "reindex",
Queue: "default", Queue: "default",
Payload: nil, Payload: nil,
@@ -858,7 +907,7 @@ func TestEnqueueDeadTask(t *testing.T) {
tests := []struct { tests := []struct {
dead []h.ZSetEntry dead []h.ZSetEntry
score int64 score int64
id xid.ID id uuid.UUID
want error // expected return value from calling EnqueueDeadTask want error // expected return value from calling EnqueueDeadTask
wantDead []*base.TaskMessage wantDead []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
@@ -942,7 +991,7 @@ func TestEnqueueRetryTask(t *testing.T) {
tests := []struct { tests := []struct {
retry []h.ZSetEntry retry []h.ZSetEntry
score int64 score int64
id xid.ID id uuid.UUID
want error // expected return value from calling EnqueueRetryTask want error // expected return value from calling EnqueueRetryTask
wantRetry []*base.TaskMessage wantRetry []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
@@ -1026,7 +1075,7 @@ func TestEnqueueScheduledTask(t *testing.T) {
tests := []struct { tests := []struct {
scheduled []h.ZSetEntry scheduled []h.ZSetEntry
score int64 score int64
id xid.ID id uuid.UUID
want error // expected return value from calling EnqueueScheduledTask want error // expected return value from calling EnqueueScheduledTask
wantScheduled []*base.TaskMessage wantScheduled []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage wantEnqueued map[string][]*base.TaskMessage
@@ -1345,7 +1394,7 @@ func TestKillRetryTask(t *testing.T) {
tests := []struct { tests := []struct {
retry []h.ZSetEntry retry []h.ZSetEntry
dead []h.ZSetEntry dead []h.ZSetEntry
id xid.ID id uuid.UUID
score int64 score int64
want error want error
wantRetry []h.ZSetEntry wantRetry []h.ZSetEntry
@@ -1422,7 +1471,7 @@ func TestKillScheduledTask(t *testing.T) {
tests := []struct { tests := []struct {
scheduled []h.ZSetEntry scheduled []h.ZSetEntry
dead []h.ZSetEntry dead []h.ZSetEntry
id xid.ID id uuid.UUID
score int64 score int64
want error want error
wantScheduled []h.ZSetEntry wantScheduled []h.ZSetEntry
@@ -1662,7 +1711,7 @@ func TestDeleteDeadTask(t *testing.T) {
tests := []struct { tests := []struct {
dead []h.ZSetEntry dead []h.ZSetEntry
id xid.ID id uuid.UUID
score int64 score int64
want error want error
wantDead []*base.TaskMessage wantDead []*base.TaskMessage
@@ -1722,7 +1771,7 @@ func TestDeleteRetryTask(t *testing.T) {
tests := []struct { tests := []struct {
retry []h.ZSetEntry retry []h.ZSetEntry
id xid.ID id uuid.UUID
score int64 score int64
want error want error
wantRetry []*base.TaskMessage wantRetry []*base.TaskMessage
@@ -1774,7 +1823,7 @@ func TestDeleteScheduledTask(t *testing.T) {
tests := []struct { tests := []struct {
scheduled []h.ZSetEntry scheduled []h.ZSetEntry
id xid.ID id uuid.UUID
score int64 score int64
want error want error
wantScheduled []*base.TaskMessage wantScheduled []*base.TaskMessage
@@ -2051,74 +2100,63 @@ func TestRemoveQueueError(t *testing.T) {
} }
} }
func TestListProcesses(t *testing.T) { func TestListServers(t *testing.T) {
r := setup(t) r := setup(t)
started1 := time.Now().Add(-time.Hour) started1 := time.Now().Add(-time.Hour)
ps1 := base.NewProcessState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false) info1 := &base.ServerInfo{
ps1.SetStarted(started1)
ps1.SetStatus(base.StatusRunning)
info1 := &base.ProcessInfo{
Concurrency: 10,
Queues: map[string]int{"default": 1},
Host: "do.droplet1", Host: "do.droplet1",
PID: 1234, PID: 1234,
ServerID: "server123",
Concurrency: 10,
Queues: map[string]int{"default": 1},
Status: "running", Status: "running",
Started: started1, Started: started1,
ActiveWorkerCount: 0, ActiveWorkerCount: 0,
} }
started2 := time.Now().Add(-2 * time.Hour) started2 := time.Now().Add(-2 * time.Hour)
ps2 := base.NewProcessState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false) info2 := &base.ServerInfo{
ps2.SetStarted(started2)
ps2.SetStatus(base.StatusStopped)
ps2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
info2 := &base.ProcessInfo{
Concurrency: 20,
Queues: map[string]int{"email": 1},
Host: "do.droplet2", Host: "do.droplet2",
PID: 9876, PID: 9876,
ServerID: "server456",
Concurrency: 20,
Queues: map[string]int{"email": 1},
Status: "stopped", Status: "stopped",
Started: started2, Started: started2,
ActiveWorkerCount: 1, ActiveWorkerCount: 1,
} }
tests := []struct { tests := []struct {
processes []*base.ProcessState data []*base.ServerInfo
want []*base.ProcessInfo
}{ }{
{ {
processes: []*base.ProcessState{}, data: []*base.ServerInfo{},
want: []*base.ProcessInfo{},
}, },
{ {
processes: []*base.ProcessState{ps1}, data: []*base.ServerInfo{info1},
want: []*base.ProcessInfo{info1},
}, },
{ {
processes: []*base.ProcessState{ps1, ps2}, data: []*base.ServerInfo{info1, info2},
want: []*base.ProcessInfo{info1, info2},
}, },
} }
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
for _, ps := range tc.processes { for _, info := range tc.data {
if err := r.WriteProcessState(ps, 5*time.Second); err != nil { if err := r.WriteServerState(info, []*base.WorkerInfo{}, 5*time.Second); err != nil {
t.Fatal(err) t.Fatal(err)
} }
} }
got, err := r.ListProcesses() got, err := r.ListServers()
if err != nil { if err != nil {
t.Errorf("r.ListProcesses returned an error: %v", err) t.Errorf("r.ListServers returned an error: %v", err)
} }
if diff := cmp.Diff(tc.want, got, h.SortProcessInfoOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(tc.data, got, h.SortServerInfoOpt); diff != "" {
t.Errorf("r.ListProcesses returned %v, want %v; (-want,+got)\n%s", t.Errorf("r.ListServers returned %v, want %v; (-want,+got)\n%s",
got, tc.processes, diff) got, tc.data, diff)
} }
} }
} }
@@ -2126,37 +2164,23 @@ func TestListProcesses(t *testing.T) {
func TestListWorkers(t *testing.T) { func TestListWorkers(t *testing.T) {
r := setup(t) r := setup(t)
const ( var (
host = "127.0.0.1" host = "127.0.0.1"
pid = 4567 pid = 4567
m1 = h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "abc123"})
m2 = h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/image/file"})
m3 = h.NewTaskMessage("reindex", map[string]interface{}{})
) )
m1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "abc123"})
m2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/image/file"})
m3 := h.NewTaskMessage("reindex", map[string]interface{}{})
t1 := time.Now().Add(-time.Second)
t2 := time.Now().Add(-10 * time.Second)
t3 := time.Now().Add(-time.Minute)
type workerStats struct {
msg *base.TaskMessage
started time.Time
}
tests := []struct { tests := []struct {
workers []*workerStats data []*base.WorkerInfo
want []*base.WorkerInfo
}{ }{
{ {
workers: []*workerStats{ data: []*base.WorkerInfo{
{m1, t1}, {Host: host, PID: pid, ID: m1.ID.String(), Type: m1.Type, Queue: m1.Queue, Payload: m1.Payload, Started: time.Now().Add(-1 * time.Second)},
{m2, t2}, {Host: host, PID: pid, ID: m2.ID.String(), Type: m2.Type, Queue: m2.Queue, Payload: m2.Payload, Started: time.Now().Add(-5 * time.Second)},
{m3, t3}, {Host: host, PID: pid, ID: m3.ID.String(), Type: m3.Type, Queue: m3.Queue, Payload: m3.Payload, Started: time.Now().Add(-30 * time.Second)},
},
want: []*base.WorkerInfo{
{Host: host, PID: pid, ID: m1.ID, Type: m1.Type, Queue: m1.Queue, Payload: m1.Payload, Started: t1},
{Host: host, PID: pid, ID: m2.ID, Type: m2.Type, Queue: m2.Queue, Payload: m2.Payload, Started: t2},
{Host: host, PID: pid, ID: m3.ID, Type: m3.Type, Queue: m3.Queue, Payload: m3.Payload, Started: t3},
}, },
}, },
} }
@@ -2164,15 +2188,9 @@ func TestListWorkers(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false) err := r.WriteServerState(&base.ServerInfo{}, tc.data, time.Minute)
for _, w := range tc.workers {
ps.AddWorkerStats(w.msg, w.started)
}
err := r.WriteProcessState(ps, time.Minute)
if err != nil { if err != nil {
t.Errorf("could not write process state to redis: %v", err) t.Errorf("could not write server state to redis: %v", err)
continue continue
} }
@@ -2182,8 +2200,165 @@ func TestListWorkers(t *testing.T) {
continue continue
} }
if diff := cmp.Diff(tc.want, got, h.SortWorkerInfoOpt); diff != "" { if diff := cmp.Diff(tc.data, got, h.SortWorkerInfoOpt); diff != "" {
t.Errorf("(*RDB).ListWorkers() = %v, want = %v; (-want,+got)\n%s", got, tc.want, diff) t.Errorf("(*RDB).ListWorkers() = %v, want = %v; (-want,+got)\n%s", got, tc.data, diff)
}
}
}
func TestPause(t *testing.T) {
r := setup(t)
tests := []struct {
initial []string // initial keys in the paused set
qname string // name of the queue to pause
want []string // expected keys in the paused set
}{
{[]string{}, "default", []string{"asynq:queues:default"}},
{[]string{"asynq:queues:default"}, "critical", []string{"asynq:queues:default", "asynq:queues:critical"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Pause(tc.qname)
if err != nil {
t.Errorf("Pause(%q) returned error: %v", tc.qname, err)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%q has members %v, want %v; (-want,+got)\n%s",
base.PausedQueues, got, tc.want, diff)
}
}
}
func TestPauseError(t *testing.T) {
r := setup(t)
tests := []struct {
desc string // test case description
initial []string // initial keys in the paused set
qname string // name of the queue to pause
want []string // expected keys in the paused set
}{
{"queue already paused", []string{"asynq:queues:default"}, "default", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Pause(tc.qname)
if err == nil {
t.Errorf("%s; Pause(%q) returned nil: want error", tc.desc, tc.qname)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%s; %q has members %v, want %v; (-want,+got)\n%s",
tc.desc, base.PausedQueues, got, tc.want, diff)
}
}
}
func TestUnpause(t *testing.T) {
r := setup(t)
tests := []struct {
initial []string // initial keys in the paused set
qname string // name of the queue to unpause
want []string // expected keys in the paused set
}{
{[]string{"asynq:queues:default"}, "default", []string{}},
{[]string{"asynq:queues:default", "asynq:queues:low"}, "low", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Unpause(tc.qname)
if err != nil {
t.Errorf("Unpause(%q) returned error: %v", tc.qname, err)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%q has members %v, want %v; (-want,+got)\n%s",
base.PausedQueues, got, tc.want, diff)
}
}
}
func TestUnpauseError(t *testing.T) {
r := setup(t)
tests := []struct {
desc string // test case description
initial []string // initial keys in the paused set
qname string // name of the queue to unpause
want []string // expected keys in the paused set
}{
{"set is empty", []string{}, "default", []string{}},
{"queue is not in the set", []string{"asynq:queues:default"}, "low", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Unpause(tc.qname)
if err == nil {
t.Errorf("%s; Unpause(%q) returned nil: want error", tc.desc, tc.qname)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%s; %q has members %v, want %v; (-want,+got)\n%s",
tc.desc, base.PausedQueues, got, tc.want, diff)
} }
} }
} }
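These Pause/Unpause tests drive the feature entirely through the asynq:paused set: pausing adds the queue key, unpausing removes it, and both report an error when membership does not actually change. A minimal sketch of that behavior, assuming the two methods are thin wrappers over SADD/SREM (helper names, the literal set key, and error text below are illustrative, not the library's code):

package rdb

import (
	"fmt"

	"github.com/go-redis/redis/v7"
)

// pausedSet stands in for base.PausedQueues; the literal value is assumed for this sketch.
const pausedSet = "asynq:paused"

// pause adds the queue key to the paused set and fails if it is already a member,
// which is the behavior TestPause and TestPauseError exercise.
func pause(c *redis.Client, qkey string) error {
	n, err := c.SAdd(pausedSet, qkey).Result()
	if err != nil {
		return err
	}
	if n == 0 {
		return fmt.Errorf("queue %q is already paused", qkey)
	}
	return nil
}

// unpause removes the queue key and fails if it was not a member,
// matching TestUnpause and TestUnpauseError.
func unpause(c *redis.Client, qkey string) error {
	n, err := c.SRem(pausedSet, qkey).Result()
	if err != nil {
		return err
	}
	if n == 0 {
		return fmt.Errorf("queue %q is not paused", qkey)
	}
	return nil
}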


@@ -9,6 +9,7 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"strconv"
"time" "time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
@@ -54,12 +55,12 @@ return 1`)
// Enqueue inserts the given task to the tail of the queue. // Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error { func (r *RDB) Enqueue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
key := base.QueueKey(msg.Queue) key := base.QueueKey(msg.Queue)
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, bytes).Err() return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, encoded).Err()
} }
// KEYS[1] -> unique key in the form <type>:<payload>:<qname> // KEYS[1] -> unique key in the form <type>:<payload>:<qname>
@@ -81,14 +82,14 @@ return 1
// EnqueueUnique inserts the given task if the task's uniqueness lock can be acquired. // EnqueueUnique inserts the given task if the task's uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired. // It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error { func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
key := base.QueueKey(msg.Queue) key := base.QueueKey(msg.Queue)
res, err := enqueueUniqueCmd.Run(r.client, res, err := enqueueUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, key, base.AllQueues}, []string{msg.UniqueKey, key, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), bytes).Result() msg.ID.String(), int(ttl.Seconds()), encoded).Result()
if err != nil { if err != nil {
return err return err
} }
@@ -102,78 +103,110 @@ func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
return nil return nil
} }
// Dequeue queries given queues in order and pops a task message if there is one and returns it. // Dequeue queries given queues in order and pops a task message
// off a queue if one exists and returns the message and deadline.
// Dequeue skips a queue if the queue is paused.
// If all queues are empty, ErrNoProcessableTask error is returned. // If all queues are empty, ErrNoProcessableTask error is returned.
func (r *RDB) Dequeue(qnames ...string) (*base.TaskMessage, error) { func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Time, err error) {
var data string var qkeys []interface{}
var err error for _, q := range qnames {
if len(qnames) == 1 { qkeys = append(qkeys, base.QueueKey(q))
data, err = r.dequeueSingle(base.QueueKey(qnames[0]))
} else {
var keys []string
for _, q := range qnames {
keys = append(keys, base.QueueKey(q))
}
data, err = r.dequeue(keys...)
} }
data, d, err := r.dequeue(qkeys...)
if err == redis.Nil { if err == redis.Nil {
return nil, ErrNoProcessableTask return nil, time.Time{}, ErrNoProcessableTask
} }
if err != nil { if err != nil {
return nil, err return nil, time.Time{}, err
} }
var msg base.TaskMessage if msg, err = base.DecodeMessage(data); err != nil {
err = json.Unmarshal([]byte(data), &msg) return nil, time.Time{}, err
if err != nil {
return nil, err
} }
return &msg, nil return msg, time.Unix(d, 0), nil
} }
func (r *RDB) dequeueSingle(queue string) (data string, err error) { // KEYS[1] -> asynq:in_progress
// timeout needed to avoid blocking forever // KEYS[2] -> asynq:paused
return r.client.BRPopLPush(queue, base.InProgressQueue, time.Second).Result() // KEYS[3] -> asynq:deadlines
} // ARGV[1] -> current time in Unix time
// ARGV[2:] -> List of queues to query in order
// KEYS[1] -> asynq:in_progress //
// ARGV -> List of queues to query in order // dequeueCmd checks whether a queue is paused first, before
// calling RPOPLPUSH to pop a task from the queue.
// It computes the task deadline by inspecting the Timeout and Deadline fields,
// and inserts the task into the deadlines set with that deadline as its score.
var dequeueCmd = redis.NewScript(` var dequeueCmd = redis.NewScript(`
local res for i = 2, table.getn(ARGV) do
for _, qkey in ipairs(ARGV) do local qkey = ARGV[i]
res = redis.call("RPOPLPUSH", qkey, KEYS[1]) if redis.call("SISMEMBER", KEYS[2], qkey) == 0 then
if res then local msg = redis.call("RPOPLPUSH", qkey, KEYS[1])
return res if msg then
local decoded = cjson.decode(msg)
local timeout = decoded["Timeout"]
local deadline = decoded["Deadline"]
local score
if timeout ~= 0 and deadline ~= 0 then
score = math.min(ARGV[1]+timeout, deadline)
elseif timeout ~= 0 then
score = ARGV[1] + timeout
elseif deadline ~= 0 then
score = deadline
else
return redis.error_reply("asynq internal error: both timeout and deadline are not set")
end
redis.call("ZADD", KEYS[3], score, msg)
return {msg, score}
end
end end
end end
return res`) return nil`)
func (r *RDB) dequeue(queues ...string) (data string, err error) { func (r *RDB) dequeue(qkeys ...interface{}) (msgjson string, deadline int64, err error) {
var args []interface{} var args []interface{}
for _, qkey := range queues { args = append(args, time.Now().Unix())
args = append(args, qkey) args = append(args, qkeys...)
} res, err := dequeueCmd.Run(r.client,
res, err := dequeueCmd.Run(r.client, []string{base.InProgressQueue}, args...).Result() []string{base.InProgressQueue, base.PausedQueues, base.KeyDeadlines}, args...).Result()
if err != nil { if err != nil {
return "", err return "", 0, err
} }
return cast.ToStringE(res) data, err := cast.ToSliceE(res)
if err != nil {
return "", 0, err
}
if len(data) != 2 {
return "", 0, fmt.Errorf("asynq: internal error: dequeue command returned %d values", len(data))
}
if msgjson, err = cast.ToStringE(data[0]); err != nil {
return "", 0, err
}
if deadline, err = cast.ToInt64E(data[1]); err != nil {
return "", 0, err
}
return msgjson, deadline, nil
} }
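The score the script stores in asynq:deadlines is derived from the decoded message: with both Timeout and Deadline set it takes whichever comes first, with only one set it uses that one, and a message with neither is rejected as an internal error. The same rule in Go, as a reading aid for the Lua above (the function is illustrative and not part of the package):

package rdb

import (
	"errors"
	"time"
)

// deadlineScore mirrors the branching in dequeueCmd: timeout is a relative
// duration in seconds, deadline an absolute Unix timestamp, and zero means
// "not set" for either field.
func deadlineScore(now time.Time, timeout, deadline int64) (int64, error) {
	switch {
	case timeout != 0 && deadline != 0:
		byTimeout := now.Unix() + timeout
		if deadline < byTimeout {
			return deadline, nil
		}
		return byTimeout, nil
	case timeout != 0:
		return now.Unix() + timeout, nil
	case deadline != 0:
		return deadline, nil
	default:
		return 0, errors.New("asynq internal error: both timeout and deadline are not set")
	}
}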
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd> // KEYS[2] -> asynq:deadlines
// KEYS[3] -> unique key in the format <type>:<payload>:<qname> // KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> unique key in the format <type>:<payload>:<qname>
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp // ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID // ARGV[3] -> task ID
// Note: LREM count ZERO means "remove all elements equal to val" // Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(` var doneCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
local n = redis.call("INCR", KEYS[2]) return redis.error_reply("NOT FOUND")
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
end end
if string.len(KEYS[3]) > 0 and redis.call("GET", KEYS[3]) == ARGV[3] then if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
redis.call("DEL", KEYS[3]) return redis.error_reply("NOT FOUND")
end
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[2])
end
if string.len(KEYS[4]) > 0 and redis.call("GET", KEYS[4]) == ARGV[3] then
redis.call("DEL", KEYS[4])
end end
return redis.status_reply("OK") return redis.status_reply("OK")
`) `)
@@ -181,7 +214,7 @@ return redis.status_reply("OK")
// Done removes the task from in-progress queue to mark the task as done. // Done removes the task from in-progress queue to mark the task as done.
// It removes a uniqueness lock acquired by the task, if any. // It removes a uniqueness lock acquired by the task, if any.
func (r *RDB) Done(msg *base.TaskMessage) error { func (r *RDB) Done(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
@@ -189,28 +222,34 @@ func (r *RDB) Done(msg *base.TaskMessage) error {
processedKey := base.ProcessedKey(now) processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return doneCmd.Run(r.client, return doneCmd.Run(r.client,
[]string{base.InProgressQueue, processedKey, msg.UniqueKey}, []string{base.InProgressQueue, base.KeyDeadlines, processedKey, msg.UniqueKey},
bytes, expireAt.Unix(), msg.ID.String()).Err() encoded, expireAt.Unix(), msg.ID.String()).Err()
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:<qname> // KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:queues:<qname>
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue. // Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(` var requeueCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("RPUSH", KEYS[2], ARGV[1]) return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
redis.call("RPUSH", KEYS[3], ARGV[1])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue. // Requeue moves the task from in-progress queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error { func (r *RDB) Requeue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
return requeueCmd.Run(r.client, return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.QueueKey(msg.Queue)}, []string{base.InProgressQueue, base.KeyDeadlines, base.QueueKey(msg.Queue)},
string(bytes)).Err() encoded).Err()
} }
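Dequeue, Done and Requeue together form the task lifecycle on the broker side: a worker pops a message plus its deadline, runs the handler under that deadline, then either removes the message from the in-progress list and deadlines set (Done) or pushes it back onto its queue (Requeue). A compressed, hedged sketch of that loop; a real processor also chooses between Retry and Kill on failure, and the handler type here is illustrative:

package worker

import (
	"context"
	"time"

	"github.com/hibiken/asynq/internal/base"
	"github.com/hibiken/asynq/internal/rdb"
)

// handleFunc stands in for the user-provided task handler.
type handleFunc func(ctx context.Context, msg *base.TaskMessage) error

// runOnce pops one task, runs the handler with the broker-supplied deadline,
// and settles the task: Done on success, Requeue on failure (this sketch
// skips the Retry/Kill decision a real processor would make).
func runOnce(r *rdb.RDB, handle handleFunc, qnames ...string) error {
	msg, deadline, err := r.Dequeue(qnames...)
	if err == rdb.ErrNoProcessableTask {
		time.Sleep(time.Second) // queues are empty; back off briefly
		return nil
	}
	if err != nil {
		return err
	}
	ctx, cancel := context.WithDeadline(context.Background(), deadline)
	defer cancel()
	if herr := handle(ctx, msg); herr != nil {
		return r.Requeue(msg)
	}
	return r.Done(msg)
}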
// KEYS[1] -> asynq:scheduled // KEYS[1] -> asynq:scheduled
@@ -226,7 +265,7 @@ return 1
// Schedule adds the task to the backlog queue to be processed in the future. // Schedule adds the task to the backlog queue to be processed in the future.
func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error { func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
@@ -234,7 +273,7 @@ func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
score := float64(processAt.Unix()) score := float64(processAt.Unix())
return scheduleCmd.Run(r.client, return scheduleCmd.Run(r.client,
[]string{base.ScheduledQueue, base.AllQueues}, []string{base.ScheduledQueue, base.AllQueues},
score, bytes, qkey).Err() score, encoded, qkey).Err()
} }
// KEYS[1] -> unique key in the format <type>:<payload>:<qname> // KEYS[1] -> unique key in the format <type>:<payload>:<qname>
@@ -258,7 +297,7 @@ return 1
// ScheduleUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired. // ScheduleUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired. // It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error { func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
bytes, err := json.Marshal(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
@@ -266,7 +305,7 @@ func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl tim
score := float64(processAt.Unix()) score := float64(processAt.Unix())
res, err := scheduleUniqueCmd.Run(r.client, res, err := scheduleUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.ScheduledQueue, base.AllQueues}, []string{msg.UniqueKey, base.ScheduledQueue, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), score, bytes, qkey).Result() msg.ID.String(), int(ttl.Seconds()), score, encoded, qkey).Result()
if err != nil { if err != nil {
return err return err
} }
@@ -281,37 +320,43 @@ func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl tim
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry // KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:processed:<yyyy-mm-dd> // KEYS[3] -> asynq:retry
// KEYS[4] -> asynq:failure:<yyyy-mm-dd> // KEYS[4] -> asynq:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue // ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp // ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp // ARGV[4] -> stats expiration timestamp
var retryCmd = redis.NewScript(` var retryCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2]) return redis.error_reply("NOT FOUND")
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end end
local m = redis.call("INCR", KEYS[4]) if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
if tonumber(m) == 1 then return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4]) redis.call("EXPIREAT", KEYS[4], ARGV[4])
end end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[4])
end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Retry moves the task from in-progress to retry queue, incrementing retry count // Retry moves the task from in-progress to retry queue, incrementing retry count
// and assigning error message to the task message. // and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error { func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
bytesToRemove, err := json.Marshal(msg) msgToRemove, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
modified := *msg modified := *msg
modified.Retried++ modified.Retried++
modified.ErrorMsg = errMsg modified.ErrorMsg = errMsg
bytesToAdd, err := json.Marshal(&modified) msgToAdd, err := base.EncodeMessage(&modified)
if err != nil { if err != nil {
return err return err
} }
@@ -320,8 +365,8 @@ func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) e
failureKey := base.FailureKey(now) failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return retryCmd.Run(r.client, return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.RetryQueue, processedKey, failureKey}, []string{base.InProgressQueue, base.KeyDeadlines, base.RetryQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), processAt.Unix(), expireAt.Unix()).Err() msgToRemove, msgToAdd, processAt.Unix(), expireAt.Unix()).Err()
} }
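Retry takes the retry_at timestamp from its caller; in the server that value comes from a retryDelayFunc applied to the task's failure count. One common choice is capped exponential backoff, sketched here as an illustration (this is not asynq's default formula):

package worker

import (
	"math"
	"time"
)

// retryAt returns when a task that has already failed n times should run
// again: 2^n seconds from now, capped at one hour.
func retryAt(now time.Time, n int) time.Time {
	delay := time.Duration(math.Pow(2, float64(n))) * time.Second
	if delay > time.Hour {
		delay = time.Hour
	}
	return now.Add(delay)
}

A caller would then schedule the retry with something like r.Retry(msg, retryAt(time.Now(), msg.Retried), err.Error()).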
const ( const (
@@ -330,9 +375,10 @@ const (
) )
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:dead // KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:processed:<yyyy-mm-dd> // KEYS[3] -> asynq:dead
// KEYS[4] -> asynq.failure:<yyyy-mm-dd> // KEYS[4] -> asynq:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq.failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue // ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp // ARGV[3] -> died_at UNIX timestamp
@@ -340,31 +386,36 @@ const (
// ARGV[5] -> max number of tasks in dead queue (e.g., 100) // ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp // ARGV[6] -> stats expiration timestamp
var killCmd = redis.NewScript(` var killCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1]) if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2]) return redis.error_reply("NOT FOUND")
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end end
local m = redis.call("INCR", KEYS[4]) if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
if tonumber(m) == 1 then return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[3], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[3], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6]) redis.call("EXPIREAT", KEYS[4], ARGV[6])
end end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[6])
end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Kill sends the task to "dead" queue from in-progress queue, assigning // Kill sends the task to "dead" queue from in-progress queue, assigning
// the error message to the task. // the error message to the task.
// It also trims the set by timestamp and set size. // It also trims the set by timestamp and set size.
func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error { func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
bytesToRemove, err := json.Marshal(msg) msgToRemove, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
modified := *msg modified := *msg
modified.ErrorMsg = errMsg modified.ErrorMsg = errMsg
bytesToAdd, err := json.Marshal(&modified) msgToAdd, err := base.EncodeMessage(&modified)
if err != nil { if err != nil {
return err return err
} }
@@ -374,51 +425,21 @@ func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
failureKey := base.FailureKey(now) failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return killCmd.Run(r.client, return killCmd.Run(r.client,
[]string{base.InProgressQueue, base.DeadQueue, processedKey, failureKey}, []string{base.InProgressQueue, base.KeyDeadlines, base.DeadQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err() msgToRemove, msgToAdd, now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err()
} }
// KEYS[1] -> asynq:in_progress // CheckAndEnqueue checks for all scheduled/retry tasks and enqueues any tasks that
// ARGV[1] -> queue prefix // are ready to be processed.
var requeueAllCmd = redis.NewScript(` func (r *RDB) CheckAndEnqueue() (err error) {
local msgs = redis.call("LRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("RPUSH", qkey, msg)
redis.call("LREM", KEYS[1], 0, msg)
end
return table.getn(msgs)`)
// RequeueAll moves all tasks from in-progress list to the queue
// and reports the number of tasks restored.
func (r *RDB) RequeueAll() (int64, error) {
res, err := requeueAllCmd.Run(r.client, []string{base.InProgressQueue}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
n, ok := res.(int64)
if !ok {
return 0, fmt.Errorf("could not cast %v to int64", res)
}
return n, nil
}
// CheckAndEnqueue checks for all scheduled tasks and enqueues any tasks that
// have to be processed.
//
// qnames specifies to which queues to send tasks.
func (r *RDB) CheckAndEnqueue(qnames ...string) error {
delayed := []string{base.ScheduledQueue, base.RetryQueue} delayed := []string{base.ScheduledQueue, base.RetryQueue}
for _, zset := range delayed { for _, zset := range delayed {
var err error n := 1
if len(qnames) == 1 { for n != 0 {
err = r.forwardSingle(zset, base.QueueKey(qnames[0])) n, err = r.forward(zset)
} else { if err != nil {
err = r.forward(zset) return err
} }
if err != nil {
return err
} }
} }
return nil return nil
@@ -427,53 +448,61 @@ func (r *RDB) CheckAndEnqueue(qnames ...string) error {
// KEYS[1] -> source queue (e.g. scheduled or retry queue) // KEYS[1] -> source queue (e.g. scheduled or retry queue)
// ARGV[1] -> current unix time // ARGV[1] -> current unix time
// ARGV[2] -> queue prefix // ARGV[2] -> queue prefix
// Note: Script moves tasks up to 100 at a time to keep the runtime of script short.
var forwardCmd = redis.NewScript(` var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1]) local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1], "LIMIT", 0, 100)
for _, msg in ipairs(msgs) do for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg) local decoded = cjson.decode(msg)
local qkey = ARGV[2] .. decoded["Queue"] local qkey = ARGV[2] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg) redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg) redis.call("ZREM", KEYS[1], msg)
end end
return msgs`) return table.getn(msgs)`)
// forward moves all tasks with a score less than the current unix time // forward moves tasks with a score less than the current unix time
// from the src zset. // from the src zset. It returns the number of tasks moved.
func (r *RDB) forward(src string) error { func (r *RDB) forward(src string) (int, error) {
now := float64(time.Now().Unix()) now := float64(time.Now().Unix())
return forwardCmd.Run(r.client, res, err := forwardCmd.Run(r.client,
[]string{src}, now, base.QueuePrefix).Err() []string{src}, now, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
return cast.ToInt(res), nil
} }
// KEYS[1] -> source queue (e.g. scheduled or retry queue) // ListDeadlineExceeded returns a list of task messages that have exceeded the given deadline.
// KEYS[2] -> destination queue func (r *RDB) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) {
var forwardSingleCmd = redis.NewScript(` var msgs []*base.TaskMessage
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1]) opt := &redis.ZRangeBy{
for _, msg in ipairs(msgs) do Min: "-inf",
redis.call("LPUSH", KEYS[2], msg) Max: strconv.FormatInt(deadline.Unix(), 10),
redis.call("ZREM", KEYS[1], msg) }
end res, err := r.client.ZRangeByScore(base.KeyDeadlines, opt).Result()
return msgs`) if err != nil {
return nil, err
// forwardSingle moves all tasks with a score less than the current unix time }
// from the src zset to dst list. for _, s := range res {
func (r *RDB) forwardSingle(src, dst string) error { msg, err := base.DecodeMessage(s)
now := float64(time.Now().Unix()) if err != nil {
return forwardSingleCmd.Run(r.client, return nil, err
[]string{src, dst}, now).Err() }
msgs = append(msgs, msg)
}
return msgs, nil
} }
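ListDeadlineExceeded is the query a recoverer-style goroutine needs: it scans asynq:deadlines up to a cutoff and returns the decoded messages, that is, in-progress tasks whose workers likely died before calling Done. A hedged sketch of such a loop (the grace period and error text are illustrative):

package worker

import (
	"time"

	"github.com/hibiken/asynq/internal/rdb"
)

// recoverOnce retries every in-progress task whose deadline passed more than
// grace ago, so tasks owned by a crashed worker are not lost. Retry also
// removes the message from the in-progress list and the deadlines set.
func recoverOnce(r *rdb.RDB, grace time.Duration) error {
	cutoff := time.Now().Add(-grace)
	msgs, err := r.ListDeadlineExceeded(cutoff)
	if err != nil {
		return err
	}
	for _, msg := range msgs {
		if err := r.Retry(msg, time.Now(), "deadline exceeded"); err != nil {
			return err
		}
	}
	return nil
}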
// KEYS[1] -> asynq:ps:<host:pid> // KEYS[1] -> asynq:servers:<host:pid:sid>
// KEYS[2] -> asynq:ps // KEYS[2] -> asynq:servers
// KEYS[3] -> asynq:workers<host:pid> // KEYS[3] -> asynq:workers<host:pid:sid>
// keys[4] -> asynq:workers // KEYS[4] -> asynq:workers
// ARGV[1] -> expiration time // ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds // ARGV[2] -> TTL in seconds
// ARGV[3] -> process info // ARGV[3] -> server info
// ARGV[4:] -> alternate key-value pair of (worker id, worker data) // ARGV[4:] -> alternate key-value pair of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score. // Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996 // ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeProcessInfoCmd = redis.NewScript(` var writeServerStateCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3]) redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1]) redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1])
redis.call("DEL", KEYS[3]) redis.call("DEL", KEYS[3])
@@ -484,50 +513,45 @@ redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3]) redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// WriteProcessState writes process state data to redis with expiration set to the value ttl. // WriteServerState writes server state data to redis with expiration set to the value ttl.
func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error { func (r *RDB) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
info := ps.Get()
bytes, err := json.Marshal(info) bytes, err := json.Marshal(info)
if err != nil { if err != nil {
return err return err
} }
var args []interface{} // args to the lua script
exp := time.Now().Add(ttl).UTC() exp := time.Now().Add(ttl).UTC()
workers := ps.GetWorkers() args := []interface{}{float64(exp.Unix()), ttl.Seconds(), bytes} // args to the lua script
args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
for _, w := range workers { for _, w := range workers {
bytes, err := json.Marshal(w) bytes, err := json.Marshal(w)
if err != nil { if err != nil {
continue // skip bad data continue // skip bad data
} }
args = append(args, w.ID.String(), bytes) args = append(args, w.ID, bytes)
} }
pkey := base.ProcessInfoKey(info.Host, info.PID) skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
wkey := base.WorkersKey(info.Host, info.PID) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
return writeProcessInfoCmd.Run(r.client, return writeServerStateCmd.Run(r.client,
[]string{pkey, base.AllProcesses, wkey, base.AllWorkers}, []string{skey, base.AllServers, wkey, base.AllWorkers},
args...).Err() args...).Err()
} }
// KEYS[1] -> asynq:ps // KEYS[1] -> asynq:servers
// KEYS[2] -> asynq:ps:<host:pid> // KEYS[2] -> asynq:servers:<host:pid:sid>
// KEYS[3] -> asynq:workers // KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers<host:pid> // KEYS[4] -> asynq:workers<host:pid:sid>
var clearProcessInfoCmd = redis.NewScript(` var clearServerStateCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2]) redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2]) redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4]) redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4]) redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// ClearProcessState deletes process state data from redis. // ClearServerState deletes server state data from redis.
func (r *RDB) ClearProcessState(ps *base.ProcessState) error { func (r *RDB) ClearServerState(host string, pid int, serverID string) error {
info := ps.Get() skey := base.ServerInfoKey(host, pid, serverID)
host, pid := info.Host, info.PID wkey := base.WorkersKey(host, pid, serverID)
pkey := base.ProcessInfoKey(host, pid) return clearServerStateCmd.Run(r.client,
wkey := base.WorkersKey(host, pid) []string{base.AllServers, skey, base.AllWorkers, wkey}).Err()
return clearProcessInfoCmd.Run(r.client,
[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err()
} }
// CancelationPubSub returns a pubsub for cancelation messages. // CancelationPubSub returns a pubsub for cancelation messages.


@@ -0,0 +1,190 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package testbroker exports a broker implementation that should be used in package testing.
package testbroker
import (
"errors"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
)
var errRedisDown = errors.New("asynqtest: redis is down")
// TestBroker is a broker implementation that makes it possible
// to simulate Redis failures in tests.
type TestBroker struct {
mu sync.Mutex
sleeping bool
// real broker
real base.Broker
}
// Make sure TestBroker implements Broker interface at compile time.
var _ base.Broker = (*TestBroker)(nil)
func NewTestBroker(b base.Broker) *TestBroker {
return &TestBroker{real: b}
}
func (tb *TestBroker) Sleep() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = true
}
func (tb *TestBroker) Wakeup() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = false
}
func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Enqueue(msg)
}
func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.EnqueueUnique(msg, ttl)
}
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.Dequeue(qnames...)
}
func (tb *TestBroker) Done(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Done(msg)
}
func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(msg)
}
func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(msg, processAt, ttl)
}
func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(msg, processAt, errMsg)
}
func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Kill(msg, errMsg)
}
func (tb *TestBroker) CheckAndEnqueue() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.CheckAndEnqueue()
}
func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.ListDeadlineExceeded(deadline)
}
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.WriteServerState(info, workers, ttl)
}
func (tb *TestBroker) ClearServerState(host string, pid int, serverID string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ClearServerState(host, pid, serverID)
}
func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.CancelationPubSub()
}
func (tb *TestBroker) PublishCancelation(id string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.PublishCancelation(id)
}
func (tb *TestBroker) Close() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Close()
}
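Because TestBroker implements base.Broker, a test can hand it to any component that expects a broker and toggle simulated Redis outages with Sleep and Wakeup. A sketch of the intended usage (the test name, redis address and DB number are illustrative):

package asynq_test

import (
	"testing"

	"github.com/go-redis/redis/v7"
	h "github.com/hibiken/asynq/internal/asynqtest"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/testbroker"
)

func TestEnqueueDuringRedisOutage(t *testing.T) {
	r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 14}))
	broker := testbroker.NewTestBroker(r)
	msg := h.NewTaskMessage("send_email", nil)

	broker.Sleep() // simulate redis going down
	if err := broker.Enqueue(msg); err == nil {
		t.Fatal("Enqueue succeeded while redis was down; want error")
	}

	broker.Wakeup() // simulate redis coming back up
	if err := broker.Enqueue(msg); err != nil {
		t.Fatalf("Enqueue failed after Wakeup: %v", err)
	}
}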


@@ -5,6 +5,7 @@
package asynq package asynq
import ( import (
"encoding/json"
"fmt" "fmt"
"time" "time"
@@ -30,6 +31,19 @@ func (p Payload) Has(key string) bool {
return ok return ok
} }
func toInt(v interface{}) (int, error) {
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return int(val), nil
default:
return cast.ToIntE(v)
}
}
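toInt exists because a payload decoded with json.Decoder.UseNumber() (which is presumably what base.DecodeMessage does) carries json.Number values rather than float64, so large integers survive the round trip without being rounded to the float64 mantissa. A standalone illustration of the difference (the example value is arbitrary):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	const payload = `{"user_id": 9007199254740993}` // 2^53+1: not exactly representable as float64

	// Plain Unmarshal decodes numbers as float64 and silently rounds.
	var lossy map[string]interface{}
	json.Unmarshal([]byte(payload), &lossy)
	fmt.Printf("%.0f\n", lossy["user_id"]) // prints 9007199254740992

	// UseNumber keeps the digits as json.Number, so Int64 recovers the exact value.
	var exact map[string]interface{}
	dec := json.NewDecoder(bytes.NewReader([]byte(payload)))
	dec.UseNumber()
	dec.Decode(&exact)
	n, _ := exact["user_id"].(json.Number).Int64()
	fmt.Println(n) // prints 9007199254740993
}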
// GetString returns a string value if a string type is associated with // GetString returns a string value if a string type is associated with
// the key, otherwise reports an error. // the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) { func (p Payload) GetString(key string) (string, error) {
@@ -47,7 +61,7 @@ func (p Payload) GetInt(key string) (int, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
return cast.ToIntE(v) return toInt(v)
} }
// GetFloat64 returns a float64 value if a numeric type is associated with // GetFloat64 returns a float64 value if a numeric type is associated with
@@ -57,7 +71,12 @@ func (p Payload) GetFloat64(key string) (float64, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
return cast.ToFloat64E(v) switch v := v.(type) {
case json.Number:
return v.Float64()
default:
return cast.ToFloat64E(v)
}
} }
// GetBool returns a boolean value if a boolean type is associated with // GetBool returns a boolean value if a boolean type is associated with
@@ -87,7 +106,20 @@ func (p Payload) GetIntSlice(key string) ([]int, error) {
if !ok { if !ok {
return nil, &errKeyNotFound{key} return nil, &errKeyNotFound{key}
} }
return cast.ToIntSliceE(v) switch v := v.(type) {
case []interface{}:
var res []int
for _, elem := range v {
val, err := toInt(elem)
if err != nil {
return nil, err
}
res = append(res, int(val))
}
return res, nil
default:
return cast.ToIntSliceE(v)
}
} }
// GetStringMap returns a map of string to empty interface // GetStringMap returns a map of string to empty interface
@@ -131,7 +163,20 @@ func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
if !ok { if !ok {
return nil, &errKeyNotFound{key} return nil, &errKeyNotFound{key}
} }
return cast.ToStringMapIntE(v) switch v := v.(type) {
case map[string]interface{}:
res := make(map[string]int)
for key, val := range v {
ival, err := toInt(val)
if err != nil {
return nil, err
}
res[key] = ival
}
return res, nil
default:
return cast.ToStringMapIntE(v)
}
} }
// GetStringMapBool returns a map of string to boolean // GetStringMapBool returns a map of string to boolean
@@ -162,5 +207,14 @@ func (p Payload) GetDuration(key string) (time.Duration, error) {
if !ok { if !ok {
return 0, &errKeyNotFound{key} return 0, &errKeyNotFound{key}
} }
return cast.ToDurationE(v) switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return time.Duration(val), nil
default:
return cast.ToDurationE(v)
}
} }


@@ -10,6 +10,7 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
@@ -40,12 +41,11 @@ func TestPayloadString(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -85,12 +85,11 @@ func TestPayloadInt(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -130,12 +129,11 @@ func TestPayloadFloat64(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -175,12 +173,11 @@ func TestPayloadBool(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -221,12 +218,11 @@ func TestPayloadStringSlice(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -268,12 +264,11 @@ func TestPayloadIntSlice(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -315,21 +310,28 @@ func TestPayloadStringMap(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
payload = Payload{out.Payload} payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key) got, err = payload.GetStringMap(tc.key)
diff = cmp.Diff(got, tc.data[tc.key]) ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
switch val.(type) {
case json.Number:
return true
default:
return false
}
})
diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
if err != nil || diff != "" { if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil", t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
tc.key, got, err, tc.data[tc.key]) tc.key, got, err, tc.data[tc.key], diff)
} }
// access non-existent key. // access non-existent key.
@@ -362,12 +364,11 @@ func TestPayloadStringMapString(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -413,12 +414,11 @@ func TestPayloadStringMapStringSlice(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -465,12 +465,11 @@ func TestPayloadStringMapInt(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -517,12 +516,11 @@ func TestPayloadStringMapBool(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -564,12 +562,11 @@ func TestPayloadTime(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@@ -611,12 +608,11 @@ func TestPayloadDuration(t *testing.T) {
// encode and then decode task message. // encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data) in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in) encoded, err := base.EncodeMessage(in)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
var out base.TaskMessage out, err := base.DecodeMessage(encoded)
err = json.Unmarshal(b, &out)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }


@@ -13,15 +13,14 @@ import (
"time" "time"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"golang.org/x/time/rate" "golang.org/x/time/rate"
) )
type processor struct { type processor struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
ps *base.ProcessState
handler Handler handler Handler
@@ -34,6 +33,8 @@ type processor struct {
errHandler ErrorHandler errHandler ErrorHandler
shutdownTimeout time.Duration
// channel via which to send sync requests to syncer. // channel via which to send sync requests to syncer.
syncRequestCh chan<- *syncRequest syncRequestCh chan<- *syncRequest
@@ -49,43 +50,60 @@ type processor struct {
done chan struct{} done chan struct{}
once sync.Once once sync.Once
// abort channel is closed when the shutdown of the "processor" goroutine starts. // quit channel is closed when the shutdown of the "processor" goroutine starts.
abort chan struct{}
// quit channel communicates to the in-flight worker goroutines to stop.
quit chan struct{} quit chan struct{}
// abort channel communicates to the in-flight worker goroutines to stop.
abort chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks. // cancelations is a set of cancel functions for all in-progress tasks.
cancelations *base.Cancelations cancelations *base.Cancelations
starting chan<- *base.TaskMessage
finished chan<- *base.TaskMessage
} }
type retryDelayFunc func(n int, err error, task *Task) time.Duration type retryDelayFunc func(n int, err error, task *Task) time.Duration
type processorParams struct {
logger *log.Logger
broker base.Broker
retryDelayFunc retryDelayFunc
syncCh chan<- *syncRequest
cancelations *base.Cancelations
concurrency int
queues map[string]int
strictPriority bool
errHandler ErrorHandler
shutdownTimeout time.Duration
starting chan<- *base.TaskMessage
finished chan<- *base.TaskMessage
}
// newProcessor constructs a new processor. // newProcessor constructs a new processor.
func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc, func newProcessor(params processorParams) *processor {
syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor { queues := normalizeQueues(params.queues)
info := ps.Get()
qcfg := normalizeQueueCfg(info.Queues)
orderedQueues := []string(nil) orderedQueues := []string(nil)
if info.StrictPriority { if params.strictPriority {
orderedQueues = sortByPriority(qcfg) orderedQueues = sortByPriority(queues)
} }
return &processor{ return &processor{
logger: l, logger: params.logger,
rdb: r, broker: params.broker,
ps: ps, queueConfig: queues,
queueConfig: qcfg,
orderedQueues: orderedQueues, orderedQueues: orderedQueues,
retryDelayFunc: fn, retryDelayFunc: params.retryDelayFunc,
syncRequestCh: syncCh, syncRequestCh: params.syncCh,
cancelations: c, cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1), errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, info.Concurrency), sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}), done: make(chan struct{}),
abort: make(chan struct{}),
quit: make(chan struct{}), quit: make(chan struct{}),
errHandler: errHandler, abort: make(chan struct{}),
errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }), handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
starting: params.starting,
finished: params.finished,
} }
} }
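When strictPriority is set, newProcessor precomputes orderedQueues so that dequeueing always consults higher-priority queues first. The actual ordering lives in sortByPriority (not shown in this hunk); a hedged stand-in that conveys the idea:

package asynq

import "sort"

// strictOrder is an illustrative stand-in for sortByPriority: it returns
// queue names from highest to lowest priority value so that, under strict
// priority, a lower-priority queue is only consulted when every higher one
// is empty.
func strictOrder(queues map[string]int) []string {
	names := make([]string, 0, len(queues))
	for name := range queues {
		names = append(names, name)
	}
	sort.Slice(names, func(i, j int) bool { return queues[names[i]] > queues[names[j]] })
	return names
}

For example, with map[string]int{"critical": 6, "default": 3, "low": 1} the result is [critical default low].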
@@ -93,9 +111,9 @@ func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc
// It's safe to call this method multiple times. // It's safe to call this method multiple times.
func (p *processor) stop() { func (p *processor) stop() {
p.once.Do(func() { p.once.Do(func() {
p.logger.Info("Processor shutting down...") p.logger.Debug("Processor shutting down...")
// Unblock if processor is waiting for sema token. // Unblock if processor is waiting for sema token.
close(p.abort) close(p.quit)
// Signal the processor goroutine to stop processing tasks // Signal the processor goroutine to stop processing tasks
// from the queue. // from the queue.
p.done <- struct{}{} p.done <- struct{}{}
@@ -106,35 +124,24 @@ func (p *processor) stop() {
func (p *processor) terminate() { func (p *processor) terminate() {
p.stop() p.stop()
// IDEA: Allow user to customize this timeout value. time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
const timeout = 8 * time.Second
time.AfterFunc(timeout, func() { close(p.quit) })
p.logger.Info("Waiting for all workers to finish...") p.logger.Info("Waiting for all workers to finish...")
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
// block until all workers have released the token // block until all workers have released the token
for i := 0; i < cap(p.sema); i++ { for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{} p.sema <- struct{}{}
} }
p.logger.Info("All workers have finished") p.logger.Info("All workers have finished")
p.restore() // move any unfinished tasks back to the queue.
} }
func (p *processor) start(wg *sync.WaitGroup) { func (p *processor) start(wg *sync.WaitGroup) {
// NOTE: The call to "restore" needs to complete before starting
// the processor goroutine.
p.restore()
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
for { for {
select { select {
case <-p.done: case <-p.done:
p.logger.Info("Processor done") p.logger.Debug("Processor done")
return return
default: default:
p.exec() p.exec()
@@ -146,51 +153,57 @@ func (p *processor) start(wg *sync.WaitGroup) {
// exec pulls a task out of the queue and starts a worker goroutine to // exec pulls a task out of the queue and starts a worker goroutine to
// process the task. // process the task.
func (p *processor) exec() { func (p *processor) exec() {
qnames := p.queues()
msg, err := p.rdb.Dequeue(qnames...)
if err == rdb.ErrNoProcessableTask {
// queues are empty, this is a normal behavior.
if len(p.queueConfig) > 1 {
// sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: With multiple queues, we are not using blocking pop operation and
// polling queues instead. This adds significant load to redis.
time.Sleep(time.Second)
}
return
}
if err != nil {
if p.errLogLimiter.Allow() {
p.logger.Error("Dequeue error: %v", err)
}
return
}
select { select {
case <-p.abort: case <-p.quit:
// shutdown is starting, return immediately after requeuing the message.
p.requeue(msg)
return return
case p.sema <- struct{}{}: // acquire token case p.sema <- struct{}{}: // acquire token
p.ps.AddWorkerStats(msg, time.Now()) qnames := p.queues()
msg, deadline, err := p.broker.Dequeue(qnames...)
switch {
case err == rdb.ErrNoProcessableTask:
p.logger.Debug("All queues are empty")
// Queues are empty, this is a normal behavior.
// Sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: We are not using blocking pop operation and polling queues instead.
// This adds significant load to redis.
time.Sleep(time.Second)
<-p.sema // release token
return
case err != nil:
if p.errLogLimiter.Allow() {
p.logger.Errorf("Dequeue error: %v", err)
}
<-p.sema // release token
return
}
p.starting <- msg
go func() { go func() {
defer func() { defer func() {
p.ps.DeleteWorkerStats(msg) p.finished <- msg
<-p.sema /* release token */ <-p.sema // release token
}()
ctx, cancel := createContext(msg, deadline)
p.cancelations.Add(msg.ID.String(), cancel)
defer func() {
cancel()
p.cancelations.Delete(msg.ID.String())
}() }()
resCh := make(chan error, 1) resCh := make(chan error, 1)
task := NewTask(msg.Type, msg.Payload) task := NewTask(msg.Type, msg.Payload)
ctx, cancel := createContext(msg) go func() { resCh <- perform(ctx, task, p.handler) }()
p.cancelations.Add(msg.ID.String(), cancel)
go func() {
resCh <- perform(ctx, task, p.handler)
p.cancelations.Delete(msg.ID.String())
}()
select { select {
case <-p.quit: case <-p.abort:
// time is up, quit this worker goroutine. // time is up, push the message back to queue and quit this worker goroutine.
p.logger.Warn("Quitting worker. task id=%s", msg.ID) p.logger.Warnf("Quitting worker. task id=%s", msg.ID)
p.requeue(msg)
return
case <-ctx.Done():
p.logger.Debugf("Retrying task. task id=%s", msg.ID) // TODO: Improve this log message and above
p.retryOrKill(ctx, msg, ctx.Err())
return return
case resErr := <-resCh: case resErr := <-resCh:
// Note: One of three things should happen. // Note: One of three things should happen.
@@ -199,81 +212,90 @@ func (p *processor) exec() {
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead // 3) Kill -> Removes the message from InProgress & Adds the message to Dead
if resErr != nil { if resErr != nil {
if p.errHandler != nil { if p.errHandler != nil {
p.errHandler.HandleError(task, resErr, msg.Retried, msg.Retry) p.errHandler.HandleError(ctx, task, resErr)
}
if msg.Retried >= msg.Retry {
p.kill(msg, resErr)
} else {
p.retry(msg, resErr)
} }
p.retryOrKill(ctx, msg, resErr)
return return
} }
p.markAsDone(msg) p.markAsDone(ctx, msg)
} }
}() }()
} }
} }
// restore moves all tasks from "in-progress" back to queue
// to restore all unfinished tasks.
func (p *processor) restore() {
n, err := p.rdb.RequeueAll()
if err != nil {
p.logger.Error("Could not restore unfinished tasks: %v", err)
}
if n > 0 {
p.logger.Info("Restored %d unfinished tasks back to queue", n)
}
}
func (p *processor) requeue(msg *base.TaskMessage) { func (p *processor) requeue(msg *base.TaskMessage) {
err := p.rdb.Requeue(msg) err := p.broker.Requeue(msg)
if err != nil { if err != nil {
p.logger.Error("Could not push task id=%s back to queue: %v", msg.ID, err) p.logger.Errorf("Could not push task id=%s back to queue: %v", msg.ID, err)
} else {
p.logger.Infof("Pushed task id=%s back to queue", msg.ID)
} }
} }
func (p *processor) markAsDone(msg *base.TaskMessage) { func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
err := p.rdb.Done(msg) err := p.broker.Done(msg)
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue) errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.InProgressQueue, err)
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Done(msg) return p.broker.Done(msg)
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
func (p *processor) retry(msg *base.TaskMessage, e error) { func (p *processor) retryOrKill(ctx context.Context, msg *base.TaskMessage, err error) {
if msg.Retried >= msg.Retry {
p.kill(ctx, msg, err)
} else {
p.retry(ctx, msg, err)
}
}
func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload)) d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(d) retryAt := time.Now().Add(d)
err := p.rdb.Retry(msg, retryAt, e.Error()) err := p.broker.Retry(msg, retryAt, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Retry(msg, retryAt, e.Error()) return p.broker.Retry(msg, retryAt, e.Error())
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
func (p *processor) kill(msg *base.TaskMessage, e error) { func (p *processor) kill(ctx context.Context, msg *base.TaskMessage, e error) {
p.logger.Warn("Retry exhausted for task id=%s", msg.ID) p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
err := p.rdb.Kill(msg, e.Error()) err := p.broker.Kill(msg, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
p.logger.Warn("%s; Will retry syncing", errMsg) deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{ p.syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return p.rdb.Kill(msg, e.Error()) return p.broker.Kill(msg, e.Error())
}, },
errMsg: errMsg, errMsg: errMsg,
deadline: deadline,
} }
} }
} }
@@ -296,7 +318,7 @@ func (p *processor) queues() []string {
} }
var names []string var names []string
for qname, priority := range p.queueConfig { for qname, priority := range p.queueConfig {
for i := 0; i < int(priority); i++ { for i := 0; i < priority; i++ {
names = append(names, qname) names = append(names, qname)
} }
} }
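For example, with a queue config of {"critical": 2, "default": 1}, the names slice becomes ["critical", "critical", "default"], giving the critical queue twice the weight of the default queue when the processor picks a queue to dequeue from.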
@@ -360,16 +382,15 @@ func (x byPriority) Len() int { return len(x) }
func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority } func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority }
func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] } func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
// normalizeQueueCfg divides priority numbers by their // normalizeQueues divides priority numbers by their greatest common divisor.
// greatest common divisor. func normalizeQueues(queues map[string]int) map[string]int {
func normalizeQueueCfg(queueCfg map[string]int) map[string]int {
var xs []int var xs []int
for _, x := range queueCfg { for _, x := range queues {
xs = append(xs, x) xs = append(xs, x)
} }
d := gcd(xs...) d := gcd(xs...)
res := make(map[string]int) res := make(map[string]int)
for q, x := range queueCfg { for q, x := range queues {
res[q] = x / d res[q] = x / d
} }
return res return res
@@ -391,20 +412,3 @@ func gcd(xs ...int) int {
} }
return res return res
} }
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (ctx context.Context, cancel context.CancelFunc) {
ctx = context.Background()
timeout, err := time.ParseDuration(msg.Timeout)
if err == nil && timeout != 0 {
ctx, cancel = context.WithTimeout(ctx, timeout)
}
deadline, err := time.Parse(time.RFC3339, msg.Deadline)
if err == nil && !deadline.IsZero() {
ctx, cancel = context.WithDeadline(ctx, deadline)
}
if cancel == nil {
ctx, cancel = context.WithCancel(ctx)
}
return ctx, cancel
}
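The replacement createContext, which now accepts the deadline computed at dequeue time, is not part of this hunk. A minimal sketch consistent with the call sites above, assuming the broker always supplies a non-zero deadline (markAsDone panics when the context carries none):

func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
	// Sketch only; the actual implementation lives elsewhere in this change set.
	return context.WithDeadline(context.Background(), deadline)
}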


@@ -17,9 +17,31 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
) )
// fakeHeartbeater receives from starting and finished channels and does nothing.
func fakeHeartbeater(starting, finished <-chan *base.TaskMessage, done <-chan struct{}) {
for {
select {
case <-starting:
case <-finished:
case <-done:
return
}
}
}
// fakeSyncer receives from the sync channel and does nothing.
func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {
for {
select {
case <-syncCh:
case <-done:
return
}
}
}
func TestProcessorSuccess(t *testing.T) { func TestProcessorSuccess(t *testing.T) {
r := setup(t) r := setup(t)
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
@@ -37,19 +59,16 @@ func TestProcessorSuccess(t *testing.T) {
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: []*base.TaskMessage{m1}, enqueued: []*base.TaskMessage{m1},
incoming: []*base.TaskMessage{m2, m3, m4}, incoming: []*base.TaskMessage{m2, m3, m4},
wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4}, wantProcessed: []*Task{t1, t2, t3, t4},
}, },
{ {
enqueued: []*base.TaskMessage{}, enqueued: []*base.TaskMessage{},
incoming: []*base.TaskMessage{m1}, incoming: []*base.TaskMessage{m1},
wait: time.Second,
wantProcessed: []*Task{t1}, wantProcessed: []*Task{t1},
}, },
} }
@@ -67,13 +86,30 @@ func TestProcessorSuccess(t *testing.T) {
processed = append(processed, task) processed = append(processed, task)
return nil return nil
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) starting := make(chan *base.TaskMessage)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -81,16 +117,90 @@ func TestProcessorSuccess(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(tc.wait) time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed.
p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
if l := r.LLen(base.InProgressQueue).Val(); l != 0 { if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l) t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
} }
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
}
}
// https://github.com/hibiken/asynq/issues/166
func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("large_number", map[string]interface{}{"data": 111111111111111111})
t1 := NewTask(m1.Type, m1.Payload)
tests := []struct {
enqueued []*base.TaskMessage // initial default queue state
wantProcessed []*Task // tasks to be processed at the end
}{
{
enqueued: []*base.TaskMessage{m1},
wantProcessed: []*Task{t1},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue.
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
if data, err := task.Payload.GetInt("data"); err != nil {
t.Errorf("coult not get data from payload: %v", err)
} else {
t.Logf("data == %d", data)
}
processed = append(processed, task)
return nil
}
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed.
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmpopts.IgnoreUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
} }
} }
@@ -105,19 +215,6 @@ func TestProcessorRetry(t *testing.T) {
m4 := h.NewTaskMessage("sync", nil) m4 := h.NewTaskMessage("sync", nil)
errMsg := "something went wrong" errMsg := "something went wrong"
// r* is m* after retry
r1 := *m1
r1.ErrorMsg = errMsg
r2 := *m2
r2.ErrorMsg = errMsg
r2.Retried = m2.Retried + 1
r3 := *m3
r3.ErrorMsg = errMsg
r3.Retried = m3.Retried + 1
r4 := *m4
r4.ErrorMsg = errMsg
r4.Retried = m4.Retried + 1
now := time.Now() now := time.Now()
tests := []struct { tests := []struct {
@@ -137,13 +234,13 @@ func TestProcessorRetry(t *testing.T) {
handler: HandlerFunc(func(ctx context.Context, task *Task) error { handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg) return fmt.Errorf(errMsg)
}), }),
wait: time.Second, wait: 2 * time.Second,
wantRetry: []h.ZSetEntry{ wantRetry: []h.ZSetEntry{
{Msg: &r2, Score: float64(now.Add(time.Minute).Unix())}, {Msg: h.TaskMessageAfterRetry(*m2, errMsg), Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r3, Score: float64(now.Add(time.Minute).Unix())}, {Msg: h.TaskMessageAfterRetry(*m3, errMsg), Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r4, Score: float64(now.Add(time.Minute).Unix())}, {Msg: h.TaskMessageAfterRetry(*m4, errMsg), Score: float64(now.Add(time.Minute).Unix())},
}, },
wantDead: []*base.TaskMessage{&r1}, wantDead: []*base.TaskMessage{h.TaskMessageWithError(*m1, errMsg)},
wantErrCount: 4, wantErrCount: 4,
}, },
} }
@@ -160,18 +257,33 @@ func TestProcessorRetry(t *testing.T) {
mu sync.Mutex // guards n mu sync.Mutex // guards n
n int // number of times error handler is called n int // number of times error handler is called
) )
errHandler := func(t *Task, err error, retried, maxRetry int) { errHandler := func(ctx context.Context, t *Task, err error) {
mu.Lock() mu.Lock()
defer mu.Unlock() defer mu.Unlock()
n++ n++
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) starting := make(chan *base.TaskMessage)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler)) done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: delayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: ErrorHandlerFunc(errHandler),
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = tc.handler p.handler = tc.handler
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -179,10 +291,10 @@ func TestProcessorRetry(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(tc.wait) time.Sleep(tc.wait) // FIXME: This makes test flaky.
p.terminate() p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to second difference in zset score cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to a second difference in zset score
gotRetry := h.GetRetryEntries(t, r) gotRetry := h.GetRetryEntries(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" { if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff)
@@ -231,9 +343,25 @@ func TestProcessorQueues(t *testing.T) {
} }
for _, tc := range tests { for _, tc := range tests {
cancelations := base.NewCancelations() starting := make(chan *base.TaskMessage)
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false) finished := make(chan *base.TaskMessage)
p := newProcessor(testLogger, nil, ps, defaultDelayFunc, nil, cancelations, nil) done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: nil,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: tc.queueCfg,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
got := p.queues() got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" { if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s", t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -298,14 +426,28 @@ func TestProcessorWithStrictPriority(t *testing.T) {
base.DefaultQueueName: 2, base.DefaultQueueName: 2,
"low": 1, "low": 1,
} }
// Note: Set concurrency to 1 to make sure tasks are processed one at a time. starting := make(chan *base.TaskMessage)
cancelations := base.NewCancelations() finished := make(chan *base.TaskMessage)
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/) done := make(chan struct{})
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time.
queues: queueCfg,
strictPriority: true,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
time.Sleep(tc.wait) time.Sleep(tc.wait)
p.terminate() p.terminate()
@@ -365,84 +507,82 @@ func TestPerform(t *testing.T) {
} }
} }
func TestCreateContextWithTimeRestrictions(t *testing.T) { func TestGCD(t *testing.T) {
var (
noTimeout = time.Duration(0)
noDeadline = time.Time{}
)
tests := []struct { tests := []struct {
desc string input []int
timeout time.Duration want int
deadline time.Time
wantDeadline time.Time
}{ }{
{"only with timeout", 10 * time.Second, noDeadline, time.Now().Add(10 * time.Second)}, {[]int{6, 2, 12}, 2},
{"only with deadline", noTimeout, time.Now().Add(time.Hour), time.Now().Add(time.Hour)}, {[]int{3, 3, 3}, 3},
{"with timeout and deadline (timeout < deadline)", 10 * time.Second, time.Now().Add(time.Hour), time.Now().Add(10 * time.Second)}, {[]int{6, 3, 1}, 1},
{"with timeout and deadline (timeout > deadline)", 10 * time.Minute, time.Now().Add(30 * time.Second), time.Now().Add(30 * time.Second)}, {[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
} }
for _, tc := range tests { for _, tc := range tests {
msg := &base.TaskMessage{ got := gcd(tc.input...)
Type: "something", if got != tc.want {
ID: xid.New(), t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
Timeout: tc.timeout.String(),
Deadline: tc.deadline.Format(time.RFC3339),
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("%s: <-ctx.Done() == %v, want nothing (it should block)", tc.desc, x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("%s: ctx.Deadline() returned false, want deadline to be set", tc.desc)
}
if !cmp.Equal(tc.wantDeadline, got, cmpopts.EquateApproxTime(time.Second)) {
t.Errorf("%s: ctx.Deadline() returned %v, want %v", tc.desc, got, tc.wantDeadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
} }
} }
} }
func TestCreateContextWithoutTimeRestrictions(t *testing.T) { func TestNormalizeQueues(t *testing.T) {
msg := &base.TaskMessage{ tests := []struct {
Type: "something", input map[string]int
ID: xid.New(), want map[string]int
Timeout: time.Duration(0).String(), // zero value to indicate no timeout }{
Deadline: time.Time{}.Format(time.RFC3339), // zero value to indicate no deadline {
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
} }
ctx, cancel := createContext(msg) for _, tc := range tests {
got := normalizeQueues(tc.input)
select { if diff := cmp.Diff(tc.want, got); diff != "" {
case x := <-ctx.Done(): t.Errorf("normalizeQueues(%v) = %v, want %v; (-want, +got):\n%s",
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x) tc.input, got, tc.want, diff)
default: }
}
_, ok := ctx.Deadline()
if ok {
t.Error("ctx.Deadline() returned true, want deadline to not be set")
}
cancel()
select {
case <-ctx.Done():
default:
t.Error("ctx.Done() blocked, want it to be non-blocking")
} }
} }

recoverer.go Normal file

@@ -0,0 +1,96 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"fmt"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
type recoverer struct {
logger *log.Logger
broker base.Broker
retryDelayFunc retryDelayFunc
// channel to communicate back to the long running "recoverer" goroutine.
done chan struct{}
// poll interval.
interval time.Duration
}
type recovererParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
retryDelayFunc retryDelayFunc
}
func newRecoverer(params recovererParams) *recoverer {
return &recoverer{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
interval: params.interval,
retryDelayFunc: params.retryDelayFunc,
}
}
func (r *recoverer) terminate() {
r.logger.Debug("Recoverer shutting down...")
// Signal the recoverer goroutine to stop polling.
r.done <- struct{}{}
}
func (r *recoverer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(r.interval)
for {
select {
case <-r.done:
r.logger.Debug("Recoverer done")
timer.Stop()
return
case <-timer.C:
// Get all tasks whose deadline passed 30 seconds ago or earlier.
deadline := time.Now().Add(-30 * time.Second)
msgs, err := r.broker.ListDeadlineExceeded(deadline)
if err != nil {
r.logger.Warn("recoverer: could not list deadline exceeded tasks")
continue
}
const errMsg = "deadline exceeded" // TODO: better error message
for _, msg := range msgs {
if msg.Retried >= msg.Retry {
r.kill(msg, errMsg)
} else {
r.retry(msg, errMsg)
}
}
}
}
}()
}
func (r *recoverer) retry(msg *base.TaskMessage, errMsg string) {
delay := r.retryDelayFunc(msg.Retried, fmt.Errorf(errMsg), NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(delay)
if err := r.broker.Retry(msg, retryAt, errMsg); err != nil {
r.logger.Warnf("recoverer: could not retry deadline exceeded task: %v", err)
}
}
func (r *recoverer) kill(msg *base.TaskMessage, errMsg string) {
if err := r.broker.Kill(msg, errMsg); err != nil {
r.logger.Warnf("recoverer: could not move task to dead queue: %v", err)
}
}
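ListDeadlineExceeded is implemented by the broker, not in this file. Judging from the tests below, which seed the deadlines ZSET with unix-second scores, the underlying query is roughly a range scan over that set; a rough sketch using go-redis (the key name and function name are assumptions, not the library's actual code):

func listDeadlineExceeded(c *redis.Client, deadlinesKey string, t time.Time) ([]string, error) {
	// Raw payloads of tasks whose deadline (stored as unix seconds) is at or before t.
	return c.ZRangeByScore(deadlinesKey, &redis.ZRangeBy{
		Min: "-inf",
		Max: strconv.FormatInt(t.Unix(), 10),
	}).Result()
}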

recoverer_test.go Normal file

@@ -0,0 +1,162 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestRecoverer(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
t1 := h.NewTaskMessage("task1", nil)
t2 := h.NewTaskMessage("task2", nil)
t3 := h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := h.NewTaskMessage("task4", nil)
t4.Retried = t4.Retry // t4 has reached its max retry count
now := time.Now()
oneHourFromNow := now.Add(1 * time.Hour)
fiveMinutesFromNow := now.Add(5 * time.Minute)
fiveMinutesAgo := now.Add(-5 * time.Minute)
oneHourAgo := now.Add(-1 * time.Hour)
tests := []struct {
desc string
inProgress []*base.TaskMessage
deadlines []h.ZSetEntry
retry []h.ZSetEntry
dead []h.ZSetEntry
wantInProgress []*base.TaskMessage
wantDeadlines []h.ZSetEntry
wantRetry []*base.TaskMessage
wantDead []*base.TaskMessage
}{
{
desc: "with one task in-progress",
inProgress: []*base.TaskMessage{t1},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(fiveMinutesAgo.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with a task with max-retry reached",
inProgress: []*base.TaskMessage{t4},
deadlines: []h.ZSetEntry{
{Msg: t4, Score: float64(fiveMinutesAgo.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{},
wantDead: []*base.TaskMessage{h.TaskMessageWithError(*t4, "deadline exceeded")},
},
{
desc: "with multiple tasks in-progress, and one expired",
inProgress: []*base.TaskMessage{t1, t2, t3},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(oneHourAgo.Unix())},
{Msg: t2, Score: float64(fiveMinutesFromNow.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{t2, t3},
wantDeadlines: []h.ZSetEntry{
{Msg: t2, Score: float64(fiveMinutesFromNow.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with multiple expired tasks in-progress",
inProgress: []*base.TaskMessage{t1, t2, t3},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(oneHourAgo.Unix())},
{Msg: t2, Score: float64(fiveMinutesAgo.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{t3},
wantDeadlines: []h.ZSetEntry{
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
h.TaskMessageAfterRetry(*t2, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with empty in-progress queue",
inProgress: []*base.TaskMessage{},
deadlines: []h.ZSetEntry{},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{},
wantDead: []*base.TaskMessage{},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedInProgressQueue(t, r, tc.inProgress)
h.SeedDeadlines(t, r, tc.deadlines)
h.SeedRetryQueue(t, r, tc.retry)
h.SeedDeadQueue(t, r, tc.dead)
recoverer := newRecoverer(recovererParams{
logger: testLogger,
broker: rdbClient,
interval: 1 * time.Second,
retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
})
var wg sync.WaitGroup
recoverer.start(&wg)
time.Sleep(2 * time.Second)
recoverer.terminate()
gotInProgress := h.GetInProgressMessages(t, r)
if diff := cmp.Diff(tc.wantInProgress, gotInProgress, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.InProgressQueue, diff)
}
gotDeadlines := h.GetDeadlinesEntries(t, r)
if diff := cmp.Diff(tc.wantDeadlines, gotDeadlines, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.KeyDeadlines, diff)
}
gotRetry := h.GetRetryMessages(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryQueue, diff)
}
gotDead := h.GetDeadMessages(t, r)
if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.DeadQueue, diff)
}
}
}


@@ -8,39 +8,38 @@ import (
"sync" "sync"
"time" "time"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
) )
type scheduler struct { type scheduler struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
// channel to communicate back to the long running "scheduler" goroutine. // channel to communicate back to the long running "scheduler" goroutine.
done chan struct{} done chan struct{}
// poll interval on average // poll interval on average
avgInterval time.Duration avgInterval time.Duration
// list of queues to move the tasks into.
qnames []string
} }
func newScheduler(l Logger, r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler { type schedulerParams struct {
var qnames []string logger *log.Logger
for q := range qcfg { broker base.Broker
qnames = append(qnames, q) interval time.Duration
} }
func newScheduler(params schedulerParams) *scheduler {
return &scheduler{ return &scheduler{
logger: l, logger: params.logger,
rdb: r, broker: params.broker,
done: make(chan struct{}), done: make(chan struct{}),
avgInterval: avgInterval, avgInterval: params.interval,
qnames: qnames,
} }
} }
func (s *scheduler) terminate() { func (s *scheduler) terminate() {
s.logger.Info("Scheduler shutting down...") s.logger.Debug("Scheduler shutting down...")
// Signal the scheduler goroutine to stop polling. // Signal the scheduler goroutine to stop polling.
s.done <- struct{}{} s.done <- struct{}{}
} }
@@ -53,7 +52,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
for { for {
select { select {
case <-s.done: case <-s.done:
s.logger.Info("Scheduler done") s.logger.Debug("Scheduler done")
return return
case <-time.After(s.avgInterval): case <-time.After(s.avgInterval):
s.exec() s.exec()
@@ -63,7 +62,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
} }
func (s *scheduler) exec() { func (s *scheduler) exec() {
if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil { if err := s.broker.CheckAndEnqueue(); err != nil {
s.logger.Error("Could not enqueue scheduled tasks: %v", err) s.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
} }
} }


@@ -19,7 +19,11 @@ func TestScheduler(t *testing.T) {
r := setup(t) r := setup(t)
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second const pollInterval = time.Second
s := newScheduler(testLogger, rdbClient, pollInterval, defaultQueueConfig) s := newScheduler(schedulerParams{
logger: testLogger,
broker: rdbClient,
interval: pollInterval,
})
t1 := h.NewTaskMessage("gen_thumbnail", nil) t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil) t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil) t3 := h.NewTaskMessage("reindex", nil)


@@ -15,7 +15,7 @@ import (
// ServeMux is a multiplexer for asynchronous tasks. // ServeMux is a multiplexer for asynchronous tasks.
// It matches the type of each task against a list of registered patterns // It matches the type of each task against a list of registered patterns
// and calls the handler for the pattern that most closely matches the // and calls the handler for the pattern that most closely matches the
// taks's type name. // task's type name.
// //
// Longer patterns take precedence over shorter ones, so that if there are // Longer patterns take precedence over shorter ones, so that if there are
// handlers registered for both "images" and "images:thumbnails", // handlers registered for both "images" and "images:thumbnails",

server.go Normal file

@@ -0,0 +1,462 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"errors"
"fmt"
"math"
"math/rand"
"runtime"
"strings"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
// Server is responsible for managing the background-task processing.
//
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, the server will
// schedule it for a retry.
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// If a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Server struct {
logger *log.Logger
broker base.Broker
status *base.ServerStatus
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
recoverer *recoverer
}
// Config specifies the server's background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the current process.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the server will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority are processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
//     func reportError(ctx context.Context, task *asynq.Task, err error) {
//         errorReportingService.Notify(err)
//     }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance.
//
// If unset, default logger is used.
Logger Logger
// LogLevel specifies the minimum log level to enable.
//
// If unset, InfoLevel is used by default.
LogLevel LogLevel
// ShutdownTimeout specifies the duration to wait to let workers finish their tasks
// before forcing them to abort when stopping the server.
//
// If unset or zero, default timeout of 8 seconds is used.
ShutdownTimeout time.Duration
}
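// A Config is typically filled in only partially; zero values fall back to the
// defaults described above. A sketch (the values are arbitrary):
//
//     srv := asynq.NewServer(asynq.RedisClientOpt{Addr: ":6379"}, asynq.Config{
//         Concurrency:     20,
//         Queues:          map[string]int{"critical": 6, "default": 3, "low": 1},
//         ShutdownTimeout: 15 * time.Second,
//     })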
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(ctx context.Context, task *Task, err error)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as a ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is a ErrorHandler that calls f.
type ErrorHandlerFunc func(ctx context.Context, task *Task, err error)
// HandleError calls fn(ctx, task, err)
func (fn ErrorHandlerFunc) HandleError(ctx context.Context, task *Task, err error) {
fn(ctx, task, err)
}
// Logger supports logging at various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(args ...interface{})
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(args ...interface{})
}
// LogLevel represents logging level.
//
// It satisfies flag.Value interface.
type LogLevel int32
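// Because *LogLevel implements flag.Value, it can be bound directly to a
// command-line flag (a sketch; the flag name and redisOpt are arbitrary):
//
//     var level asynq.LogLevel
//     flag.Var(&level, "loglevel", "minimum log level (debug, info, warn, error, fatal)")
//     flag.Parse()
//     srv := asynq.NewServer(redisOpt, asynq.Config{LogLevel: level})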
const (
// Note: reserving value zero to differentiate unspecified case.
level_unspecified LogLevel = iota
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the flag.Value interface.
func (l *LogLevel) String() string {
switch *l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warn"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", *l))
}
// Set is part of the flag.Value interface.
func (l *LogLevel) Set(val string) error {
switch strings.ToLower(val) {
case "debug":
*l = DebugLevel
case "info":
*l = InfoLevel
case "warn", "warning":
*l = WarnLevel
case "error":
*l = ErrorLevel
case "fatal":
*l = FatalLevel
default:
return fmt.Errorf("asynq: unsupported log level %q", val)
}
return nil
}
func toInternalLogLevel(l LogLevel) log.Level {
switch l {
case DebugLevel:
return log.DebugLevel
case InfoLevel:
return log.InfoLevel
case WarnLevel:
return log.WarnLevel
case ErrorLevel:
return log.ErrorLevel
case FatalLevel:
return log.FatalLevel
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", l))
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
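// For reference (a quick check of the bounds, not part of the library): since
// r.Intn(30) is in [0, 29], retry n waits between n^4+15 and n^4+15+29*(n+1) seconds,
// e.g. retry 0: 15-44s, retry 1: 16-74s, retry 2: 31-118s, retry 3: 96-212s.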
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
const defaultShutdownTimeout = 8 * time.Second
// NewServer returns a new Server given a redis connection option
// and background processing configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server {
n := cfg.Concurrency
if n < 1 {
n = runtime.NumCPU()
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout
}
logger := log.NewLogger(cfg.Logger)
loglevel := cfg.LogLevel
if loglevel == level_unspecified {
loglevel = InfoLevel
}
logger.SetLevel(toInternalLogLevel(loglevel))
rdb := rdb.NewRDB(createRedisClient(r))
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
status := base.NewServerStatus(base.StatusIdle)
cancels := base.NewCancelations()
syncer := newSyncer(syncerParams{
logger: logger,
requestsCh: syncCh,
interval: 5 * time.Second,
})
heartbeater := newHeartbeater(heartbeaterParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
status: status,
starting: starting,
finished: finished,
})
scheduler := newScheduler(schedulerParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
})
subscriber := newSubscriber(subscriberParams{
logger: logger,
broker: rdb,
cancelations: cancels,
})
processor := newProcessor(processorParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
syncCh: syncCh,
cancelations: cancels,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
errHandler: cfg.ErrorHandler,
shutdownTimeout: shutdownTimeout,
starting: starting,
finished: finished,
})
recoverer := newRecoverer(recovererParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
interval: 1 * time.Minute,
})
return &Server{
logger: logger,
broker: rdb,
status: status,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
recoverer: recoverer,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Run(handler Handler) error {
if err := srv.Start(handler); err != nil {
return err
}
srv.waitForSignals()
srv.Stop()
return nil
}
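// Typical usage (a sketch; the task type and handler are made up):
//
//     srv := asynq.NewServer(asynq.RedisClientOpt{Addr: ":6379"}, asynq.Config{Concurrency: 10})
//     mux := asynq.NewServeMux()
//     mux.HandleFunc("email:send", sendEmailHandler)
//     if err := srv.Run(mux); err != nil {
//         log.Fatal(err)
//     }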
// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers up to the number of
// concurrency specified at the initialization time.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Start(handler Handler) error {
if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler")
}
switch srv.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the server is already running")
case base.StatusStopped:
return ErrServerStopped
}
srv.status.Set(base.StatusRunning)
srv.processor.handler = handler
srv.logger.Info("Starting processing")
srv.heartbeater.start(&srv.wg)
srv.subscriber.start(&srv.wg)
srv.syncer.start(&srv.wg)
srv.recoverer.start(&srv.wg)
srv.scheduler.start(&srv.wg)
srv.processor.start(&srv.wg)
return nil
}
// Stop stops the worker server.
// It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for the duration specified in Config.ShutdownTimeout.
// If a worker does not finish processing a task within the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() {
switch srv.status.Get() {
case base.StatusIdle, base.StatusStopped:
// server is not running, do nothing and return.
return
}
srv.logger.Info("Starting graceful shutdown")
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels)
srv.scheduler.terminate()
srv.processor.terminate()
srv.recoverer.terminate()
srv.syncer.terminate()
srv.subscriber.terminate()
srv.heartbeater.terminate()
srv.wg.Wait()
srv.broker.Close()
srv.status.Set(base.StatusStopped)
srv.logger.Info("Exiting")
}
// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
srv.logger.Info("Stopping processor")
srv.processor.stop()
srv.status.Set(base.StatusQuiet)
srv.logger.Info("Processor stopped")
}

server_test.go Normal file

@@ -0,0 +1,240 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"syscall"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
"go.uber.org/goleak"
)
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
c := NewClient(r)
srv := NewServer(r, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
_, err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
srv.Stop()
}
func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
done := make(chan struct{})
// Make sure server exits when receiving TERM signal.
go func() {
time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{}
}()
go func() {
select {
case <-time.After(10 * time.Second):
t.Fatal("server did not stop after receiving TERM signal")
case <-done:
}
}()
mux := NewServeMux()
if err := srv.Run(mux); err != nil {
t.Fatal(err)
}
}
func TestServerErrServerStopped(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
srv.Stop()
err := srv.Start(handler)
if err != ErrServerStopped {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
}
}
func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
err := srv.Start(nil)
if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop()
}
}
func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
err := srv.Start(handler)
if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error")
}
srv.Stop()
}
func TestServerWithRedisDown(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
testBroker.Sleep()
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestServerWithFlakyBroker(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: redisAddr, DB: redisDB}, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
h := func(ctx context.Context, task *Task) error {
// force task retry.
if task.Type == "bad_task" {
return fmt.Errorf("could not process %q", task.Type)
}
time.Sleep(2 * time.Second)
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
for i := 0; i < 10; i++ {
_, err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("bad_task", nil))
if err != nil {
t.Fatal(err)
}
_, err = c.EnqueueIn(time.Duration(i)*time.Second, NewTask("scheduled", nil))
if err != nil {
t.Fatal(err)
}
}
// simulate redis going down.
testBroker.Sleep()
time.Sleep(3 * time.Second)
// simulate redis comes back online.
testBroker.Wakeup()
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestLogLevel(t *testing.T) {
tests := []struct {
flagVal string
want LogLevel
wantStr string
}{
{"debug", DebugLevel, "debug"},
{"Info", InfoLevel, "info"},
{"WARN", WarnLevel, "warn"},
{"warning", WarnLevel, "warn"},
{"Error", ErrorLevel, "error"},
{"fatal", FatalLevel, "fatal"},
}
for _, tc := range tests {
level := new(LogLevel)
if err := level.Set(tc.flagVal); err != nil {
t.Fatal(err)
}
if *level != tc.want {
t.Errorf("Set(%q): got %v, want %v", tc.flagVal, level, &tc.want)
continue
}
if got := level.String(); got != tc.wantStr {
t.Errorf("String() returned %q, want %q", got, tc.wantStr)
}
}
}

signals_unix.go Normal file

@@ -0,0 +1,30 @@
// +build linux bsd darwin
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/unix"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM, SIGINT, and SIGTSTP.
// SIGTERM and SIGINT will signal the process to exit.
// SIGTSTP will signal the process to stop processing new tasks.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TSTP to stop processing new tasks")
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
for {
sig := <-sigs
if sig == unix.SIGTSTP {
srv.Quiet()
continue
}
break
}
}
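// Example (shell; the worker's PID is assumed): pause task intake with TSTP,
// then shut down gracefully with TERM:
//
//     $ kill -TSTP <worker-pid>
//     $ kill -TERM <worker-pid>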

signals_windows.go Normal file

@@ -0,0 +1,22 @@
// +build windows
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/windows"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM and SIGINT.
// SIGTERM and SIGINT will signal the process to exit.
//
// Note: Currently SIGTSTP is not supported for Windows builds.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}


@@ -6,52 +6,78 @@ package asynq
import ( import (
"sync" "sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/log"
) )
type subscriber struct { type subscriber struct {
logger Logger logger *log.Logger
rdb *rdb.RDB broker base.Broker
// channel to communicate back to the long running "subscriber" goroutine. // channel to communicate back to the long running "subscriber" goroutine.
done chan struct{} done chan struct{}
// cancelations hold cancel functions for all in-progress tasks. // cancelations hold cancel functions for all in-progress tasks.
cancelations *base.Cancelations cancelations *base.Cancelations
// time to wait before retrying to connect to redis.
retryTimeout time.Duration
} }
func newSubscriber(l Logger, rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber { type subscriberParams struct {
logger *log.Logger
broker base.Broker
cancelations *base.Cancelations
}
func newSubscriber(params subscriberParams) *subscriber {
return &subscriber{ return &subscriber{
logger: l, logger: params.logger,
rdb: rdb, broker: params.broker,
done: make(chan struct{}), done: make(chan struct{}),
cancelations: cancelations, cancelations: params.cancelations,
retryTimeout: 5 * time.Second,
} }
} }
func (s *subscriber) terminate() { func (s *subscriber) terminate() {
s.logger.Info("Subscriber shutting down...") s.logger.Debug("Subscriber shutting down...")
// Signal the subscriber goroutine to stop. // Signal the subscriber goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}
} }
func (s *subscriber) start(wg *sync.WaitGroup) { func (s *subscriber) start(wg *sync.WaitGroup) {
pubsub, err := s.rdb.CancelationPubSub()
cancelCh := pubsub.Channel()
if err != nil {
s.logger.Error("cannot subscribe to cancelation channel: %v", err)
return
}
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
var (
pubsub *redis.PubSub
err error
)
// Try until successfully connect to Redis.
for {
pubsub, err = s.broker.CancelationPubSub()
if err != nil {
s.logger.Errorf("cannot subscribe to cancelation channel: %v", err)
select {
case <-time.After(s.retryTimeout):
continue
case <-s.done:
s.logger.Debug("Subscriber done")
return
}
}
break
}
cancelCh := pubsub.Channel()
for { for {
select { select {
case <-s.done: case <-s.done:
pubsub.Close() pubsub.Close()
s.logger.Info("Subscriber done") s.logger.Debug("Subscriber done")
return return
case msg := <-cancelCh: case msg := <-cancelCh:
cancel, ok := s.cancelations.Get(msg.Payload) cancel, ok := s.cancelations.Get(msg.Payload)


@@ -11,6 +11,7 @@ import (
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
) )
func TestSubscriber(t *testing.T) { func TestSubscriber(t *testing.T) {
@@ -37,16 +38,23 @@ func TestSubscriber(t *testing.T) {
cancelations := base.NewCancelations() cancelations := base.NewCancelations()
cancelations.Add(tc.registeredID, fakeCancelFunc) cancelations.Add(tc.registeredID, fakeCancelFunc)
subscriber := newSubscriber(testLogger, rdbClient, cancelations) subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: rdbClient,
cancelations: cancelations,
})
var wg sync.WaitGroup var wg sync.WaitGroup
subscriber.start(&wg) subscriber.start(&wg)
defer subscriber.terminate()
// wait for subscriber to establish connection to pubsub channel
time.Sleep(time.Second)
if err := rdbClient.PublishCancelation(tc.publishID); err != nil { if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
subscriber.terminate()
t.Fatalf("could not publish cancelation message: %v", err) t.Fatalf("could not publish cancelation message: %v", err)
} }
// allow for redis to publish message // wait for redis to publish message
time.Sleep(time.Second) time.Sleep(time.Second)
mu.Lock() mu.Lock()
@@ -58,7 +66,57 @@ func TestSubscriber(t *testing.T) {
} }
} }
mu.Unlock() mu.Unlock()
subscriber.terminate()
} }
} }
func TestSubscriberWithRedisDown(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
cancelations := base.NewCancelations()
subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: testBroker,
cancelations: cancelations,
})
subscriber.retryTimeout = 1 * time.Second // set shorter retry timeout for testing purpose.
testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.
testBroker.Wakeup() // simulate a situation where redis server is back online.
time.Sleep(2 * time.Second) // allow subscriber to establish pubsub channel.
const id = "test"
var (
mu sync.Mutex
called bool
)
cancelations.Add(id, func() {
mu.Lock()
defer mu.Unlock()
called = true
})
if err := r.PublishCancelation(id); err != nil {
t.Fatalf("could not publish cancelation message: %v", err)
}
time.Sleep(time.Second) // wait for redis to publish message.
mu.Lock()
if !called {
t.Errorf("cancel function was not called")
}
mu.Unlock()
}


@@ -7,12 +7,14 @@ package asynq
import ( import (
"sync" "sync"
"time" "time"
"github.com/hibiken/asynq/internal/log"
) )
// syncer is responsible for queuing up failed requests to redis and retry // syncer is responsible for queuing up failed requests to redis and retry
// those requests to sync state between the background process and redis. // those requests to sync state between the background process and redis.
type syncer struct { type syncer struct {
logger Logger logger *log.Logger
requestsCh <-chan *syncRequest requestsCh <-chan *syncRequest
@@ -24,21 +26,28 @@ type syncer struct {
} }
type syncRequest struct { type syncRequest struct {
fn func() error // sync operation fn func() error // sync operation
errMsg string // error message errMsg string // error message
deadline time.Time // request should be dropped if deadline has been exceeded
} }
func newSyncer(l Logger, requestsCh <-chan *syncRequest, interval time.Duration) *syncer { type syncerParams struct {
logger *log.Logger
requestsCh <-chan *syncRequest
interval time.Duration
}
func newSyncer(params syncerParams) *syncer {
return &syncer{ return &syncer{
logger: l, logger: params.logger,
requestsCh: requestsCh, requestsCh: params.requestsCh,
done: make(chan struct{}), done: make(chan struct{}),
interval: interval, interval: params.interval,
} }
} }
func (s *syncer) terminate() { func (s *syncer) terminate() {
s.logger.Info("Syncer shutting down...") s.logger.Debug("Syncer shutting down...")
// Signal the syncer goroutine to stop. // Signal the syncer goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}
} }
@@ -57,13 +66,16 @@ func (s *syncer) start(wg *sync.WaitGroup) {
s.logger.Error(req.errMsg) s.logger.Error(req.errMsg)
} }
} }
s.logger.Info("Syncer done") s.logger.Debug("Syncer done")
return return
case req := <-s.requestsCh: case req := <-s.requestsCh:
requests = append(requests, req) requests = append(requests, req)
case <-time.After(s.interval): case <-time.After(s.interval):
var temp []*syncRequest var temp []*syncRequest
for _, req := range requests { for _, req := range requests {
if req.deadline.Before(time.Now()) {
continue // drop stale request
}
if err := req.fn(); err != nil { if err := req.fn(); err != nil {
temp = append(temp, req) temp = append(temp, req)
} }


@@ -27,7 +27,11 @@ func TestSyncer(t *testing.T) {
const interval = time.Second const interval = time.Second
syncRequestCh := make(chan *syncRequest) syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(testLogger, syncRequestCh, interval) syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
defer syncer.terminate() defer syncer.terminate()
@@ -38,6 +42,7 @@ func TestSyncer(t *testing.T) {
fn: func() error { fn: func() error {
return rdbClient.Done(m) return rdbClient.Done(m)
}, },
deadline: time.Now().Add(5 * time.Minute),
} }
} }
@@ -52,7 +57,11 @@ func TestSyncer(t *testing.T) {
func TestSyncerRetry(t *testing.T) { func TestSyncerRetry(t *testing.T) {
const interval = time.Second const interval = time.Second
syncRequestCh := make(chan *syncRequest) syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(testLogger, syncRequestCh, interval) syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
@@ -77,8 +86,9 @@ func TestSyncerRetry(t *testing.T) {
} }
syncRequestCh <- &syncRequest{ syncRequestCh <- &syncRequest{
fn: requestFunc, fn: requestFunc,
errMsg: "error", errMsg: "error",
deadline: time.Now().Add(5 * time.Minute),
} }
// allow syncer to retry // allow syncer to retry
@@ -90,3 +100,41 @@ func TestSyncerRetry(t *testing.T) {
} }
mu.Unlock() mu.Unlock()
} }
func TestSyncerDropsStaleRequests(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
var (
mu sync.Mutex
n int // number of times request has been processed
)
for i := 0; i < 10; i++ {
syncRequestCh <- &syncRequest{
fn: func() error {
mu.Lock()
n++
mu.Unlock()
return nil
},
deadline: time.Now().Add(time.Duration(-i) * time.Second), // already exceeded deadline
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
syncer.terminate()
mu.Lock()
if n != 0 {
t.Errorf("requests has been processed %d times, want 0", n)
}
mu.Unlock()
}


@@ -1,6 +1,6 @@
# Asynqmon # Asynq CLI
Asynqmon is a command line tool to monitor the tasks managed by `asynq` package. Asynq CLI is a command line tool to monitor the tasks managed by `asynq` package.
## Table of Contents ## Table of Contents
@@ -8,31 +8,32 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
- [Quick Start](#quick-start) - [Quick Start](#quick-start)
- [Stats](#stats) - [Stats](#stats)
- [History](#history) - [History](#history)
- [Process Status](#process-status) - [Servers](#servers)
- [List](#list) - [List](#list)
- [Enqueue](#enqueue) - [Enqueue](#enqueue)
- [Delete](#delete) - [Delete](#delete)
- [Kill](#kill) - [Kill](#kill)
- [Cancel](#cancel) - [Cancel](#cancel)
- [Pause](#pause)
- [Config File](#config-file) - [Config File](#config-file)
## Installation ## Installation
In order to use the tool, compile it using the following command: In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynqmon go get github.com/hibiken/asynq/tools/asynq
This will create the asynqmon executable under your `$GOPATH/bin` directory. This will create the asynq executable under your `$GOPATH/bin` directory.
## Quickstart ## Quickstart
The tool has a few commands to inspect the state of tasks and queues. The tool has a few commands to inspect the state of tasks and queues.
Run `asynqmon help` to see all the available commands. Run `asynq help` to see all the available commands.
Asynqmon needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application. Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
By default, Asynqmon will try to connect to a redis server running at `localhost:6379`. By default, CLI will try to connect to a redis server running at `localhost:6379`.
### Stats ### Stats
@@ -40,11 +41,11 @@ Stats command gives the overview of the current state of tasks and queues. You c
Example: Example:
watch -n 3 asynqmon stats watch -n 3 asynq stats
This will run `asynqmon stats` command every 3 seconds. This will run `asynq stats` command every 3 seconds.
![Gif](/docs/assets/asynqmon_stats.gif) ![Gif](/docs/assets/asynq_stats.gif)
### History ### History
@@ -54,19 +55,17 @@ By default, it shows the stats from the last 10 days. Use `--days` to specify th
Example: Example:
asynqmon history --days=30 asynq history --days=30
![Gif](/docs/assets/asynqmon_history.gif) ![Gif](/docs/assets/asynq_history.gif)
### Process Status ### Servers
PS (ProcessStatus) command shows the list of running worker processes. Servers command shows the list of running worker servers pulling tasks from the given redis instance.
Example: Example:
asynqmon ps asynq servers
![Gif](/docs/assets/asynqmon_ps.gif)
### List ### List
@@ -74,11 +73,11 @@ List command shows all tasks in the specified state in a table format
Example: Example:
asynqmon ls retry asynq ls retry
asynqmon ls scheduled asynq ls scheduled
asynqmon ls dead asynq ls dead
asynqmon ls enqueued:default asynq ls enqueued:default
asynqmon ls inprogress asynq ls inprogress
### Enqueue ### Enqueue
@@ -88,13 +87,13 @@ Command `enq` takes a task ID and moves the task to **Enqueued** state. You can
Example: Example:
asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g asynq enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state. Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example: Example:
asynqmon enqall retry asynq enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state. Running the above command will move all **Retry** tasks to **Enqueued** state.
@@ -106,13 +105,13 @@ Command `del` takes a task ID and deletes the task. You can obtain the task ID b
Example: Example:
asynqmon del r:1575732274:bnogo8gt6toe23vhef0g asynq del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state. Command `delall` deletes all tasks which are in the specified state.
Example: Example:
asynqmon delall retry asynq delall retry
Running the above command will delete all **Retry** tasks. Running the above command will delete all **Retry** tasks.
@@ -124,13 +123,13 @@ Command `kill` takes a task ID and kills the task. You can obtain the task ID by
Example: Example:
asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g asynq kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state. Command `killall` kills all tasks which are in the specified state.
Example: Example:
asynqmon killall retry asynq killall retry
Running the above command will move all **Retry** tasks to **Dead** state. Running the above command will move all **Retry** tasks to **Dead** state.
@@ -144,15 +143,26 @@ Handler implementation needs to be context aware in order to actually stop proce
Example: Example:
asynqmon cancel bnogo8gt6toe23vhef0g asynq cancel bnogo8gt6toe23vhef0g
### Pause
Command `pause` pauses the specified queue. Tasks in paused queues are not processed by servers.
To resume processing from the queue, use `unpause` command.
To see which queues are currently paused, use `stats` command.
Example:
asynq pause email
asynq unpause email
## Config File ## Config File
You can use a config file to set default values for the flags. You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server. This is useful, for example when you have to connect to a remote redis server.
By default, `asynqmon` will try to read config file located in By default, `asynq` will try to read config file located in
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag. `$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag.
Config file example: Config file example:


@@ -18,17 +18,17 @@ import (
var cancelCmd = &cobra.Command{ var cancelCmd = &cobra.Command{
Use: "cancel [task id]", Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task", Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
the specified task. the specified task.
The command takes one argument which specifies the task to cancel. The command takes one argument which specifies the task to cancel.
The task should be in in-progress state. The task should be in in-progress state.
Identifier for a task should be obtained by running "asynqmon ls" command. Identifier for a task should be obtained by running "asynq ls" command.
Handler implementation needs to be context aware for cancelation signal to Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing. actually cancel the processing.
Example: asynqmon cancel bnogo8gt6toe23vhef0g`, Example: asynq cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
Run: cancel, Run: cancel,
} }
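As the command text above notes, cancelation only takes effect if the handler is context aware. A minimal sketch of such a handler (assuming the HandlerFunc signature used in the tests earlier; the work loop and addresses are invented for illustration):

package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// handleLongTask processes work in small steps and aborts as soon as the task's
// context is canceled, which is what "asynq cancel <task id>" ultimately triggers.
func handleLongTask(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 100; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
			// one unit of work per iteration
		}
	}
	return nil
}

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{})
	if err := srv.Start(asynq.HandlerFunc(handleLongTask)); err != nil {
		log.Fatal(err)
	}
	select {} // block; a real program would wait for signals and call srv.Stop()
}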


@@ -18,13 +18,13 @@ import (
var delCmd = &cobra.Command{ var delCmd = &cobra.Command{
Use: "del [task id]", Use: "del [task id]",
Short: "Deletes a task given an identifier", Short: "Deletes a task given an identifier",
Long: `Del (asynqmon del) will delete a task given an identifier. Long: `Del (asynq del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete. The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state. The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command. Identifier for a task should be obtained by running "asynq ls" command.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`, Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
Run: del, Run: del,
} }


@@ -20,11 +20,11 @@ var delallValidArgs = []string{"scheduled", "retry", "dead"}
var delallCmd = &cobra.Command{ var delallCmd = &cobra.Command{
Use: "delall [state]", Use: "delall [state]",
Short: "Deletes all tasks in the specified state", Short: "Deletes all tasks in the specified state",
Long: `Delall (asynqmon delall) will delete all tasks in the specified state. Long: `Delall (asynq delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead". The argument should be one of "scheduled", "retry", or "dead".
Example: asynqmon delall dead -> Deletes all dead tasks`, Example: asynq delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs, ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1), Args: cobra.ExactValidArgs(1),
Run: delall, Run: delall,
@@ -60,7 +60,7 @@ func delall(cmd *cobra.Command, args []string) {
case "dead": case "dead":
err = r.DeleteAllDeadTasks() err = r.DeleteAllDeadTasks()
default: default:
fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs) fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1) os.Exit(1)
} }
if err != nil { if err != nil {


@@ -18,16 +18,16 @@ import (
var enqCmd = &cobra.Command{ var enqCmd = &cobra.Command{
Use: "enq [task id]", Use: "enq [task id]",
Short: "Enqueues a task given an identifier", Short: "Enqueues a task given an identifier",
Long: `Enq (asynqmon enq) will enqueue a task given an identifier. Long: `Enq (asynq enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue. The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state. The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command. Identifier for a task should be obtained by running "asynq ls" command.
The task enqueued by this command will be processed as soon as the task The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor. gets dequeued by a processor.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`, Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
Run: enq, Run: enq,
} }


@@ -20,14 +20,14 @@ var enqallValidArgs = []string{"scheduled", "retry", "dead"}
var enqallCmd = &cobra.Command{ var enqallCmd = &cobra.Command{
Use: "enqall [state]", Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state", Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state. Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead". The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as it The tasks enqueued by this command will be processed as soon as it
gets dequeued by a processor. gets dequeued by a processor.
Example: asynqmon enqall dead -> Enqueues all dead tasks`, Example: asynq enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs, ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1), Args: cobra.ExactValidArgs(1),
Run: enqall, Run: enqall,
@@ -64,7 +64,7 @@ func enqall(cmd *cobra.Command, args []string) {
case "dead": case "dead":
n, err = r.EnqueueAllDeadTasks() n, err = r.EnqueueAllDeadTasks()
default: default:
fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs) fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1) os.Exit(1)
} }
if err != nil { if err != nil {


@@ -22,12 +22,12 @@ var days int
var historyCmd = &cobra.Command{ var historyCmd = &cobra.Command{
Use: "history", Use: "history",
Short: "Shows historical aggregate data", Short: "Shows historical aggregate data",
Long: `History (asynqmon history) will show the number of processed and failed tasks Long: `History (asynq history) will show the number of processed and failed tasks
from the last x days. from the last x days.
By default, it will show the data from the last 10 days. By default, it will show the data from the last 10 days.
Example: asynqmon history -x=30 -> Shows stats from the last 30 days`, Example: asynq history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: history, Run: history,
} }


@@ -18,13 +18,13 @@ import (
var killCmd = &cobra.Command{ var killCmd = &cobra.Command{
Use: "kill [task id]", Use: "kill [task id]",
Short: "Kills a task given an identifier", Short: "Kills a task given an identifier",
Long: `Kill (asynqmon kill) will put a task in dead state given an identifier. Long: `Kill (asynq kill) will put a task in dead state given an identifier.
The command takes one argument which specifies the task to kill. The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry state. The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running "asynqmon ls" command. Identifier for a task should be obtained by running "asynq ls" command.
Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`, Example: asynq kill r:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
Run: kill, Run: kill,
} }


@@ -20,11 +20,11 @@ var killallValidArgs = []string{"scheduled", "retry"}
var killallCmd = &cobra.Command{ var killallCmd = &cobra.Command{
Use: "killall [state]", Use: "killall [state]",
Short: "Kills all tasks in the specified state", Short: "Kills all tasks in the specified state",
Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state. Long: `Killall (asynq killall) will update all tasks from the specified state to dead state.
The argument should be either "scheduled" or "retry". The argument should be either "scheduled" or "retry".
Example: asynqmon killall retry -> Update all retry tasks to dead tasks`, Example: asynq killall retry -> Update all retry tasks to dead tasks`,
ValidArgs: killallValidArgs, ValidArgs: killallValidArgs,
Args: cobra.ExactValidArgs(1), Args: cobra.ExactValidArgs(1),
Run: killall, Run: killall,
@@ -59,7 +59,7 @@ func killall(cmd *cobra.Command, args []string) {
case "retry": case "retry":
n, err = r.KillAllRetryTasks() n, err = r.KillAllRetryTasks()
default: default:
fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs) fmt.Printf("error: `asynq killall [state]` only accepts %v as the argument.\n", killallValidArgs)
os.Exit(1) os.Exit(1)
} }
if err != nil { if err != nil {


@@ -13,8 +13,8 @@ import (
"time" "time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper" "github.com/spf13/viper"
) )
@@ -25,19 +25,19 @@ var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead
var lsCmd = &cobra.Command{ var lsCmd = &cobra.Command{
Use: "ls [state]", Use: "ls [state]",
Short: "Lists tasks in the specified state", Short: "Lists tasks in the specified state",
Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format. Long: `Ls (asynq ls) will list all tasks in the specified state in a table format.
The command takes one argument which specifies the state of tasks. The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled", The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead". "retry", or "dead".
Example: Example:
asynqmon ls dead -> Lists all tasks in dead state asynq ls dead -> Lists all tasks in dead state
Enqueued tasks requires a queue name after ":" Enqueued tasks requires a queue name after ":"
Example: Example:
asynqmon ls enqueued:default -> List tasks from default queue asynq ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue asynq ls enqueued:critical -> List tasks from critical queue
`, `,
Args: cobra.ExactValidArgs(1), Args: cobra.ExactValidArgs(1),
Run: ls, Run: ls,
@@ -72,7 +72,7 @@ func ls(cmd *cobra.Command, args []string) {
switch parts[0] { switch parts[0] {
case "enqueued": case "enqueued":
if len(parts) != 2 { if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n") fmt.Printf("error: Missing queue name\n`asynq ls enqueued:[queue name]`\n")
os.Exit(1) os.Exit(1)
} }
listEnqueued(r, parts[1]) listEnqueued(r, parts[1])
@@ -85,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
case "dead": case "dead":
listDead(r) listDead(r)
default: default:
fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs) fmt.Printf("error: `asynq ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1) os.Exit(1)
} }
} }
@@ -93,7 +93,7 @@ func ls(cmd *cobra.Command, args []string) {
// queryID returns an identifier used for "enq" command. // queryID returns an identifier used for "enq" command.
// score is the zset score and queryType should be one // score is the zset score and queryType should be one
// of "s", "r" or "d" (scheduled, retry, dead respectively). // of "s", "r" or "d" (scheduled, retry, dead respectively).
func queryID(id xid.ID, score int64, qtype string) string { func queryID(id uuid.UUID, score int64, qtype string) string {
const format = "%v:%v:%v" const format = "%v:%v:%v"
return fmt.Sprintf(format, qtype, score, id) return fmt.Sprintf(format, qtype, score, id)
} }
@@ -101,22 +101,22 @@ func queryID(id xid.ID, score int64, qtype string) string {
// parseQueryID is a reverse operation of queryID function. // parseQueryID is a reverse operation of queryID function.
// It takes a queryID and return each part of id with proper // It takes a queryID and return each part of id with proper
// type if valid, otherwise it reports an error. // type if valid, otherwise it reports an error.
func parseQueryID(queryID string) (id xid.ID, score int64, qtype string, err error) { func parseQueryID(queryID string) (id uuid.UUID, score int64, qtype string, err error) {
parts := strings.Split(queryID, ":") parts := strings.Split(queryID, ":")
if len(parts) != 3 { if len(parts) != 3 {
return xid.NilID(), 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
id, err = xid.FromString(parts[2]) id, err = uuid.Parse(parts[2])
if err != nil { if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
score, err = strconv.ParseInt(parts[1], 10, 64) score, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil { if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
qtype = parts[0] qtype = parts[0]
if len(qtype) != 1 || !strings.Contains("srd", qtype) { if len(qtype) != 1 || !strings.Contains("srd", qtype) {
return xid.NilID(), 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
return id, score, qtype, nil return id, score, qtype, nil
} }
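For reference, the identifier these helpers produce and parse has the form "<type>:<score>:<id>", where the type is one of "s", "r", or "d" and the id is now a UUID rather than an xid. A small round-trip sketch in the same format (values made up):

package main

import (
	"fmt"
	"strconv"
	"strings"

	"github.com/google/uuid"
)

func main() {
	id := uuid.New()
	q := fmt.Sprintf("%v:%v:%v", "d", 1575732274, id) // e.g. "d:1575732274:9f4c..."

	parts := strings.Split(q, ":") // UUID strings contain no ':' so this yields 3 parts
	score, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		panic(err)
	}
	parsed, err := uuid.Parse(parts[2])
	if err != nil {
		panic(err)
	}
	fmt.Println(parts[0], score, parsed == id) // d 1575732274 true
}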

tools/asynq/cmd/migrate.go (new file, 212 lines)

@@ -0,0 +1,212 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cast"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// migrateCmd represents the migrate command
var migrateCmd = &cobra.Command{
Use: "migrate",
Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq@%s", base.Version),
Long: fmt.Sprintf("Migrate (asynq migrate) will convert all tasks in redis to be compatible with asynq@%s.", base.Version),
Run: migrate,
}
func init() {
rootCmd.AddCommand(migrateCmd)
}
func migrate(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
lists := []string{base.InProgressQueue}
allQueues, err := c.SMembers(base.AllQueues).Result()
if err != nil {
fmt.Printf("error: could not read all queues: %v", err)
os.Exit(1)
}
lists = append(lists, allQueues...)
for _, key := range lists {
if err := migrateList(c, key); err != nil {
fmt.Printf("error: %v", err)
os.Exit(1)
}
}
zsets := []string{base.ScheduledQueue, base.RetryQueue, base.DeadQueue}
for _, key := range zsets {
if err := migrateZSet(c, key); err != nil {
fmt.Printf("error: %v", err)
os.Exit(1)
}
}
}
type oldTaskMessage struct {
// Unchanged
Type string
Payload map[string]interface{}
ID uuid.UUID
Queue string
Retry int
Retried int
ErrorMsg string
UniqueKey string
// Following fields have changed.
// Timeout specifies the timeout duration for the task as a string
// parsable by time.ParseDuration (e.g. "30m").
//
// A zero duration means no timeout.
Timeout string
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
}
var defaultTimeout = 30 * time.Minute
func convertMessage(old *oldTaskMessage) (*base.TaskMessage, error) {
timeout, err := time.ParseDuration(old.Timeout)
if err != nil {
return nil, fmt.Errorf("could not parse Timeout field of %+v", old)
}
deadline, err := time.Parse(time.RFC3339, old.Deadline)
if err != nil {
return nil, fmt.Errorf("could not parse Deadline field of %+v", old)
}
if timeout == 0 && deadline.IsZero() {
timeout = defaultTimeout
}
if deadline.IsZero() {
// Zero value used to be time.Time{},
// in the new schema zero value is represented by
// zero in Unix time.
deadline = time.Unix(0, 0)
}
return &base.TaskMessage{
Type: old.Type,
Payload: old.Payload,
ID: uuid.New(),
Queue: old.Queue,
Retry: old.Retry,
Retried: old.Retried,
ErrorMsg: old.ErrorMsg,
UniqueKey: old.UniqueKey,
Timeout: int64(timeout.Seconds()),
Deadline: deadline.Unix(),
}, nil
}
func deserialize(s string) (*base.TaskMessage, error) {
// Try deserializing as old message.
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var old *oldTaskMessage
if err := d.Decode(&old); err != nil {
// Try deserializing as new message.
d = json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg *base.TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, fmt.Errorf("could not deserialize %s into task message: %v", s, err)
}
return msg, nil
}
return convertMessage(old)
}
func migrateZSet(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []*redis.Z
for _, z := range res {
s, err := cast.ToStringE(z.Member)
if err != nil {
return fmt.Errorf("could not cast to string: %v", err)
}
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, &redis.Z{Score: z.Score, Member: encoded})
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.ZAdd(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}
func migrateList(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.LRange(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []interface{}
for _, s := range res {
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, encoded)
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.LPush(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}
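To make the conversion above concrete: a duration-string Timeout becomes whole seconds, an RFC3339 Deadline becomes Unix seconds, a missing deadline is stored as Unix time 0, and a task with neither gets the 30-minute default. A standalone sketch of the same arithmetic (example values invented):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Old format: Timeout as a duration string, Deadline as an RFC3339 timestamp.
	timeout, _ := time.ParseDuration("10m")
	deadline, _ := time.Parse(time.RFC3339, "2020-07-01T15:04:05Z")
	fmt.Println(int64(timeout.Seconds()), deadline.Unix()) // 600 1593615845

	// A task with neither a timeout nor a deadline gets the 30-minute default,
	// and the zero deadline is stored as Unix time 0.
	fmt.Println(int64((30 * time.Minute).Seconds()), time.Unix(0, 0).Unix()) // 1800 0
}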

tools/asynq/cmd/pause.go (new file, 47 lines)

@@ -0,0 +1,47 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// pauseCmd represents the pause command
var pauseCmd = &cobra.Command{
Use: "pause [queue name]",
Short: "Pauses the specified queue",
Long: `Pause (asynq pause) will pause the specified queue.
Asynq servers will not process tasks from paused queues.
Use the "unpause" command to resume a paused queue.
Example: asynq pause default -> Pause the "default" queue`,
Args: cobra.ExactValidArgs(1),
Run: pause,
}
func init() {
rootCmd.AddCommand(pauseCmd)
}
func pause(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.Pause(args[0])
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully paused queue %q\n", args[0])
}


@@ -18,11 +18,11 @@ import (
var rmqCmd = &cobra.Command{ var rmqCmd = &cobra.Command{
Use: "rmq [queue name]", Use: "rmq [queue name]",
Short: "Removes the specified queue", Short: "Removes the specified queue",
Long: `Rmq (asynqmon rmq) will remove the specified queue. Long: `Rmq (asynq rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty. By default, it will remove the queue only if it's empty.
Use --force option to override this behavior. Use --force option to override this behavior.
Example: asynqmon rmq low -> Removes "low" queue`, Example: asynq rmq low -> Removes "low" queue`,
Args: cobra.ExactValidArgs(1), Args: cobra.ExactValidArgs(1),
Run: rmq, Run: rmq,
} }
@@ -44,7 +44,7 @@ func rmq(cmd *cobra.Command, args []string) {
err := r.RemoveQueue(args[0], rmqForce) err := r.RemoveQueue(args[0], rmqForce)
if err != nil { if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok { if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0]) fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq rmq --force %s'\n", err, args[0])
os.Exit(1) os.Exit(1)
} }
fmt.Printf("error: %v", err) fmt.Printf("error: %v", err)


@@ -11,6 +11,7 @@ import (
"strings" "strings"
"text/tabwriter" "text/tabwriter"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cobra" "github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir" homedir "github.com/mitchellh/go-homedir"
@@ -26,9 +27,20 @@ var password string
// rootCmd represents the base command when called without any subcommands // rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{ var rootCmd = &cobra.Command{
Use: "asynqmon", Use: "asynq",
Short: "A monitoring tool for asynq queues", Short: "A monitoring tool for asynq queues",
Long: `Asynqmon is a monitoring CLI to inspect tasks and queues managed by asynq.`, Long: `Asynq is a monitoring CLI to inspect tasks and queues managed by asynq.`,
Version: base.Version,
}
var versionOutput = fmt.Sprintf("asynq version %s\n", base.Version)
var versionCmd = &cobra.Command{
Use: "version",
Hidden: true,
Run: func(cmd *cobra.Command, args []string) {
fmt.Print(versionOutput)
},
} }
// Execute adds all child commands to the root command and sets flags appropriately. // Execute adds all child commands to the root command and sets flags appropriately.
@@ -43,7 +55,10 @@ func Execute() {
func init() { func init() {
cobra.OnInitialize(initConfig) cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag default values (default is $HOME/.asynqmon.yaml)") rootCmd.AddCommand(versionCmd)
rootCmd.SetVersionTemplate(versionOutput)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag default values (default is $HOME/.asynq.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI") rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)") rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server") rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
@@ -65,9 +80,9 @@ func initConfig() {
os.Exit(1) os.Exit(1)
} }
// Search config in home directory with name ".asynqmon" (without extension). // Search config in home directory with name ".asynq" (without extension).
viper.AddConfigPath(home) viper.AddConfigPath(home)
viper.SetConfigName(".asynqmon") viper.SetConfigName(".asynq")
} }
viper.AutomaticEnv() // read in environment variables that match viper.AutomaticEnv() // read in environment variables that match


@@ -18,64 +18,64 @@ import (
"github.com/spf13/viper" "github.com/spf13/viper"
) )
// psCmd represents the ps command // serversCmd represents the servers command
var psCmd = &cobra.Command{ var serversCmd = &cobra.Command{
Use: "ps", Use: "servers",
Short: "Shows all background worker processes", Short: "Shows all running worker servers",
Long: `Ps (asynqmon ps) will show all background worker processes Long: `Servers (asynq servers) will show all running worker servers
backed by the specified redis instance. pulling tasks from the specified redis instance.
The command shows the following for each process: The command shows the following for each server:
* Host and PID of the process * Host and PID of the process in which the server is running
* Number of active workers out of worker pool * Number of active workers out of worker pool
* Queue configuration * Queue configuration
* State of the worker process ("running" | "stopped") * State of the worker server ("running" | "quiet")
* Time the process was started * Time the server was started
A "running" process is processing tasks in queues. A "running" server is pulling tasks from queues and processing them.
A "stopped" process is no longer processing new tasks.`, A "quiet" server is no longer pulling new tasks from queues`,
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: ps, Run: servers,
} }
func init() { func init() {
rootCmd.AddCommand(psCmd) rootCmd.AddCommand(serversCmd)
} }
func ps(cmd *cobra.Command, args []string) { func servers(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{ r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"), Addr: viper.GetString("uri"),
DB: viper.GetInt("db"), DB: viper.GetInt("db"),
Password: viper.GetString("password"), Password: viper.GetString("password"),
})) }))
processes, err := r.ListProcesses() servers, err := r.ListServers()
if err != nil { if err != nil {
fmt.Println(err) fmt.Println(err)
os.Exit(1) os.Exit(1)
} }
if len(processes) == 0 { if len(servers) == 0 {
fmt.Println("No processes") fmt.Println("No running servers")
return return
} }
// sort by hostname and pid // sort by hostname and pid
sort.Slice(processes, func(i, j int) bool { sort.Slice(servers, func(i, j int) bool {
x, y := processes[i], processes[j] x, y := servers[i], servers[j]
if x.Host != y.Host { if x.Host != y.Host {
return x.Host < y.Host return x.Host < y.Host
} }
return x.PID < y.PID return x.PID < y.PID
}) })
// print processes // print server info
cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"} cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
printRows := func(w io.Writer, tmpl string) { printRows := func(w io.Writer, tmpl string) {
for _, ps := range processes { for _, info := range servers {
fmt.Fprintf(w, tmpl, fmt.Fprintf(w, tmpl,
ps.Host, ps.PID, ps.Status, info.Host, info.PID, info.Status,
fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency), fmt.Sprintf("%d/%d", info.ActiveWorkerCount, info.Concurrency),
formatQueues(ps.Queues), timeAgo(ps.Started)) formatQueues(info.Queues), timeAgo(info.Started))
} }
} }
printTable(cols, printRows) printTable(cols, printRows)


@@ -7,7 +7,6 @@ package cmd
import ( import (
"fmt" "fmt"
"os" "os"
"sort"
"strconv" "strconv"
"strings" "strings"
"text/tabwriter" "text/tabwriter"
@@ -33,7 +32,7 @@ Specifically, the command shows the following:
To monitor the tasks continuously, it's recommended that you run this To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command. command in conjunction with the watch command.
Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`, Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: stats, Run: stats,
} }
@@ -96,24 +95,31 @@ func printStates(s *rdb.Stats) {
tw.Flush() tw.Flush()
} }
func printQueues(queues map[string]int) { func printQueues(queues []*rdb.Queue) {
var qnames, seps, counts []string var headers, seps, counts []string
for q := range queues { for _, q := range queues {
qnames = append(qnames, strings.Title(q)) title := queueTitle(q)
headers = append(headers, title)
seps = append(seps, strings.Repeat("-", len(title)))
counts = append(counts, strconv.Itoa(q.Size))
} }
sort.Strings(qnames) // sort for stable order format := strings.Repeat("%v\t", len(headers)) + "\n"
for _, q := range qnames {
seps = append(seps, strings.Repeat("-", len(q)))
counts = append(counts, strconv.Itoa(queues[strings.ToLower(q)]))
}
format := strings.Repeat("%v\t", len(qnames)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0) tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, toInterfaceSlice(qnames)...) fmt.Fprintf(tw, format, toInterfaceSlice(headers)...)
fmt.Fprintf(tw, format, toInterfaceSlice(seps)...) fmt.Fprintf(tw, format, toInterfaceSlice(seps)...)
fmt.Fprintf(tw, format, toInterfaceSlice(counts)...) fmt.Fprintf(tw, format, toInterfaceSlice(counts)...)
tw.Flush() tw.Flush()
} }
func queueTitle(q *rdb.Queue) string {
var b strings.Builder
b.WriteString(strings.Title(q.Name))
if q.Paused {
b.WriteString(" (Paused)")
}
return b.String()
}
func printStats(s *rdb.Stats) { func printStats(s *rdb.Stats) {
format := strings.Repeat("%v\t", 3) + "\n" format := strings.Repeat("%v\t", 3) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0) tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
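With the new signature, printQueues renders one column per queue and queueTitle marks paused queues in the header row. With made-up queue sizes and the default queue paused, the output would look roughly like:

Default (Paused)  Critical  Low
----------------  --------  ---
1800              32        7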


@@ -0,0 +1,46 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// unpauseCmd represents the unpause command
var unpauseCmd = &cobra.Command{
Use: "unpause [queue name]",
Short: "Unpauses the specified queue",
Long: `Unpause (asynq unpause) will unpause the specified queue.
Asynq servers will process tasks from unpaused/resumed queues.
Example: asynq unpause default -> Resume the "default" queue`,
Args: cobra.ExactValidArgs(1),
Run: unpause,
}
func init() {
rootCmd.AddCommand(unpauseCmd)
}
func unpause(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.Unpause(args[0])
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully resumed queue %q\n", args[0])
}


@@ -20,7 +20,7 @@ import (
var workersCmd = &cobra.Command{ var workersCmd = &cobra.Command{
Use: "workers", Use: "workers",
Short: "Shows all running workers information", Short: "Shows all running workers information",
Long: `Workers (asynqmon workers) will show all running workers information. Long: `Workers (asynq workers) will show all running workers information.
The command shows the following for each worker: The command shows the following for each worker:
* Process in which the worker is running * Process in which the worker is running
@@ -61,7 +61,7 @@ func workers(cmd *cobra.Command, args []string) {
if x.Started != y.Started { if x.Started != y.Started {
return x.Started.Before(y.Started) return x.Started.Before(y.Started)
} }
return x.ID.String() < y.ID.String() return x.ID < y.ID
}) })
cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"} cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}


@@ -4,7 +4,7 @@
package main package main
import "github.com/hibiken/asynq/tools/asynqmon/cmd" import "github.com/hibiken/asynq/tools/asynq/cmd"
func main() { func main() {
cmd.Execute() cmd.Execute()


@@ -4,9 +4,10 @@ go 1.13
require ( require (
github.com/go-redis/redis/v7 v7.2.0 github.com/go-redis/redis/v7 v7.2.0
github.com/google/uuid v1.1.1
github.com/hibiken/asynq v0.4.0 github.com/hibiken/asynq v0.4.0
github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/go-homedir v1.1.0
github.com/rs/xid v1.2.1 github.com/spf13/cast v1.3.1
github.com/spf13/cobra v0.0.5 github.com/spf13/cobra v0.0.5
github.com/spf13/viper v1.6.2 github.com/spf13/viper v1.6.2
) )


@@ -1,4 +1,5 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -15,6 +16,7 @@ github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3Ee
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
@@ -24,10 +26,6 @@ github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeME
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis v6.15.7+incompatible h1:3skhDh95XQMpnqeqNftPkQD9jL9e5e36z/1SUm6dy1U=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.1.0 h1:I4C4a8UGbFejiVjtYVTRVOiMIJ5pm5Yru6ibvDX/OS0=
github.com/go-redis/redis/v7 v7.1.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
@@ -38,10 +36,15 @@ github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
@@ -49,19 +52,22 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgf
 github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
 github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
-github.com/hibiken/asynq v0.4.0 h1:NvAfYX0DRe04WgGMKRg5oX7bs6ktv2fu9YwB6O356FI=
-github.com/hibiken/asynq v0.4.0/go.mod h1:dtrVkxCsGPVhVNHMDXAH7lFq64kbj43+G6lt4FQZfW4=
+github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
 github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
 github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
 github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
+github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
 github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
+github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
@@ -74,15 +80,14 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
 github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
 github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
 github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
 github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
 github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
 github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
 github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
-github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
-github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
 github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
 github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
@@ -94,18 +99,16 @@ github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R
 github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
 github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
 github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
-github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
-github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
 github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
 github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
 github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
 github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
 github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
 github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
 github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
-github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
-github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
 github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
 github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
 github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
@@ -113,17 +116,13 @@ github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
 github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
 github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
 github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
-github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
-github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
 github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
 github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
 github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
-github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
 github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
 github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
@@ -148,6 +147,7 @@ golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73r
 golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
 golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -172,6 +172,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
 golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
 golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
 google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
@@ -180,11 +181,14 @@ google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ij
 gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
 gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
 gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
 gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
 gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
 gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
 gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
 gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
 gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=