Compare commits

...

131 Commits

Author SHA1 Message Date
Ken Hibino
9bd3d8e19e v0.10.0 2020-07-06 05:53:56 -07:00
Ken Hibino
7382e2aeb8 Do not start worker goroutine for task already exceeded its deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
007fac8055 Invoke error handler when ctx.Done channel is closed 2020-07-06 05:48:31 -07:00
Ken Hibino
8d43fe407a Change ErrorHandler function signature 2020-07-06 05:48:31 -07:00
Ken Hibino
34b90ecc8a Return Result struct to caller of Enqueue 2020-07-06 05:48:31 -07:00
Ken Hibino
8b60e6a268 Replace github.com/rs/xid with github.com/google/uuid 2020-07-06 05:48:31 -07:00
Ken Hibino
486dcd799b Add version command to CLI 2020-07-06 05:48:31 -07:00
Ken Hibino
195f4603bb Add migrate command to CLI
The command converts all messages in redis to be compatible for asynq
v0.10.0
2020-07-06 05:48:31 -07:00
Ken Hibino
2e2c9b9f6b Update docs 2020-07-06 05:48:31 -07:00
Ken Hibino
199bf4d66a Minor code cleanup 2020-07-06 05:48:31 -07:00
Ken Hibino
7e942ec241 Use int64 type for Timeout and Deadline in TaskMessage 2020-07-06 05:48:31 -07:00
Ken Hibino
379da8f7a2 Clean up processor test 2020-07-06 05:48:31 -07:00
Ken Hibino
feee87adda Add recoverer 2020-07-06 05:48:31 -07:00
Ken Hibino
7657f560ec Add RDB.ListDeadlineExceeded 2020-07-06 05:48:31 -07:00
Ken Hibino
7c7de0d8e0 Fix processor 2020-07-06 05:48:31 -07:00
Ken Hibino
83f1e20d74 Add deadline to syncRequest
- syncer will drop a request if its deadline has been exceeded
2020-07-06 05:48:31 -07:00
Ken Hibino
4e8ac151ae Update processor to adapt for deadlines set change
- Processor dequeues tasks only when it's available to process
- Processor retries a task when its context's Done channel is closed
2020-07-06 05:48:31 -07:00
Ken Hibino
08b71672aa Update RDB.Requeue to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
92af00f9fd Update RDB.Dequeue to return deadline as time.Time 2020-07-06 05:48:31 -07:00
Ken Hibino
113451ce6a Update RDB.Kill to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9cd9f3d6b4 Update RDB.Retry to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
7b9119c703 Update RDB.Done to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9b05dea394 Update RDB.Dequeue to return message and deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
6cc5bafaba Add task message to deadlines set on dequeue
Updated dequeueCmd to decode the message and compute its deadline and add
the message to the Deadline set.
2020-07-06 05:48:31 -07:00
Ken Hibino
716d3d987e Use default timeout of 30mins if both timeout and deadline are not
provided
2020-07-06 05:48:31 -07:00
Ken Hibino
0527b93432 Change TaskMessage Timeout and Deadline to int
* This change breaks existing tasks in Redis
2020-07-06 05:48:31 -07:00
Ken Hibino
5dddc35d7c Add redis key for deadlines in base package 2020-07-06 05:48:31 -07:00
Ken Hibino
4e5f596910 Fix Client.Enqueue to always call enqueue
Closes https://github.com/hibiken/asynq/issues/158
2020-06-14 05:54:18 -07:00
Ken Hibino
8bf5917cd9 v0.9.4 2020-06-13 06:27:28 -07:00
Ken Hibino
7f30fa2bb6 Fix requeue logic in processor 2020-06-13 06:22:32 -07:00
Ken Hibino
ade6e61f51 v0.9.3 2020-06-12 06:31:42 -07:00
Ken Hibino
a2abeedaa0 Fix JSON number ovewflow issue 2020-06-12 06:29:36 -07:00
lion.zhao
81bb52b08c processor: log detail err in markAsDone func 2020-06-10 05:57:31 -07:00
Ken Hibino
bc2a7635a0 v0.9.2 2020-06-08 06:23:02 -07:00
Ken Hibino
f65d408bf9 Update docs for pause feature 2020-06-08 06:22:14 -07:00
Ken Hibino
4749b4bbfc Add benchmark test to verify client enqueue performance while server is
running
2020-06-08 06:06:18 -07:00
Ken Hibino
06c4a1c7f8 Limit the number of tasks moved by CheckAndEnqueue to prevent a long
running script
2020-06-08 06:06:18 -07:00
Ken Hibino
8af4cbad51 Fix data race in test 2020-06-08 06:06:18 -07:00
Ken Hibino
4e800a7f68 Update stats command to show queue paused status 2020-06-08 06:06:18 -07:00
Ken Hibino
d6a5c84dc6 Add pause and unpause command to CLI 2020-06-08 06:06:18 -07:00
Ken Hibino
363cfedb49 Update Dequeue operation to skip paused queues 2020-06-08 06:06:18 -07:00
Ken Hibino
4595bd41c3 Add Pause and Unpause methods to rdb 2020-06-08 06:06:18 -07:00
Ken Hibino
e236d55477 Fix cli build 2020-06-04 06:35:50 -07:00
Ken Hibino
a38f628f3b Refactor server state management 2020-05-31 06:41:19 -07:00
Ken Hibino
69ad583278 v0.9.1 2020-05-29 05:42:40 -07:00
Ken Hibino
23f46dde52 Add helper functions to extract task metadata from context 2020-05-29 05:40:42 -07:00
lihe
39188fe930 remove typo and redundant code 2020-05-22 05:11:54 -07:00
Ken Hibino
4492ed9255 Change internal constructor signatures.
Created "params" type to avoid positional arguments.
Personally it feels more explicit and reads better.
2020-05-17 13:25:24 -07:00
Ken Hibino
4e3e053989 Update readme 2020-05-16 11:00:44 -07:00
Ken Hibino
aef0775c05 v0.9.0 2020-05-16 08:02:57 -07:00
Ken Hibino
de146993d2 Add log messages around Server.Quiet 2020-05-16 08:01:39 -07:00
Ken Hibino
60cbf8dc5a Minor code cleanup 2020-05-16 08:00:35 -07:00
Ken Hibino
fb38086590 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-16 08:00:35 -07:00
Ken Hibino
cfcd19a222 Change default log level to info 2020-05-16 08:00:35 -07:00
Ken Hibino
24ee4b9693 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-16 08:00:35 -07:00
Ken Hibino
7849b395bd Update changelog 2020-05-16 08:00:35 -07:00
Ken Hibino
fa3082e5bb Change LogLevel to satisfy flag.Value interface 2020-05-16 08:00:35 -07:00
Ken Hibino
d13f7e900f Allow setting minimum log level for logger 2020-05-16 08:00:35 -07:00
Ken Hibino
b63476ddc8 Simplify Logger interface 2020-05-16 08:00:35 -07:00
Ken Hibino
210b026b01 Add log messages around Server.Quiet 2020-05-16 07:12:08 -07:00
Ken Hibino
556b2103fe Minor code cleanup 2020-05-15 08:19:35 -07:00
Ken Hibino
0289bc7a10 Clean up log messages
Moved development purpose log messages to DEBUG level.
2020-05-11 20:28:49 -07:00
Ken Hibino
ae942c93e5 Change default log level to info 2020-05-11 20:28:49 -07:00
Ken Hibino
0faf97f146 Define test flags for package testing
Added test flags for

- redis address (defaults to "localhost:6379")
- redis db number (defaults to 14)
- log level (defaults to FATAL)
2020-05-11 06:22:43 -07:00
Ken Hibino
711bfa371f Update changelog 2020-05-11 06:22:43 -07:00
Ken Hibino
73d62844e6 Change LogLevel to satisfy flag.Value interface 2020-05-11 06:22:43 -07:00
Ken Hibino
00b82904c6 Allow setting minimum log level for logger 2020-05-11 06:22:43 -07:00
Ken Hibino
a866369866 Simplify Logger interface 2020-05-11 06:22:43 -07:00
Ken Hibino
26b78136ba v0.8.3 2020-05-08 06:21:01 -07:00
t-asaka
44aad7f037 Add redis conn close func to client 2020-05-08 06:15:14 -07:00
Ken Hibino
9884d5f2fa v0.8.2 2020-05-03 16:55:34 -07:00
Ken Hibino
826f1ecff4 Update docs 2020-05-03 16:54:39 -07:00
Ken Hibino
24f2b64c6c Make sure to invoke CancelFunc in all cases 2020-05-03 15:58:23 -07:00
Ken Hibino
1c1474c55c Add tests to simulate cases where server cannot talk to redis 2020-05-02 07:05:26 -07:00
Ken Hibino
5161b9368a Clean up tests 2020-05-02 07:05:26 -07:00
Ken Hibino
0c998a8e17 Add test for signal handling 2020-04-28 06:56:05 -07:00
Ken Hibino
49160f2536 v0.8.1 2020-04-27 06:49:12 -07:00
Ken Hibino
e33d297d8e Add SetDefaultOptions method to Client 2020-04-27 06:45:13 -07:00
Ken Hibino
eb8ced6bdd Add ParseRedisURI helper function 2020-04-25 13:06:20 -07:00
Ken Hibino
789a9fd711 Update readme 2020-04-20 07:52:26 -07:00
Ken Hibino
5924cdac33 Add example tests 2020-04-19 11:36:43 -07:00
Ken Hibino
442c9275a0 v0.8.0 2020-04-19 09:08:20 -07:00
Ken Hibino
a0865df33c Change default concurrency to the number of CPUs 2020-04-19 08:51:17 -07:00
Ken Hibino
431a96a1f7 Update changelog 2020-04-19 08:51:17 -07:00
Ken Hibino
74e5582cfc Update readme 2020-04-19 08:51:17 -07:00
Ken Hibino
bf542a781c Add failure test for heartbeater 2020-04-19 08:51:17 -07:00
Ken Hibino
7c7f8e5f30 Move Broker interface to base package 2020-04-19 08:51:17 -07:00
Ken Hibino
46ab4417dd Add test to simulate situation where redis is down 2020-04-19 08:51:17 -07:00
Ken Hibino
f8a94fb839 Define broker interface 2020-04-19 08:51:17 -07:00
Ken Hibino
42453280f4 Fix subscriber to not panic when it cannot establish pubsub channel on
startup
2020-04-19 08:51:17 -07:00
Ken Hibino
4ec2dc9e47 Minor reorganization in tests 2020-04-19 08:51:17 -07:00
Ken Hibino
45933eb6b0 Reword doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
4df372b369 Allow user to configure shutdown timeout 2020-04-19 08:51:17 -07:00
Ken Hibino
c688b8f4f9 Fix test for base package 2020-04-19 08:51:17 -07:00
Ken Hibino
eef2f5f3cb Add test cases for server error 2020-04-19 08:51:17 -07:00
Ken Hibino
239ef27a6e Update doc comments 2020-04-19 08:51:17 -07:00
Ken Hibino
24da281aa7 Update docs with new APIs 2020-04-19 08:51:17 -07:00
Ken Hibino
b086e88a47 Rename ps command to servers 2020-04-19 08:51:17 -07:00
Ken Hibino
cf61911a49 Update all reference to asynqmon to Asynq CLI 2020-04-19 08:51:17 -07:00
Ken Hibino
aafd8a5b74 Rename internal ProcessState to ServerState 2020-04-19 08:51:17 -07:00
Ken Hibino
4f11e52558 Rename CLI to asynq 2020-04-19 08:51:17 -07:00
Ken Hibino
b14c73809e Refactor server state 2020-04-19 08:51:17 -07:00
Ken Hibino
779065c269 Export Start, Stop and Quiet method on Server type 2020-04-19 08:51:17 -07:00
Ken Hibino
f9842ba914 Rename Background to Server 2020-04-19 08:51:17 -07:00
Ken Hibino
022dc29701 Add overview section in readme 2020-04-11 17:08:31 -07:00
Ken Hibino
40d1889ba0 Highlight stability and compatibility section in readme 2020-04-11 09:30:00 -07:00
Ken Hibino
7e96e893fe (fix): Change log messages depending on signals being handled 2020-04-10 08:56:01 -07:00
Ken Hibino
84b0c76c8b v0.7.1 2020-04-05 14:56:06 -07:00
Ken Hibino
60b887b8e3 Fix singnal handling for different systems 2020-04-05 14:37:23 -07:00
Ken Hibino
7864bea55c Update readme
Add features section
2020-03-28 08:44:06 -07:00
Apos Spanos
47220554ca Correct typo 2020-03-23 13:47:05 -07:00
Ken Hibino
f91c05b92c v0.7.0 2020-03-22 12:04:37 -07:00
Ken Hibino
9b4438347e Fix comment 2020-03-21 11:44:26 -07:00
Ken Hibino
c33dd447ac Allow client to enqueue a task with unique option
Changes:

- Added Unique option for clients
- Require go v.13 or above (to use new errors wrapping functions)
- Fixed adding queue key to all-queues set (asynq:queues) when scheduling.
2020-03-21 11:40:40 -07:00
Ken Hibino
6df2c3ae2b v0.6.2 2020-03-15 21:02:28 -07:00
Ken Hibino
37554fd23c Update readme example code 2020-03-15 14:56:00 -07:00
Ken Hibino
77f5a38453 Refactor payload_test to reduce cyclomatic complexities 2020-03-14 12:30:42 -07:00
Ken Hibino
8d2b9d6be7 Add comments to exported types and functions from internal/log package 2020-03-13 21:04:45 -07:00
Bo-Yi Wu
1b7d557c66 fix typo 2020-03-13 20:02:26 -07:00
Bo-Yi Wu
30b68728d4 chore(lint): fix from gofmt -s 2020-03-13 20:01:39 -07:00
Ken Hibino
310d38620d Minor tweak to readme example code 2020-03-13 17:27:20 -07:00
Ken Hibino
1a53bbf21b Update changelog 2020-03-13 17:27:20 -07:00
Ken Hibino
9c79a7d507 Simplify code with gofmt -s 2020-03-13 14:24:24 -07:00
Ken Hibino
516f95edff Add Use method to better support middlewares with ServeMux 2020-03-13 14:13:17 -07:00
Ken Hibino
cf7a677312 v0.6.1 2020-03-12 08:42:34 -07:00
Ken Hibino
0bc6eba021 Allow custom logger to be used in Background 2020-03-12 08:40:37 -07:00
Ken Hibino
d664d68fa4 Extract out log package 2020-03-09 07:17:52 -07:00
Ken Hibino
a425f54d23 [ci skip] Remove todo comment 2020-03-09 06:09:07 -07:00
Ken Hibino
3c722386b0 Add Deadline option when enqueuing tasks
Deadline option sets the deadline for the given task's context deadline.
2020-03-08 17:12:42 -07:00
Ken Hibino
25992c2781 [ci skip] Minor readme update 2020-03-03 21:39:16 -08:00
Ken Hibino
b9e3cad7a7 [ci skip] Update readme
- Added flow chart for task queue
- Reordered sections
2020-03-02 07:06:11 -08:00
76 changed files with 7191 additions and 2339 deletions

.gitignore

@@ -15,7 +15,7 @@
/examples
# Ignore command binary
/tools/asynqmon/asynqmon
/tools/asynq/asynq
# Ignore asynqmon config file
.asynqmon.*
# Ignore asynq config file
.asynq.*


@@ -2,11 +2,10 @@ language: go
go_import_path: github.com/hibiken/asynq
git:
depth: 1
env:
- GO111MODULE=on # go modules are the default
go: [1.12.x, 1.13.x, 1.14.x]
go: [1.13.x, 1.14.x]
script:
- go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
- go test -run=XXX -bench=. -loglevel=debug ./...
services:
- redis-server
after_success:


@@ -3,13 +3,16 @@ if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
cd ${TRAVIS_BUILD_DIR}/.. && \
git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
cd "${TRAVIS_REPO_SLUG}-bench" && \
# Benchmark master
git checkout master && \
go test -run=XXX -bench=. ./... > master.txt && \
# Benchmark feature branch
git checkout ${TRAVIS_COMMIT} && \
go test -run=XXX -bench=. ./... > feature.txt && \
go get -u golang.org/x/tools/cmd/benchcmp && \
# compare two benchmarks
go get -u golang.org/x/tools/cmd/benchcmp && \
benchcmp master.txt feature.txt;
fi
fi


@@ -7,6 +7,117 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.10.0] - 2020-07-06
### Changed
- All tasks now require a timeout or deadline. By default, the timeout is set to 30 mins.
- Tasks that exceed their deadline are automatically retried.
- The encoding schema for task messages has changed. Please install the latest CLI and run the `migrate` command if
you have tasks enqueued with a previous version of asynq.
- The API of `(*Client).Enqueue`, `(*Client).EnqueueIn`, and `(*Client).EnqueueAt` has changed to return a `*Result`.
- The API of `ErrorHandler` has changed. It now takes a context as the first argument, and `retried` and `maxRetry` were removed from the argument list.
Use `GetRetryCount` and/or `GetMaxRetry` to read the retry counts from the context (see the sketch below).
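As a concrete reading of the entry above, here is a minimal sketch of an error handler against the new context-first API. It assumes the v0.10.0 signature is `HandleError(ctx context.Context, task *asynq.Task, err error)` and that the `ErrorHandlerFunc` adapter still exists with that shape; check the godoc before relying on it.

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// reportError reads the retry metadata from the context instead of taking
// retried/maxRetry as arguments, as described in the changelog entry above.
func reportError(ctx context.Context, task *asynq.Task, err error) {
	retried, _ := asynq.GetRetryCount(ctx)
	maxRetry, _ := asynq.GetMaxRetry(ctx)
	if retried >= maxRetry {
		log.Printf("retry exhausted for task %q: %v", task.Type, err)
		return
	}
	log.Printf("task %q failed (retry %d/%d): %v", task.Type, retried, maxRetry, err)
}

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, asynq.Config{
		Concurrency:  10,
		ErrorHandler: asynq.ErrorHandlerFunc(reportError),
	})
	_ = srv // start it with srv.Run(mux) as shown in the README changes below
}
```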
## [0.9.4] - 2020-06-13
### Fixed
- Fixed an issue where the same task could be processed by more than one worker (https://github.com/hibiken/asynq/issues/90).
## [0.9.3] - 2020-06-12
### Fixed
- Fixes the JSON number overflow issue (https://github.com/hibiken/asynq/issues/166).
## [0.9.2] - 2020-06-08
### Added
- The `pause` and `unpause` commands were added to the CLI. See the CLI's README for details.
## [0.9.1] - 2020-05-29
### Added
- `GetTaskID`, `GetRetryCount`, and `GetMaxRetry` functions were added to extract task metadata from context.
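A short sketch of how these helpers might be used inside a task handler; it assumes each helper returns the value along with an `ok` boolean, which should be confirmed against the godoc.

```go
package tasks

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)

// HandleWithMetadata logs which attempt this is before doing the real work.
func HandleWithMetadata(ctx context.Context, t *asynq.Task) error {
	id, _ := asynq.GetTaskID(ctx)
	retried, _ := asynq.GetRetryCount(ctx)
	maxRetry, _ := asynq.GetMaxRetry(ctx)
	fmt.Printf("processing task %s (retry %d of %d)\n", id, retried, maxRetry)
	// ... actual processing logic ...
	return nil
}
```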
## [0.9.0] - 2020-05-16
### Changed
- `Logger` interface has changed. Please see the godoc for the new interface.
### Added
- `LogLevel` type is added. Server's log level can be specified through `LogLevel` field in `Config`.
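For illustration, a sketch of setting the new log level on a server. The `DebugLevel` constant is an assumption based on the level names used elsewhere in this diff (the test helpers below use `FatalLevel`); verify the exact constants in the godoc.

```go
package main

import "github.com/hibiken/asynq"

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, asynq.Config{
		Concurrency: 10,
		LogLevel:    asynq.DebugLevel, // assumed constant name; FatalLevel appears in the test helpers
	})
	_ = srv
}
```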
## [0.8.3] - 2020-05-08
### Added
- `Close` method is added to `Client`.
## [0.8.2] - 2020-05-03
### Fixed
- [Fixed cancelfunc leak](https://github.com/hibiken/asynq/pull/145)
## [0.8.1] - 2020-04-27
### Added
- `ParseRedisURI` helper function is added to create a `RedisConnOpt` from a URI string.
- `SetDefaultOptions` method is added to `Client`.
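A small sketch combining both additions, using one of the URI formats documented in the `ParseRedisURI` doc comment further down in this diff; the task type string is just a placeholder.

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	// Build a RedisConnOpt from a URI instead of filling in the struct by hand.
	r, err := asynq.ParseRedisURI("redis://:mypassword@localhost:6379/2")
	if err != nil {
		log.Fatalf("could not parse redis uri: %v", err)
	}
	c := asynq.NewClient(r)
	defer c.Close()

	// Register defaults once; they apply to every task of this type,
	// unless overridden by options passed at enqueue time.
	c.SetDefaultOptions("email:deliver", asynq.MaxRetry(10), asynq.Timeout(time.Minute))
}
```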
## [0.8.0] - 2020-04-19
### Changed
- `Background` type is renamed to `Server`.
- To upgrade from the previous version, update `NewBackground` to `NewServer` and pass `Config` by value.
- CLI is renamed to `asynq`.
- To upgrade the CLI to the latest version, run `go get -u github.com/hibiken/asynq/tools/asynq`
- The `ps` command in the CLI is renamed to `servers`
- `Concurrency` defaults to the number of CPUs when unset or set to a negative value.
### Added
- `ShutdownTimeout` field is added to `Config` to specify the timeout duration used during graceful shutdown.
- New `Server` type exposes `Start`, `Stop`, and `Quiet` as well as `Run`.
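A before/after sketch of the rename described above, assuming the handler registration itself is unchanged:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	r := asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}
	mux := asynq.NewServeMux()
	// ... register handlers on mux ...

	// Before v0.8.0:
	//   bg := asynq.NewBackground(r, &asynq.Config{Concurrency: 10})
	//   bg.Run(mux)

	// From v0.8.0 on: Config is passed by value and Run returns an error.
	srv := asynq.NewServer(r, asynq.Config{Concurrency: 10})
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```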
## [0.7.1] - 2020-04-05
### Fixed
- Fixed signal handling for Windows.
## [0.7.0] - 2020-03-22
### Changed
- Support Go v1.13+; dropped support for Go v1.12.
### Added
- `Unique` option was added to allow a client to enqueue a task only if it is unique within a certain time period (see the sketch below).
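A sketch of the unique option checking for the duplicate error; it uses the v0.10.0 `Enqueue` signature (returning a `*Result`) and the `ErrDuplicateTask` value defined in client.go later in this diff.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	c := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer c.Close()

	t := asynq.NewTask("email:deliver", map[string]interface{}{"user_id": 42})

	// Only one task with this type, payload, and queue may exist within the TTL.
	res, err := c.Enqueue(t, asynq.Unique(time.Hour))
	if err == asynq.ErrDuplicateTask {
		log.Println("skipped: an identical task is already enqueued")
		return
	}
	if err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	fmt.Printf("enqueued: %+v\n", res)
}
```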
## [0.6.2] - 2020-03-15
### Added
- `Use` method was added to `ServeMux` to apply middlewares to all handlers.
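A sketch of a logging middleware attached via `Use`. It assumes the middleware shape is `func(asynq.Handler) asynq.Handler`, which should be verified against the godoc.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// loggingMiddleware wraps a handler and times each task.
func loggingMiddleware(h asynq.Handler) asynq.Handler {
	return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
		start := time.Now()
		err := h.ProcessTask(ctx, t)
		log.Printf("task %q finished in %v (err=%v)", t.Type, time.Since(start), err)
		return err
	})
}

func main() {
	mux := asynq.NewServeMux()
	mux.Use(loggingMiddleware) // applies to every handler registered on this mux
	// mux.HandleFunc(...) for each task type, then pass mux to the server.
}
```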
## [0.6.1] - 2020-03-12
### Added
- `Client` can optionally schedule a task with `asynq.Deadline(time)` to specify a deadline for the task's context. Default is no deadline.
- `Logger` option was added to config, which allows the user to specify the logger used by the background instance.
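For the deadline option, a short sketch against the current (v0.10.0) client API, which now also returns a `*Result`; the task type and payload are placeholders.

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	c := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer c.Close()

	t := asynq.NewTask("report:generate", map[string]interface{}{"month": "2020-07"})

	// The task's context is cancelled if processing runs past this point in time.
	deadline := time.Now().Add(2 * time.Hour)
	if _, err := c.Enqueue(t, asynq.Deadline(deadline)); err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
}
```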
## [0.6.0] - 2020-03-01
### Added

README.md

@@ -7,20 +7,43 @@
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)
Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
## Overview
**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before v1.0.0 release.
Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and is designed to have a low barrier to entry, so it integrates easily into your web stack.
![Gif](/docs/assets/demo.gif)
High-level overview of how Asynq works:
## Installation
- Client puts task on a queue
- Server pulls task off queues and starts a worker goroutine for each task
- Tasks are processed concurrently by multiple workers
To install `asynq` library, run the following command:
Task queues are used as a mechanism to distribute work across multiple machines.
A system can consist of multiple worker servers and brokers, allowing for high availability and horizontal scaling.
```sh
go get -u github.com/hibiken/asynq
```
![Task Queue Diagram](/docs/assets/overview.png)
## Stability and Compatibility
**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users (Feedback on APIs is appreciated!). The public API could change without a major version update before v1.0.0 release.
**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
## Features
- Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
- Scheduling of tasks
- Durability since tasks are written to Redis
- [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
- Automatic recovery of tasks in the event of a worker crash
- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
- Low latency to add a task since writes are fast in Redis
- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Quickstart
@@ -30,64 +53,180 @@ First, make sure you are running a Redis server locally.
$ redis-server
```
To create and schedule tasks, use `Client` and provide a task and when to enqueue the task.
Next, write a package that encapsulates task creation and task handling.
```go
func main() {
r := &asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
package tasks
import (
"context"
"fmt"
"github.com/hibiken/asynq"
)
// A list of task types.
const (
EmailDelivery = "email:deliver"
ImageProcessing = "image:process"
)
//----------------------------------------------
// Write a function NewXXXTask to create a task.
// A task consists of a type and a payload.
//----------------------------------------------
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
return asynq.NewTask(EmailDelivery, payload)
}
func NewImageProcessingTask(src, dst string) *asynq.Task {
payload := map[string]interface{}{"src": src, "dst": dst}
return asynq.NewTask(ImageProcessing, payload)
}
//---------------------------------------------------------------
// Write a function HandleXXXTask to handle the input task.
// Note that it satisfies the asynq.HandlerFunc interface.
//
// Handler doesn't need to be a function. You can define a type
// that satisfies asynq.Handler interface. See examples below.
//---------------------------------------------------------------
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
userID, err := t.Payload.GetInt("user_id")
if err != nil {
return err
}
tmplID, err := t.Payload.GetString("template_id")
if err != nil {
return err
}
fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
// Email delivery logic ...
return nil
}
client := asynq.NewClient(r)
// ImageProcessor implements asynq.Handler interface.
type ImageProcessor struct {
// ... fields for struct
}
// Create a task with task type and payload
t1 := asynq.NewTask("email:signup", map[string]interface{}{"user_id": 42})
func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
src, err := t.Payload.GetString("src")
if err != nil {
return err
}
dst, err := t.Payload.GetString("dst")
if err != nil {
return err
}
fmt.Printf("Process image: src = %s, dst = %s\n", src, dst)
// Image processing logic ...
return nil
}
t2 := asynq.NewTask("email:reminder", map[string]interface{}{"user_id": 42})
// Process immediately
err := client.Enqueue(t1)
// Process 24 hrs later
err = client.EnqueueIn(24*time.Hour, t2)
// Process at specified time.
target := time.Date(2020, time.March, 6, 10, 0, 0, 0, time.UTC)
err = client.EnqueueAt(target, t2)
// Pass options to specify processing behavior for a given task.
//
// MaxRetry specifies the maximum number of times this task will be retried (Default is 25).
// Queue specifies which queue to enqueue this task to (Default is "default").
// Timeout specifies the the timeout for the task's context (Default is no timeout).
err = client.Enqueue(t1, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute))
func NewImageProcessor() *ImageProcessor {
// ... return an instance
}
```
To start the background workers, use `Background` and provide your `Handler` to process the tasks.
`Handler` is an interface with one method `ProcessTask` with the following signature.
In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
```go
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(context.Context, *asynq.Task) error
package main
import (
"fmt"
"log"
"time"
"github.com/hibiken/asynq"
"your/app/package/tasks"
)
const redisAddr = "127.0.0.1:6379"
func main() {
r := asynq.RedisClientOpt{Addr: redisAddr}
c := asynq.NewClient(r)
defer c.Close()
// ------------------------------------------------------
// Example 1: Enqueue task to be processed immediately.
// Use (*Client).Enqueue method.
// ------------------------------------------------------
t := tasks.NewEmailDeliveryTask(42, "some:template:id")
res, err := c.Enqueue(t)
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
fmt.Printf("Enqueued Result: %+v\n", res)
// ------------------------------------------------------------
// Example 2: Schedule task to be processed in the future.
// Use (*Client).EnqueueIn or (*Client).EnqueueAt.
// ------------------------------------------------------------
t = tasks.NewEmailDeliveryTask(42, "other:template:id")
res, err = c.EnqueueIn(24*time.Hour, t)
if err != nil {
log.Fatal("could not schedule task: %v", err)
}
fmt.Printf("Enqueued Result: %+v\n", res)
// ----------------------------------------------------------------------------
// Example 3: Set options to tune task processing behavior.
// Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
// ----------------------------------------------------------------------------
c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
res, err = c.Enqueue(t)
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
fmt.Printf("Enqueued Result: %+v\n", res)
// ---------------------------------------------------------------------------
// Example 4: Pass options to tune task processing behavior at enqueue time.
// Options passed at enqueue time override default ones, if any.
// ---------------------------------------------------------------------------
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
fmt.Printf("Enqueued Result: %+v\n", res)
}
```
You can optionally use `ServeMux` to create a handler, just as you would with `"net/http"` Handler.
Next, create a worker server to process these tasks in the background.
To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
```go
func main() {
r := &asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
}
package main
bg := asynq.NewBackground(r, &asynq.Config{
import (
"log"
"github.com/hibiken/asynq"
"your/app/package/tasks"
)
const redisAddr = "127.0.0.1:6379"
func main() {
r := asynq.RedisClientOpt{Addr: redisAddr}
srv := asynq.NewServer(r, asynq.Config{
// Specify how many concurrent workers to use
Concurrency: 10,
// Optionally specify multiple queues with different priority.
@@ -99,48 +238,52 @@ func main() {
// See the godoc for other configuration options
})
// mux maps a type to a handler
mux := asynq.NewServeMux()
mux.HandleFunc("email:signup", signupEmailHandler)
mux.HandleFunc("email:reminder", reminderEmailHandler)
mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
mux.Handle(tasks.ImageProcessing, tasks.NewImageProcessor())
// ...register other handlers...
bg.Run(mux)
}
// function with the same signature as the ProcessTask method for the Handler interface.
func signupEmailHandler(ctx context.Context, t *asynq.Task) error {
id, err := t.Payload.GetInt("user_id")
if err != nil {
return err
if err := srv.Run(mux); err != nil {
log.Fatalf("could not run server: %v", err)
}
fmt.Printf("Send welcome email to user %d\n", id)
// ...your email sending logic...
return nil
}
```
For a more detailed walk-through of the library, see our [Getting Started Guide](https://github.com/hibiken/asynq/wiki/Getting-Started).
To learn more about `asynq` features and APIs, see our [Wiki pages](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
To learn more about `asynq` features and APIs, see our [Wiki](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
## Command Line Tool
Asynq ships with a command line tool to inspect the state of queues and tasks.
Here's an example of running the `stats` command.
![Gif](/docs/assets/demo.gif)
For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
## Installation
To install `asynq` library, run the following command:
```sh
go get -u github.com/hibiken/asynq
```
To install the CLI tool, run the following command:
```sh
go get -u github.com/hibiken/asynq/tools/asynq
```
## Requirements
| Dependency | Version |
| -------------------------- | ------- |
| [Redis](https://redis.io/) | v2.8+ |
| [Go](https://golang.org/) | v1.12+ |
## Command Line Tool
Asynq ships with a command line tool to inspect the state of queues and tasks.
To install, run the following command:
```sh
go get -u github.com/hibiken/asynq/tools/asynqmon
```
For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
| [Go](https://golang.org/) | v1.13+ |
## Contributing
@@ -151,7 +294,7 @@ Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
## License


@@ -7,6 +7,9 @@ package asynq
import (
"crypto/tls"
"fmt"
"net/url"
"strconv"
"strings"
"github.com/go-redis/redis/v7"
)
@@ -94,6 +97,79 @@ type RedisFailoverClientOpt struct {
TLSConfig *tls.Config
}
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed.
//
// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
// Supported formats are:
// redis://[:password@]host[:port][/dbnumber]
// redis-socket://[:password@]path[?db=dbnumber]
redis-sentinel://[:password@]host1[:port][,host2[:port]][,hostN[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
u, err := url.Parse(uri)
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
}
switch u.Scheme {
case "redis":
return parseRedisURI(u)
case "redis-socket":
return parseRedisSocketURI(u)
case "redis-sentinel":
return parseRedisSentinelURI(u)
default:
return nil, fmt.Errorf("asynq: unsupported uri scheme: %q", u.Scheme)
}
}
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
var db int
var err error
if len(u.Path) > 0 {
xs := strings.Split(strings.Trim(u.Path, "/"), "/")
db, err = strconv.Atoi(xs[0])
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: database number should be the first segment of the path")
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Addr: u.Host, DB: db, Password: password}, nil
}
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
const errPrefix = "asynq: could not parse redis socket uri"
if len(u.Path) == 0 {
return nil, fmt.Errorf("%s: path does not exist", errPrefix)
}
q := u.Query()
var db int
var err error
if n := q.Get("db"); n != "" {
db, err = strconv.Atoi(n)
if err != nil {
return nil, fmt.Errorf("%s: query param `db` should be a number", errPrefix)
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Network: "unix", Addr: u.Path, DB: db, Password: password}, nil
}
func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
addrs := strings.Split(u.Host, ",")
master := u.Query().Get("master")
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, Password: password}, nil
}
// createRedisClient returns a redis client given a redis connection configuration.
//
// Passing an unexpected type as a RedisConnOpt argument will cause panic.


@@ -5,23 +5,39 @@
package asynq
import (
"flag"
"sort"
"testing"
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/log"
)
// This file defines test helper functions used by
// other test files.
//============================================================================
// This file defines helper functions and variables used in other test files.
//============================================================================
// redis used for package testing.
const (
redisAddr = "localhost:6379"
redisDB = 14
// variables used for package testing.
var (
redisAddr string
redisDB int
testLogLevel = FatalLevel
)
var testLogger *log.Logger
func init() {
flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing")
flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing")
flag.Var(&testLogLevel, "loglevel", "log level to use in testing")
testLogger = log.NewLogger(nil)
testLogger.SetLevel(toInternalLogLevel(testLogLevel))
}
func setup(tb testing.TB) *redis.Client {
tb.Helper()
r := redis.NewClient(&redis.Options{
@@ -40,3 +56,106 @@ var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
})
return out
})
func TestParseRedisURI(t *testing.T) {
tests := []struct {
uri string
want RedisConnOpt
}{
{
"redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"},
},
{
"redis://localhost:6379/3",
RedisClientOpt{Addr: "localhost:6379", DB: 3},
},
{
"redis://:mypassword@localhost:6379",
RedisClientOpt{Addr: "localhost:6379", Password: "mypassword"},
},
{
"redis://:mypassword@127.0.0.1:6379/11",
RedisClientOpt{Addr: "127.0.0.1:6379", Password: "mypassword", DB: 11},
},
{
"redis-socket:///var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock"},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword"},
},
{
"redis-socket:///var/run/redis/redis.sock?db=7",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", DB: 7},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock?db=12",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword", DB: 12},
},
{
"redis-sentinel://localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
},
},
{
"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
Password: "mypassword",
},
},
}
for _, tc := range tests {
got, err := ParseRedisURI(tc.uri)
if err != nil {
t.Errorf("ParseRedisURI(%q) returned an error: %v", tc.uri, err)
continue
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
}
}
}
func TestParseRedisURIErrors(t *testing.T) {
tests := []struct {
desc string
uri string
}{
{
"unsupported scheme",
"rdb://localhost:6379",
},
{
"missing scheme",
"localhost:6379",
},
{
"multiple db numbers",
"redis://localhost:6379/1,2,3",
},
{
"missing path for socket connection",
"redis-socket://?db=one",
},
{
"non integer for db numbers for socket",
"redis-socket:///some/path/to/redis?db=one",
},
}
for _, tc := range tests {
_, err := ParseRedisURI(tc.uri)
if err == nil {
t.Errorf("%s: ParseRedisURI(%q) succeeded for malformed input, want error",
tc.desc, tc.uri)
}
}
}


@@ -1,275 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"math"
"math/rand"
"os"
"os/signal"
"sync"
"syscall"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
// Background is responsible for managing the background-task processing.
//
// Background manages task queues to process tasks.
// If the processing of a task is unsuccessful, background will
// schedule it for a retry until either the task gets processed successfully
// or it exhausts its max retry count.
//
// Once a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Background struct {
mu sync.Mutex
running bool
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
rdb *rdb.RDB
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
}
// Config specifies the background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewBackground will overwrite the value to one.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the background will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority are processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
}
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as a ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is a ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
// NewBackground returns a new Background given a redis connection option
// and background processing configuration.
func NewBackground(r RedisConnOpt, cfg *Config) *Background {
n := cfg.Concurrency
if n < 1 {
n = 1
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
pid := os.Getpid()
rdb := rdb.NewRDB(createRedisClient(r))
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest)
cancels := base.NewCancelations()
syncer := newSyncer(syncCh, 5*time.Second)
heartbeater := newHeartbeater(rdb, ps, 5*time.Second)
scheduler := newScheduler(rdb, 5*time.Second, queues)
processor := newProcessor(rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(rdb, cancels)
return &Background{
rdb: rdb,
ps: ps,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all pending workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
logger.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
logger.info("Starting processing")
bg.start(handler)
defer bg.stop()
logger.info("Send signal TSTP to stop processing new tasks")
logger.info("Send signal TERM or INT to terminate the process")
// Wait for a signal to terminate.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
for {
sig := <-sigs
if sig == syscall.SIGTSTP {
bg.processor.stop()
bg.ps.SetStatus(base.StatusStopped)
continue
}
break
}
fmt.Println()
logger.info("Starting graceful shutdown")
}
// starts the background-task processing.
func (bg *Background) start(handler Handler) {
bg.mu.Lock()
defer bg.mu.Unlock()
if bg.running {
return
}
bg.running = true
bg.processor.handler = handler
bg.heartbeater.start(&bg.wg)
bg.subscriber.start(&bg.wg)
bg.syncer.start(&bg.wg)
bg.scheduler.start(&bg.wg)
bg.processor.start(&bg.wg)
}
// stops the background-task processing.
func (bg *Background) stop() {
bg.mu.Lock()
defer bg.mu.Unlock()
if !bg.running {
return
}
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
//
// processor -> syncer (via syncCh)
bg.scheduler.terminate()
bg.processor.terminate()
bg.syncer.terminate()
bg.subscriber.terminate()
bg.heartbeater.terminate()
bg.wg.Wait()
bg.rdb.Close()
bg.running = false
logger.info("Bye!")
}


@@ -1,128 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/goleak"
)
func TestBackground(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
client := NewClient(r)
bg := NewBackground(r, &Config{
Concurrency: 10,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}


@@ -7,7 +7,6 @@ package asynq
import (
"context"
"fmt"
"math/rand"
"sync"
"testing"
"time"
@@ -24,16 +23,17 @@ func BenchmarkEndToEndSimple(b *testing.B) {
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
srv := NewServer(redis, Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
},
LogLevel: testLogLevel,
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
@@ -46,11 +46,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
}
b.StartTimer() // end setup
bg.start(HandlerFunc(handler))
srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
bg.stop()
srv.Stop()
b.StartTimer() // end teardown
}
}
@@ -60,29 +60,29 @@ func BenchmarkEndToEnd(b *testing.B) {
const count = 100000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
rand.Seed(time.Now().UnixNano())
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
srv := NewServer(redis, Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
},
LogLevel: testLogLevel,
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
@@ -90,8 +90,16 @@ func BenchmarkEndToEnd(b *testing.B) {
var wg sync.WaitGroup
wg.Add(count * 2)
handler := func(ctx context.Context, t *Task) error {
// randomly fail 1% of tasks
if rand.Intn(100) == 1 {
n, err := t.Payload.GetInt("data")
if err != nil {
b.Logf("internal error: %v", err)
}
retried, ok := GetRetryCount(ctx)
if !ok {
b.Logf("internal error: %v", err)
}
// Fail 1% of tasks for the first attempt.
if retried == 0 && n%100 == 0 {
return fmt.Errorf(":(")
}
wg.Done()
@@ -99,11 +107,11 @@ func BenchmarkEndToEnd(b *testing.B) {
}
b.StartTimer() // end setup
bg.start(HandlerFunc(handler))
srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
bg.stop()
srv.Stop()
b.StartTimer() // end teardown
}
}
@@ -124,30 +132,31 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
srv := NewServer(redis, Config{
Concurrency: 10,
Queues: map[string]int{
"high": 6,
"default": 3,
"low": 1,
},
LogLevel: testLogLevel,
})
// Create a bunch of tasks
for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil {
if _, err := client.Enqueue(t, Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil {
if _, err := client.Enqueue(t, Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
@@ -160,11 +169,70 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
}
b.StartTimer() // end setup
bg.start(HandlerFunc(handler))
srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
bg.stop()
srv.Stop()
b.StartTimer() // end teardown
}
}
// E2E benchmark to check client enqueue operation performs correctly,
// while server is busy processing tasks.
func BenchmarkClientWhileServerRunning(b *testing.B) {
const count = 10000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
},
LogLevel: testLogLevel,
})
// Enqueue 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if _, err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
// Schedule 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
handler := func(ctx context.Context, t *Task) error {
return nil
}
srv.Start(HandlerFunc(handler))
b.StartTimer() // end setup
b.Log("Starting enqueueing")
enqueued := 0
for enqueued < 100000 {
t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
if _, err := client.Enqueue(t); err != nil {
b.Logf("could not enqueue task %d: %v", enqueued, err)
continue
}
enqueued++
}
b.Logf("Finished enqueueing %d tasks", enqueued)
b.StopTimer() // begin teardown
srv.Stop()
b.StartTimer() // end teardown
}
}

client.go

@@ -5,12 +5,16 @@
package asynq
import (
"errors"
"fmt"
"sort"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
)
// A Client is responsible for scheduling tasks.
@@ -20,13 +24,18 @@ import (
//
// Clients are safe for concurrent use by multiple goroutines.
type Client struct {
rdb *rdb.RDB
mu sync.Mutex
opts map[string][]Option
rdb *rdb.RDB
}
// NewClient returns a new Client given a redis connection option.
func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r))
return &Client{rdb}
return &Client{
opts: make(map[string][]Option),
rdb: rdb,
}
}
// Option specifies the task processing behavior.
@@ -34,9 +43,11 @@ type Option interface{}
// Internal option representations.
type (
retryOption int
queueOption string
timeoutOption time.Duration
retryOption int
queueOption string
timeoutOption time.Duration
deadlineOption time.Time
uniqueOption time.Duration
)
// MaxRetry returns an option to specify the max number of times
@@ -58,23 +69,59 @@ func Queue(name string) Option {
}
// Timeout returns an option to specify how long a task may run.
// If the timeout elapses before the Handler returns, then the task
// will be retried.
//
// Zero duration means no limit.
//
// If there's a conflicting Deadline option, whichever comes earliest
// will be used.
func Timeout(d time.Duration) Option {
return timeoutOption(d)
}
// Deadline returns an option to specify the deadline for the given task.
// If it reaches the deadline before the Handler returns, then the task
// will be retried.
//
// If there's a conflicting Timeout option, whichever comes earliest
// will be used.
func Deadline(t time.Time) Option {
return deadlineOption(t)
}
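// Usage sketch (assuming the conflict rule documented above): with both
// options set, whichever cutoff comes first wins once a worker starts the task.
//
//	res, _ := client.Enqueue(task,
//		asynq.Timeout(30*time.Second),                 // cutoff: start + 30s
//		asynq.Deadline(time.Now().Add(5*time.Minute))) // cutoff: a fixed point in time
//	// The effective limit is min(start+Timeout, Deadline); res reports both values.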
// Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired, another task with the same uniqueness may be enqueued.
// ErrDuplicateTask error is returned when enqueueing a duplicate task.
//
// Uniqueness of a task is based on the following properties:
// - Task Type
// - Task Payload
// - Queue Name
func Unique(ttl time.Duration) Option {
return uniqueOption(ttl)
}
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
var ErrDuplicateTask = errors.New("task already exists")
type option struct {
retry int
queue string
timeout time.Duration
retry int
queue string
timeout time.Duration
deadline time.Time
uniqueTTL time.Duration
}
func composeOptions(opts ...Option) option {
res := option{
retry: defaultMaxRetry,
queue: base.DefaultQueueName,
timeout: 0,
retry: defaultMaxRetry,
queue: base.DefaultQueueName,
timeout: 0, // do not set to defaultTimeout here
deadline: time.Time{},
}
for _, opt := range opts {
switch opt := opt.(type) {
@@ -84,6 +131,10 @@ func composeOptions(opts ...Option) option {
res.queue = string(opt)
case timeoutOption:
res.timeout = time.Duration(opt)
case deadlineOption:
res.deadline = time.Time(opt)
case uniqueOption:
res.uniqueTTL = time.Duration(opt)
default:
// ignore unexpected option
}
@@ -91,28 +142,102 @@ func composeOptions(opts ...Option) option {
return res
}
// uniqueKey computes the redis key used for the given task.
// It returns an empty string if ttl is zero.
func uniqueKey(t *Task, ttl time.Duration, qname string) string {
if ttl == 0 {
return ""
}
return fmt.Sprintf("%s:%s:%s", t.Type, serializePayload(t.Payload.data), qname)
}
func serializePayload(payload map[string]interface{}) string {
if payload == nil {
return "nil"
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
}
const (
// Max retry count by default
// Default max retry count used if nothing is specified.
defaultMaxRetry = 25
// Default timeout used if both timeout and deadline are not specified.
defaultTimeout = 30 * time.Minute
)
// Value zero indicates no timeout and no deadline.
var (
noTimeout time.Duration = 0
noDeadline time.Time = time.Unix(0, 0)
)
// SetDefaultOptions sets options to be used for a given task type.
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
//
// Default options can be overridden by options passed at enqueue time.
func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
c.mu.Lock()
defer c.mu.Unlock()
c.opts[taskType] = opts
}
// A Result holds enqueued task's metadata.
type Result struct {
// ID is a unique identifier for the task.
ID string
// Retry is the maximum number of retries for the task.
Retry int
// Queue is a name of the queue the task is enqueued to.
Queue string
// Timeout is the timeout value for the task.
// Counting for timeout starts when a worker starts processing the task.
// If task processing doesn't complete within the timeout, the task will be retried.
// The value zero means no timeout.
//
// If deadline is set, min(now+timeout, deadline) is used, where now is the time when
// a worker starts processing the task.
Timeout time.Duration
// Deadline is the deadline value for the task.
// If task processing doesn't complete before the deadline, the task will be retried.
// The value time.Unix(0, 0) means no deadline.
//
// If timeout is set, min(now+timeout, deadline) is used, where now is the time when
// a worker starts processing the task.
Deadline time.Time
}
// EnqueueAt schedules the given task to be enqueued at the specified time.
//
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values, the last one overrides the others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
opt := composeOptions(opts...)
msg := &base.TaskMessage{
ID: xid.New(),
Type: task.Type,
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Timeout: opt.timeout.String(),
}
return c.enqueue(msg, t)
// By default, max retry is set to 25 and timeout is set to 30 minutes.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(t, task, opts...)
}
// Enqueue enqueues the given task to be processed immediately.
@@ -121,8 +246,9 @@ func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values, the last one overrides the others.
func (c *Client) Enqueue(task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now(), task, opts...)
// By default, max retry is set to 25 and timeout is set to 30 minutes.
func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now(), task, opts...)
}
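As a small sketch of reading the returned Result (continuing the fragment above; the "image:resize" task type is illustrative):

	res, err := client.Enqueue(asynq.NewTask("image:resize", map[string]interface{}{"src": "img.png"}))
	if err != nil {
		log.Fatal(err)
	}
	// With no options, Retry is 25, Timeout is 30 minutes, and
	// Deadline is time.Unix(0, 0) (i.e. no deadline).
	log.Printf("enqueued: id=%s queue=%s retry=%d timeout=%v", res.ID, res.Queue, res.Retry, res.Timeout)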
// EnqueueIn schedules the given task to be enqueued after the specified delay.
@@ -131,13 +257,78 @@ func (c *Client) Enqueue(task *Task, opts ...Option) error {
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now().Add(d), task, opts...)
// By default, max retry is set to 25 and timeout is set to 30 minutes.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now().Add(d), task, opts...)
}
func (c *Client) enqueue(msg *base.TaskMessage, t time.Time) error {
if time.Now().After(t) {
return c.rdb.Enqueue(msg)
// Close closes the connection with redis server.
func (c *Client) Close() error {
return c.rdb.Close()
}
func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
c.mu.Lock()
defer c.mu.Unlock()
if defaults, ok := c.opts[task.Type]; ok {
opts = append(defaults, opts...)
}
opt := composeOptions(opts...)
deadline := noDeadline
if !opt.deadline.IsZero() {
deadline = opt.deadline
}
timeout := noTimeout
if opt.timeout != 0 {
timeout = opt.timeout
}
if deadline.Equal(noDeadline) && timeout == noTimeout {
// If neither deadline nor timeout are set, use default timeout.
timeout = defaultTimeout
}
msg := &base.TaskMessage{
ID: uuid.New(),
Type: task.Type,
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Deadline: deadline.Unix(),
Timeout: int64(timeout.Seconds()),
UniqueKey: uniqueKey(task, opt.uniqueTTL, opt.queue),
}
var err error
now := time.Now()
if t.Before(now) || t.Equal(now) {
err = c.enqueue(msg, opt.uniqueTTL)
} else {
err = c.schedule(msg, t, opt.uniqueTTL)
}
switch {
case err == rdb.ErrDuplicateTask:
return nil, fmt.Errorf("%w", ErrDuplicateTask)
case err != nil:
return nil, err
}
return &Result{
ID: msg.ID.String(),
Queue: msg.Queue,
Retry: msg.Retry,
Timeout: timeout,
Deadline: deadline,
}, nil
}
func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
return c.rdb.EnqueueUnique(msg, uniqueTTL)
}
return c.rdb.Enqueue(msg)
}
func (c *Client) schedule(msg *base.TaskMessage, t time.Time, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
ttl := t.Add(uniqueTTL).Sub(time.Now())
return c.rdb.ScheduleUnique(msg, t, ttl)
}
return c.rdb.Schedule(msg, t)
}
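To make the defaulting above concrete: with neither a Timeout nor a Deadline option, the stored message carries Timeout = 1800 (seconds, i.e. the 30-minute default) and the no-deadline sentinel time.Unix(0, 0).Unix(); with only Timeout(20 * time.Second), Timeout = 20 and no deadline; with only a Deadline, Timeout = 0 (no timeout) and Deadline set to that time's Unix seconds; with both, both values are recorded and the worker later applies min(now+timeout, deadline) when it starts processing.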

View File

@@ -5,10 +5,12 @@
package asynq
import (
"errors"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
@@ -32,6 +34,7 @@ func TestClientEnqueueAt(t *testing.T) {
task *Task
processAt time.Time
opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry
}{
@@ -40,33 +43,47 @@ func TestClientEnqueueAt(t *testing.T) {
task: task,
processAt: now,
opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Schedule task to be processed in the future",
task: task,
processAt: oneHourLater,
opts: []Option{},
desc: "Schedule task to be processed in the future",
task: task,
processAt: oneHourLater,
opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
{
Msg: &base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
Score: float64(oneHourLater.Unix()),
},
@@ -77,11 +94,15 @@ func TestClientEnqueueAt(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...)
gotRes, err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueueAt(processAt, task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
@@ -110,6 +131,7 @@ func TestClientEnqueue(t *testing.T) {
desc string
task *Task
opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage
}{
{
@@ -118,14 +140,21 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{
MaxRetry(3),
},
wantRes: &Result{
Queue: "default",
Retry: 3,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 3,
Queue: "default",
Timeout: time.Duration(0).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 3,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
@@ -136,14 +165,21 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{
MaxRetry(-2),
},
wantRes: &Result{
Queue: "default",
Retry: 0,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero
Queue: "default",
Timeout: time.Duration(0).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
@@ -155,14 +191,21 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(2),
MaxRetry(10),
},
wantRes: &Result{
Queue: "default",
Retry: 10,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 10, // Last option takes precedence
Queue: "default",
Timeout: time.Duration(0).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 10, // Last option takes precedence
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
@@ -173,14 +216,21 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{
Queue("custom"),
},
wantRes: &Result{
Queue: "custom",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"custom": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "custom",
Timeout: time.Duration(0).String(),
"custom": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "custom",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
@@ -191,32 +241,97 @@ func TestClientEnqueue(t *testing.T) {
opts: []Option{
Queue("HIGH"),
},
wantRes: &Result{
Queue: "high",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"high": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "high",
Timeout: time.Duration(0).String(),
"high": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "high",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
},
{
desc: "Timeout option sets the timeout duration",
desc: "With timeout option",
task: task,
opts: []Option{
Timeout(20 * time.Second),
},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: (20 * time.Second).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: 20,
Deadline: noDeadline.Unix(),
},
},
},
},
{
desc: "With deadline option",
task: task,
opts: []Option{
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: noTimeout,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(noTimeout.Seconds()),
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
},
},
},
},
{
desc: "With both deadline and timeout options",
task: task,
opts: []Option{
Timeout(20 * time.Second),
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: 20 * time.Second,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: 20,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Unix(),
},
},
},
@@ -226,11 +341,15 @@ func TestClientEnqueue(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.Enqueue(tc.task, tc.opts...)
gotRes, err := client.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued {
got := h.GetEnqueuedMessages(t, r, qname)
@@ -255,23 +374,31 @@ func TestClientEnqueueIn(t *testing.T) {
task *Task
delay time.Duration
opts []Option
wantRes *Result
wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry
}{
{
desc: "schedule a task to be enqueued in one hour",
task: task,
delay: time.Hour,
opts: []Option{},
desc: "schedule a task to be enqueued in one hour",
task: task,
delay: time.Hour,
opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
{
Msg: &base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
Score: float64(time.Now().Add(time.Hour).Unix()),
},
@@ -282,14 +409,21 @@ func TestClientEnqueueIn(t *testing.T) {
task: task,
delay: 0,
opts: []Option{},
wantRes: &Result{
Queue: "default",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
},
@@ -300,11 +434,15 @@ func TestClientEnqueueIn(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueIn(tc.delay, tc.task, tc.opts...)
gotRes, err := client.EnqueueIn(tc.delay, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueueIn(delay, task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
@@ -319,3 +457,313 @@ func TestClientEnqueueIn(t *testing.T) {
}
}
}
func TestClientDefaultOptions(t *testing.T) {
r := setup(t)
tests := []struct {
desc string
defaultOpts []Option // options set at the client level.
opts []Option // options used at enqueue time.
task *Task
wantRes *Result
queue string // queue that the message should go into.
want *base.TaskMessage
}{
{
desc: "With queue routing option",
defaultOpts: []Option{Queue("feed")},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "feed",
Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: defaultMaxRetry,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With multiple options",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "feed",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "feed",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
{
desc: "With overriding options at enqueue time",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil),
wantRes: &Result{
Queue: "critical",
Retry: 5,
Timeout: defaultTimeout,
Deadline: noDeadline,
},
queue: "critical",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "critical",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
gotRes, err := c.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" {
t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff)
}
enqueued := h.GetEnqueuedMessages(t, r, tc.queue)
if len(enqueued) != 1 {
t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
tc.desc, tc.queue, len(enqueued))
continue
}
got := enqueued[0]
if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in enqueued task message; (-want,+got)\n%s",
tc.desc, diff)
}
}
}
func TestUniqueKey(t *testing.T) {
tests := []struct {
desc string
task *Task
ttl time.Duration
qname string
want string
}{
{
"with zero TTL",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
0,
"default",
"",
},
{
"with primitive types",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with unsorted keys",
NewTask("email:send", map[string]interface{}{"b": "hello", "c": true, "a": 123}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with composite types",
NewTask("email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}}),
10 * time.Minute,
"default",
"email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]:default",
},
{
"with complex types",
NewTask("email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour}),
10 * time.Minute,
"default",
"email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC:default",
},
{
"with nil payload",
NewTask("reindex", nil),
10 * time.Minute,
"default",
"reindex:nil:default",
},
}
for _, tc := range tests {
got := uniqueKey(tc.task, tc.ttl, tc.qname)
if got != tc.want {
t.Errorf("%s: uniqueKey(%v, %v, %q) = %q, want %q", tc.desc, tc.task, tc.ttl, tc.qname, got, tc.want)
}
}
}
func TestEnqueueUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
ttl time.Duration
}{
{
NewTask("email", map[string]interface{}{"user_id": 123}),
time.Hour,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.Enqueue(tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
continue
}
// Enqueue the task again. It should fail.
_, err = c.Enqueue(tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
func TestEnqueueInUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
d time.Duration
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Hour,
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
_, err = c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
func TestEnqueueAtUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
at time.Time
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Now().Add(time.Hour),
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
_, err := c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
_, err = c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}

74
context.go Normal file
View File

@@ -0,0 +1,74 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"time"
"github.com/hibiken/asynq/internal/base"
)
// A taskMetadata holds task scoped data to put in context.
type taskMetadata struct {
id string
maxRetry int
retryCount int
}
// ctxKey type is unexported to prevent collisions with context keys defined in
// other packages.
type ctxKey int
// metadataCtxKey is the context key for the task metadata.
// Its value of zero is arbitrary.
const metadataCtxKey ctxKey = 0
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
metadata := taskMetadata{
id: msg.ID.String(),
maxRetry: msg.Retry,
retryCount: msg.Retried,
}
ctx := context.WithValue(context.Background(), metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline)
}
// GetTaskID extracts a task ID from a context, if any.
//
// ID of a task is guaranteed to be unique.
// ID of a task doesn't change if the task is being retried.
func GetTaskID(ctx context.Context) (id string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.id, true
}
// GetRetryCount extracts retry count from a context, if any.
//
// Return value n indicates the number of times the associated task has been
// retried so far.
func GetRetryCount(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.retryCount, true
}
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.maxRetry, true
}
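A sketch of a handler that reads this metadata (a fragment assuming imports of context, log, and github.com/hibiken/asynq; the task type and log messages are illustrative):

	func handleEmailDelivery(ctx context.Context, t *asynq.Task) error {
		id, _ := asynq.GetTaskID(ctx)
		retried, _ := asynq.GetRetryCount(ctx)
		maxRetry, _ := asynq.GetMaxRetry(ctx)
		if retried >= maxRetry {
			// Final attempt for this task; a failure here moves it to the dead queue.
			log.Printf("task %s (%s): last retry", id, t.Type)
		}
		// ... do the work, checking ctx.Done() so that the timeout/deadline set by
		// createContext (and explicit cancelation) are honored.
		return nil
	}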

148
context_test.go Normal file
View File

@@ -0,0 +1,148 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func TestCreateContextWithFutureDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithPastDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(-2 * time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.New(),
Payload: nil,
}
ctx, cancel := createContext(msg, tc.deadline)
defer cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
}
}
func TestGetTaskMetadataFromContext(t *testing.T) {
tests := []struct {
desc string
msg *base.TaskMessage
}{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 10, Retried: 5, Timeout: 1800}},
}
for _, tc := range tests {
ctx, cancel := createContext(tc.msg, time.Now().Add(30*time.Minute))
defer cancel()
id, ok := GetTaskID(ctx)
if !ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == false", tc.desc)
}
if ok && id != tc.msg.ID.String() {
t.Errorf("%s: GetTaskID(ctx) returned id == %q, want %q", tc.desc, id, tc.msg.ID.String())
}
retried, ok := GetRetryCount(ctx)
if !ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == false", tc.desc)
}
if ok && retried != tc.msg.Retried {
t.Errorf("%s: GetRetryCount(ctx) returned n == %d want %d", tc.desc, retried, tc.msg.Retried)
}
maxRetry, ok := GetMaxRetry(ctx)
if !ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == false", tc.desc)
}
if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
}
}
}
func TestGetTaskMetadataFromContextError(t *testing.T) {
tests := []struct {
desc string
ctx context.Context
}{
{"with background context", context.Background()},
}
for _, tc := range tests {
if _, ok := GetTaskID(tc.ctx); ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == true", tc.desc)
}
if _, ok := GetRetryCount(tc.ctx); ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == true", tc.desc)
}
if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
}
}
}

16
doc.go
View File

@@ -14,7 +14,7 @@ specify the options using one of RedisConnOpt types.
DB: 3,
}
The Client is used to register a task to be processed at the specified time.
The Client is used to enqueue a task to be processed at the specified time.
Task is created with two parameters: its type and payload.
@@ -25,20 +25,20 @@ Task is created with two parameters: its type and payload.
map[string]interface{}{"user_id": 42})
// Enqueue the task to be processed immediately.
err := client.Enqueue(t)
res, err := client.Enqueue(t)
// Schedule the task to be processed in one minute.
err = client.EnqueueIn(time.Minute, t)
// Schedule the task to be processed after one minute.
res, err = client.EnqueueIn(time.Minute, t)
The Background is used to run the background task processing with a given
The Server is used to run the background task processing with a given
handler.
bg := asynq.NewBackground(redis, &asynq.Config{
srv := asynq.NewServer(redis, asynq.Config{
Concurrency: 10,
})
bg.Run(handler)
srv.Run(handler)
Handler is an interface with one method ProcessTask which
Handler is an interface type with a method which
takes a task and returns an error. Handler should return nil if
the processing is successful, otherwise return a non-nil error.
If handler panics or returns a non-nil error, the task will be retried in the future.

View File (3 binary image files changed; before/after sizes: 1.5 MiB, 582 KiB, 1.5 MiB)

BIN
docs/assets/overview.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 63 KiB

BIN
docs/assets/task-queue.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

95
example_test.go Normal file
View File

@@ -0,0 +1,95 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq_test
import (
"fmt"
"log"
"os"
"os/signal"
"github.com/hibiken/asynq"
"golang.org/x/sys/unix"
)
func ExampleServer_Run() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
// Run blocks and waits for os signal to terminate the program.
if err := srv.Run(h); err != nil {
log.Fatal(err)
}
}
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal
srv.Stop()
}
func ExampleServer_Quiet() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
// Handle SIGTERM, SIGINT to exit the program.
// Handle SIGTSTP to stop processing new tasks.
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks
continue
}
break
}
srv.Stop()
}
func ExampleParseRedisURI() {
rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
if err != nil {
log.Fatal(err)
}
r, ok := rconn.(asynq.RedisClientOpt)
if !ok {
log.Fatal("unexpected type")
}
fmt.Println(r.Addr)
fmt.Println(r.DB)
// Output:
// localhost:6379
// 10
}

4
go.mod
View File

@@ -5,10 +5,10 @@ go 1.13
require (
github.com/go-redis/redis/v7 v7.2.0
github.com/google/go-cmp v0.4.0
github.com/rs/xid v1.2.1
github.com/google/uuid v1.1.1
github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect
)

22
go.sum
View File

@@ -2,16 +2,15 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.0.0-beta.4 h1:p6z7Pde69EGRWvlC++y8aFcaWegyrKHzOBGo0zUACTQ=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
@@ -20,16 +19,12 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
@@ -39,10 +34,8 @@ go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -60,8 +53,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=

View File

@@ -5,67 +5,161 @@
package asynq
import (
"os"
"sync"
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/log"
)
// heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up.
type heartbeater struct {
rdb *rdb.RDB
ps *base.ProcessState
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{}
// interval between heartbeats.
interval time.Duration
// following fields are initialized at construction time and are immutable.
host string
pid int
serverID string
concurrency int
queues map[string]int
strictPriority bool
// following fields are mutable and should be accessed only by the
// heartbeater goroutine. In other words, confine these variables
// to this goroutine only.
started time.Time
workers map[string]workerStat
// status is shared with other goroutines but is concurrency safe.
status *base.ServerStatus
// channels to receive updates on active workers.
starting <-chan *base.TaskMessage
finished <-chan *base.TaskMessage
}
func newHeartbeater(rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater {
type heartbeaterParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
concurrency int
queues map[string]int
strictPriority bool
status *base.ServerStatus
starting <-chan *base.TaskMessage
finished <-chan *base.TaskMessage
}
func newHeartbeater(params heartbeaterParams) *heartbeater {
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
return &heartbeater{
rdb: rdb,
ps: ps,
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
interval: interval,
interval: params.interval,
host: host,
pid: os.Getpid(),
serverID: uuid.New().String(),
concurrency: params.concurrency,
queues: params.queues,
strictPriority: params.strictPriority,
status: params.status,
workers: make(map[string]workerStat),
starting: params.starting,
finished: params.finished,
}
}
func (h *heartbeater) terminate() {
logger.info("Heartbeater shutting down...")
h.logger.Debug("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop.
h.done <- struct{}{}
}
// A workerStat records the message a worker is working on
// and the time the worker has started processing the message.
type workerStat struct {
started time.Time
msg *base.TaskMessage
}
func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning)
wg.Add(1)
go func() {
defer wg.Done()
h.started = time.Now()
h.beat()
timer := time.NewTimer(h.interval)
for {
select {
case <-h.done:
h.rdb.ClearProcessState(h.ps)
logger.info("Heartbeater done")
h.broker.ClearServerState(h.host, h.pid, h.serverID)
h.logger.Debug("Heartbeater done")
timer.Stop()
return
case <-time.After(h.interval):
case <-timer.C:
h.beat()
timer.Reset(h.interval)
case msg := <-h.starting:
h.workers[msg.ID.String()] = workerStat{time.Now(), msg}
case msg := <-h.finished:
delete(h.workers, msg.ID.String())
}
}
}()
}
func (h *heartbeater) beat() {
info := base.ServerInfo{
Host: h.host,
PID: h.pid,
ServerID: h.serverID,
Concurrency: h.concurrency,
Queues: h.queues,
StrictPriority: h.strictPriority,
Status: h.status.String(),
Started: h.started,
ActiveWorkerCount: len(h.workers),
}
var ws []*base.WorkerInfo
for id, stat := range h.workers {
ws = append(ws, &base.WorkerInfo{
Host: h.host,
PID: h.pid,
ID: id,
Type: stat.msg.Type,
Queue: stat.msg.Queue,
Payload: stat.msg.Payload,
Started: stat.started,
})
}
// Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2)
if err != nil {
logger.error("could not write heartbeat data: %v", err)
if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil {
h.logger.Errorf("could not write server state data: %v", err)
}
}

View File

@@ -14,6 +14,7 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
)
func TestHeartbeater(t *testing.T) {
@@ -31,17 +32,33 @@ func TestHeartbeater(t *testing.T) {
}
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests {
h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
hb := newHeartbeater(rdbClient, state, tc.interval)
status := base.NewServerStatus(base.StatusIdle)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: rdbClient,
interval: tc.interval,
concurrency: tc.concurrency,
queues: tc.queues,
strictPriority: false,
status: status,
starting: make(chan *base.TaskMessage),
finished: make(chan *base.TaskMessage),
})
// Change host and pid fields for testing purpose.
hb.host = tc.host
hb.pid = tc.pid
status.Set(base.StatusRunning)
var wg sync.WaitGroup
hb.start(&wg)
want := &base.ProcessInfo{
want := &base.ServerInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
@@ -53,47 +70,47 @@ func TestHeartbeater(t *testing.T) {
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
ps, err := rdbClient.ListProcesses()
ss, err := rdbClient.ListServers()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
t.Errorf("could not read server info from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
if len(ss) != 1 {
t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
continue
}
// status change
state.SetStatus(base.StatusStopped)
status.Set(base.StatusStopped)
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
want.Status = "stopped"
ps, err = rdbClient.ListProcesses()
ss, err = rdbClient.ListServers()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
if len(ss) != 1 {
t.Errorf("(*RDB).ListServers returned %d server info, want 1", len(ss))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
continue
}
@@ -101,3 +118,35 @@ func TestHeartbeater(t *testing.T) {
hb.terminate()
}
}
func TestHeartbeaterWithRedisDown(t *testing.T) {
// Make sure that heartbeater goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: testBroker,
interval: time.Second,
concurrency: 10,
queues: map[string]int{"default": 1},
strictPriority: false,
status: base.NewServerStatus(base.StatusRunning),
starting: make(chan *base.TaskMessage),
finished: make(chan *base.TaskMessage),
})
testBroker.Sleep()
var wg sync.WaitGroup
hb.start(&wg)
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)
hb.terminate()
}

View File

@@ -13,8 +13,8 @@ import (
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
)
// ZSetEntry is an entry in redis sorted set.
@@ -41,9 +41,9 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
return out
})
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it
// SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slices of server info.
var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host
@@ -57,7 +57,7 @@ var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.Proc
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String()
return out[i].ID < out[j].ID
})
return out
})
@@ -75,11 +75,13 @@ var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
return &base.TaskMessage{
ID: xid.New(),
Type: taskType,
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
ID: uuid.New(),
Type: taskType,
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
}
}
@@ -87,7 +89,7 @@ func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskM
// task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{
ID: xid.New(),
ID: uuid.New(),
Type: taskType,
Queue: qname,
Retry: 25,
@@ -95,6 +97,20 @@ func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qn
}
}
// TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments retry count and sets the error message.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.Retried = t.Retried + 1
t.ErrorMsg = errMsg
return &t
}
// TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
t.ErrorMsg = errMsg
return &t
}
// MustMarshal marshals the given task message and returns a JSON string.
// The calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
@@ -185,6 +201,12 @@ func SeedDeadQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
seedRedisZSet(tb, r, base.DeadQueue, entries)
}
// SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
tb.Helper()
seedRedisZSet(tb, r, base.KeyDeadlines, entries)
}
func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs)
for _, s := range data {
@@ -257,6 +279,12 @@ func GetDeadEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
return getZSetEntries(tb, r, base.DeadQueue)
}
// GetDeadlinesEntries returns all task messages and their scores in the deadlines set.
func GetDeadlinesEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
tb.Helper()
return getZSetEntries(tb, r, base.KeyDeadlines)
}
func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data)

View File

@@ -7,23 +7,28 @@ package base
import (
"context"
"encoding/json"
"fmt"
"strings"
"sync"
"time"
"github.com/rs/xid"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
)
// Version of asynq library and CLI.
const Version = "0.10.0"
// DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default"
// Redis keys
const (
AllProcesses = "asynq:ps" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid>
AllServers = "asynq:servers" // ZSET
serversPrefix = "asynq:servers:" // STRING - asynq:servers:<host>:<pid>:<serverid>
AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid>
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host>:<pid>:<serverid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
@@ -33,6 +38,8 @@ const (
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
KeyDeadlines = "asynq:deadlines" // ZSET
PausedQueues = "asynq:paused" // SET
CancelChannel = "asynq:cancel" // PubSub channel
)
@@ -51,14 +58,14 @@ func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02")
}
// ProcessInfoKey returns a redis key for process info.
func ProcessInfoKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid)
// ServerInfoKey returns a redis key for server info.
func ServerInfoKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid)
}
// WorkersKey returns a redis key for the workers given hostname and pid.
func WorkersKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid)
// WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid)
}
// TaskMessage is the internal representation of a task with additional metadata fields.
@@ -71,7 +78,7 @@ type TaskMessage struct {
Payload map[string]interface{}
// ID is a unique identifier for each task.
ID xid.ID
ID uuid.UUID
// Queue is the name of the queue this message should be enqueued to.
Queue string
@@ -85,156 +92,111 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure.
ErrorMsg string
// Timeout specifies how long a task may run.
// The string value should be compatible with time.Duration.ParseDuration.
// Timeout specifies timeout in seconds.
// If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the dead queue.
//
// Zero means no limit.
Timeout string
// Use zero to indicate no timeout.
Timeout int64
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// If task processing doesn't complete before the deadline, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the dead queue.
//
// Use zero to indicate no deadline.
Deadline int64
// UniqueKey holds the redis key used for uniqueness lock for this task.
//
// Empty string indicates that no uniqueness lock was used.
UniqueKey string
}
// ProcessState holds process level information.
//
// ProcessStates are safe for concurrent use by multiple goroutines.
type ProcessState struct {
mu sync.Mutex // guards all data fields
concurrency int
queues map[string]int
strictPriority bool
pid int
host string
status PStatus
started time.Time
workers map[string]*workerStats
// EncodeMessage marshals the given task message in JSON and returns an encoded string.
func EncodeMessage(msg *TaskMessage) (string, error) {
b, err := json.Marshal(msg)
if err != nil {
return "", err
}
return string(b), nil
}
// PStatus represents status of a process.
type PStatus int
// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
func DecodeMessage(s string) (*TaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, err
}
return &msg, nil
}
// ServerStatus represents status of a server.
// ServerStatus methods are concurrency safe.
type ServerStatus struct {
mu sync.Mutex
val ServerStatusValue
}
// NewServerStatus returns a new status instance given an initial value.
func NewServerStatus(v ServerStatusValue) *ServerStatus {
return &ServerStatus{val: v}
}
type ServerStatusValue int
const (
// StatusIdle indicates process is in idle state.
StatusIdle PStatus = iota
// StatusIdle indicates the server is in idle state.
StatusIdle ServerStatusValue = iota
// StatusRunning indicates process is up and processing tasks.
// StatusRunning indicates the server is up and processing tasks.
StatusRunning
// StatusStopped indicates process is up but not processing new tasks.
// StatusQuiet indicates the server is up but not processing new tasks.
StatusQuiet
// StatusStopped indicates the server has been stopped.
StatusStopped
)
var statuses = []string{
"idle",
"running",
"quiet",
"stopped",
}
func (s PStatus) String() string {
if StatusIdle <= s && s <= StatusStopped {
return statuses[s]
func (s *ServerStatus) String() string {
s.mu.Lock()
defer s.mu.Unlock()
if StatusIdle <= s.val && s.val <= StatusStopped {
return statuses[s.val]
}
return "unknown status"
}
type workerStats struct {
msg *TaskMessage
started time.Time
// Get returns the status value.
func (s *ServerStatus) Get() ServerStatusValue {
s.mu.Lock()
v := s.val
s.mu.Unlock()
return v
}
// NewProcessState returns a new instance of ProcessState.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState {
return &ProcessState{
host: host,
pid: pid,
concurrency: concurrency,
queues: cloneQueueConfig(queues),
strictPriority: strict,
status: StatusIdle,
workers: make(map[string]*workerStats),
}
// Set sets the status value.
func (s *ServerStatus) Set(v ServerStatusValue) {
s.mu.Lock()
s.val = v
s.mu.Unlock()
}
// SetStatus updates the state of process.
func (ps *ProcessState) SetStatus(status PStatus) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.status = status
}
// SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.started = t
}
// AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started}
}
// DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.workers, msg.ID.String())
}
// Get returns current state of process as a ProcessInfo.
func (ps *ProcessState) Get() *ProcessInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
return &ProcessInfo{
Host: ps.host,
PID: ps.pid,
Concurrency: ps.concurrency,
Queues: cloneQueueConfig(ps.queues),
StrictPriority: ps.strictPriority,
Status: ps.status.String(),
Started: ps.started,
ActiveWorkerCount: len(ps.workers),
}
}
// GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
var res []*WorkerInfo
for _, w := range ps.workers {
res = append(res, &WorkerInfo{
Host: ps.host,
PID: ps.pid,
ID: w.msg.ID,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: clonePayload(w.msg.Payload),
Started: w.started,
})
}
return res
}
func cloneQueueConfig(qcfg map[string]int) map[string]int {
res := make(map[string]int)
for qname, n := range qcfg {
res[qname] = n
}
return res
}
func clonePayload(payload map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{})
for k, v := range payload {
res[k] = v
}
return res
}
// ProcessInfo holds information about a running background worker process.
type ProcessInfo struct {
// ServerInfo holds information about a running server.
type ServerInfo struct {
Host string
PID int
ServerID string
Concurrency int
Queues map[string]int
StrictPriority bool
@@ -247,7 +209,7 @@ type ProcessInfo struct {
type WorkerInfo struct {
Host string
PID int
ID xid.ID
ID string
Type string
Queue string
Payload map[string]interface{}
@@ -291,13 +253,24 @@ func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
return fn, ok
}
// GetAll returns all cancel funcs.
func (c *Cancelations) GetAll() []context.CancelFunc {
c.mu.Lock()
defer c.mu.Unlock()
var res []context.CancelFunc
for _, fn := range c.cancelFuncs {
res = append(res, fn)
}
return res
// Broker is a message broker that supports operations to manage task queues.
//
// See rdb.RDB as a reference implementation.
type Broker interface {
Enqueue(msg *TaskMessage) error
EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
Dequeue(qnames ...string) (*TaskMessage, time.Time, error)
Done(msg *TaskMessage) error
Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Kill(msg *TaskMessage, errMsg string) error
CheckAndEnqueue() error
ListDeadlineExceeded(deadline time.Time) ([]*TaskMessage, error)
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
PublishCancelation(id string) error
Close() error
}
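Because the processor and other components can depend on this interface rather than on *rdb.RDB directly, a fake broker can be substituted in tests. A hedged sketch (the helper name and the one-shot back-off are hypothetical):

package example

import (
    "time"

    "github.com/hibiken/asynq/internal/base"
)

// enqueueWithRetry enqueues a message and retries once after a short pause.
// It needs only the Broker interface, so tests can pass in a fake implementation.
func enqueueWithRetry(b base.Broker, msg *base.TaskMessage) error {
    if err := b.Enqueue(msg); err != nil {
        time.Sleep(100 * time.Millisecond) // hypothetical one-shot back-off
        return b.Enqueue(msg)
    }
    return nil
}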

View File

@@ -6,13 +6,13 @@ package base
import (
"context"
"math/rand"
"encoding/json"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/rs/xid"
"github.com/google/uuid"
)
func TestQueueKey(t *testing.T) {
@@ -67,20 +67,22 @@ func TestFailureKey(t *testing.T) {
}
}
func TestProcessInfoKey(t *testing.T) {
func TestServerInfoKey(t *testing.T) {
tests := []struct {
hostname string
pid int
sid string
want string
}{
{"localhost", 9876, "asynq:ps:localhost:9876"},
{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"},
{"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"},
{"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"},
}
for _, tc := range tests {
got := ProcessInfoKey(tc.hostname, tc.pid)
got := ServerInfoKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want {
t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
t.Errorf("ServerInfoKey(%q, %d, %q) = %q, want %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
}
}
}
@@ -89,80 +91,92 @@ func TestWorkersKey(t *testing.T) {
tests := []struct {
hostname string
pid int
sid string
want string
}{
{"localhost", 9876, "asynq:workers:localhost:9876"},
{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"},
{"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"},
{"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"},
}
for _, tc := range tests {
got := WorkersKey(tc.hostname, tc.pid)
got := WorkersKey(tc.hostname, tc.pid, tc.sid)
if got != tc.want {
t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want)
t.Errorf("WorkersKey(%q, %d, %q) = %q, want = %q",
tc.hostname, tc.pid, tc.sid, got, tc.want)
}
}
}
// Test for process state being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestProcessStateConcurrentAccess(t *testing.T) {
ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
var wg sync.WaitGroup
started := time.Now()
msgs := []*TaskMessage{
&TaskMessage{ID: xid.New(), Type: "type1", Payload: map[string]interface{}{"user_id": 42}},
&TaskMessage{ID: xid.New(), Type: "type2"},
&TaskMessage{ID: xid.New(), Type: "type3"},
func TestMessageEncoding(t *testing.T) {
id := uuid.New()
tests := []struct {
in *TaskMessage
out *TaskMessage
}{
{
in: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
out: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
},
},
}
// Simulate heartbeater calling SetStatus and SetStarted.
for _, tc := range tests {
encoded, err := EncodeMessage(tc.in)
if err != nil {
t.Errorf("EncodeMessage(msg) returned error: %v", err)
continue
}
decoded, err := DecodeMessage(encoded)
if err != nil {
t.Errorf("DecodeMessage(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(tc.out, decoded); diff != "" {
t.Errorf("Decoded message == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.out, diff)
}
}
}
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
status := NewServerStatus(StatusIdle)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
ps.SetStarted(started)
ps.SetStatus(StatusRunning)
status.Get()
status.String()
}()
// Simulate processor starting worker goroutines.
for _, msg := range msgs {
wg.Add(1)
ps.AddWorkerStats(msg, time.Now())
go func(msg *TaskMessage) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
ps.DeleteWorkerStats(msg)
}(msg)
}
// Simulate heartbeater calling Get and GetWorkers
wg.Add(1)
go func() {
wg.Done()
for i := 0; i < 5; i++ {
ps.Get()
ps.GetWorkers()
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
}
defer wg.Done()
status.Set(StatusStopped)
status.String()
}()
wg.Wait()
want := &ProcessInfo{
Host: "127.0.0.1",
PID: 1234,
Concurrency: 10,
Queues: map[string]int{"default": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
got := ps.Get()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
got, want, diff)
}
}
// Test for cancelations being accessed by multiple goroutines.
@@ -208,9 +222,4 @@ func TestCancelationsConcurrentAccess(t *testing.T) {
if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
}
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
}

215
internal/log/log.go Normal file
View File

@@ -0,0 +1,215 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package log exports logging related types and functions.
package log
import (
"fmt"
"io"
stdlog "log"
"os"
"sync"
)
// Base supports logging at various log levels.
type Base interface {
// Debug logs a message at Debug level.
Debug(args ...interface{})
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and the process will exit with status set to 1.
Fatal(args ...interface{})
}
// baseLogger is a wrapper object around log.Logger from the standard library.
// It supports logging at various log levels.
type baseLogger struct {
*stdlog.Logger
}
// Debug logs a message at Debug level.
func (l *baseLogger) Debug(args ...interface{}) {
l.prefixPrint("DEBUG: ", args...)
}
// Info logs a message at Info level.
func (l *baseLogger) Info(args ...interface{}) {
l.prefixPrint("INFO: ", args...)
}
// Warn logs a message at Warning level.
func (l *baseLogger) Warn(args ...interface{}) {
l.prefixPrint("WARN: ", args...)
}
// Error logs a message at Error level.
func (l *baseLogger) Error(args ...interface{}) {
l.prefixPrint("ERROR: ", args...)
}
// Fatal logs a message at Fatal level
// and the process will exit with status set to 1.
func (l *baseLogger) Fatal(args ...interface{}) {
l.prefixPrint("FATAL: ", args...)
os.Exit(1)
}
func (l *baseLogger) prefixPrint(prefix string, args ...interface{}) {
args = append([]interface{}{prefix}, args...)
l.Print(args...)
}
// newBase creates and returns a new instance of baseLogger.
func newBase(out io.Writer) *baseLogger {
prefix := fmt.Sprintf("asynq: pid=%d ", os.Getpid())
return &baseLogger{
stdlog.New(out, prefix, stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC),
}
}
// NewLogger creates and returns a new instance of Logger.
// Log level is set to DebugLevel by default.
func NewLogger(base Base) *Logger {
if base == nil {
base = newBase(os.Stderr)
}
return &Logger{base: base, level: DebugLevel}
}
// Logger logs message to io.Writer at various log levels.
type Logger struct {
base Base
mu sync.Mutex
// Minimum log level for this logger.
// Message with level lower than this level won't be outputted.
level Level
}
// Level represents a log level.
type Level int32
const (
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel Level = iota
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the fmt.Stringer interface.
//
// Used for testing and debugging purposes.
func (l Level) String() string {
switch l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warning"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
default:
return "unknown"
}
}
// canLogAt reports whether logger can log at level v.
func (l *Logger) canLogAt(v Level) bool {
l.mu.Lock()
defer l.mu.Unlock()
return v >= l.level
}
func (l *Logger) Debug(args ...interface{}) {
if !l.canLogAt(DebugLevel) {
return
}
l.base.Debug(args...)
}
func (l *Logger) Info(args ...interface{}) {
if !l.canLogAt(InfoLevel) {
return
}
l.base.Info(args...)
}
func (l *Logger) Warn(args ...interface{}) {
if !l.canLogAt(WarnLevel) {
return
}
l.base.Warn(args...)
}
func (l *Logger) Error(args ...interface{}) {
if !l.canLogAt(ErrorLevel) {
return
}
l.base.Error(args...)
}
func (l *Logger) Fatal(args ...interface{}) {
if !l.canLogAt(FatalLevel) {
return
}
l.base.Fatal(args...)
}
func (l *Logger) Debugf(format string, args ...interface{}) {
l.Debug(fmt.Sprintf(format, args...))
}
func (l *Logger) Infof(format string, args ...interface{}) {
l.Info(fmt.Sprintf(format, args...))
}
func (l *Logger) Warnf(format string, args ...interface{}) {
l.Warn(fmt.Sprintf(format, args...))
}
func (l *Logger) Errorf(format string, args ...interface{}) {
l.Error(fmt.Sprintf(format, args...))
}
func (l *Logger) Fatalf(format string, args ...interface{}) {
l.Fatal(fmt.Sprintf(format, args...))
}
// SetLevel sets the logger level.
// It panics if v is less than DebugLevel or greater than FatalLevel.
func (l *Logger) SetLevel(v Level) {
l.mu.Lock()
defer l.mu.Unlock()
if v < DebugLevel || v > FatalLevel {
panic("log: invalid log level")
}
l.level = v
}
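A short sketch of the intended usage (assuming code inside the asynq module): passing nil for the base falls back to the stderr-backed baseLogger, and SetLevel suppresses everything below the chosen level.

package main

import "github.com/hibiken/asynq/internal/log"

func main() {
    logger := log.NewLogger(nil) // nil base -> default stderr logger with the "asynq: pid=..." prefix
    logger.SetLevel(log.InfoLevel)

    logger.Debug("not written: below InfoLevel")
    logger.Infof("processing %d tasks", 3) // written with the INFO: prefix
    logger.Warn("queue is paused")         // written with the WARN: prefix
}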

388
internal/log/log_test.go Normal file
View File

@@ -0,0 +1,388 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package log
import (
"bytes"
"fmt"
"regexp"
"testing"
)
// regexps for the PID and timestamp components of the log prefix
const (
rgxPID = `[0-9]+`
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
)
type tester struct {
desc string
message string
wantPattern string // regexp that log output must match
}
func TestLoggerDebug(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Debug(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Debug(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerInfo(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Info(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerWarn(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Warn(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Warn(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerError(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, world!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Error(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Error(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
type formatTester struct {
desc string
format string
args []interface{}
wantPattern string // regexp that log output must match
}
func TestLoggerDebugf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with DEBUG prefix",
format: "hello, %s!",
args: []interface{}{"Gopher"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s DEBUG: hello, Gopher!\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Debugf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Debugf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerInfof(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with INFO prefix",
format: "%d,%d,%d",
args: []interface{}{1, 2, 3},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s INFO: 1,2,3\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Infof(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Infof(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWarnf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with WARN prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s WARN: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Warnf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Warnf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerErrorf(t *testing.T) {
tests := []formatTester{
{
desc: "Formats message with ERROR prefix",
format: "hello, %s",
args: []interface{}{"Gophers"},
wantPattern: fmt.Sprintf("^asynq: pid=%s %s %s%s ERROR: hello, Gophers\n$",
rgxPID, rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.Errorf(tc.format, tc.args...)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.Errorf(%q, %v) outputted %q, should match pattern %q",
tc.format, tc.args, got, tc.wantPattern)
}
}
}
func TestLoggerWithLowerLevels(t *testing.T) {
// Logger should not log messages at a level
// lower than the specified level.
tests := []struct {
level Level
op string
}{
// with level one above
{InfoLevel, "Debug"},
{InfoLevel, "Debugf"},
{WarnLevel, "Info"},
{WarnLevel, "Infof"},
{ErrorLevel, "Warn"},
{ErrorLevel, "Warnf"},
{FatalLevel, "Error"},
{FatalLevel, "Errorf"},
// with skip level
{WarnLevel, "Debug"},
{ErrorLevel, "Infof"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() != "" {
t.Errorf("logger.%s outputted log message when level is set to %v", tc.op, tc.level)
}
}
}
func TestLoggerWithSameOrHigherLevels(t *testing.T) {
// Logger should log messages at a level
// same as or higher than the specified level.
tests := []struct {
level Level
op string
}{
// same level
{DebugLevel, "Debug"},
{InfoLevel, "Infof"},
{WarnLevel, "Warn"},
{ErrorLevel, "Errorf"},
// higher level
{DebugLevel, "Info"},
{InfoLevel, "Warnf"},
{WarnLevel, "Error"},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(newBase(&buf))
logger.SetLevel(tc.level)
switch tc.op {
case "Debug":
logger.Debug("hello")
case "Debugf":
logger.Debugf("hello, %s", "world")
case "Info":
logger.Info("hello")
case "Infof":
logger.Infof("hello, %s", "world")
case "Warn":
logger.Warn("hello")
case "Warnf":
logger.Warnf("hello, %s", "world")
case "Error":
logger.Error("hello")
case "Errorf":
logger.Errorf("hello, %s", "world")
default:
t.Fatalf("unexpected op: %q", tc.op)
}
if buf.String() == "" {
t.Errorf("logger.%s did not output log message when level is set to %v", tc.op, tc.level)
}
}
}

View File

@@ -7,12 +7,13 @@ package rdb
import (
"encoding/json"
"fmt"
"sort"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
"github.com/spf13/cast"
)
@@ -25,10 +26,24 @@ type Stats struct {
Dead int
Processed int
Failed int
Queues map[string]int // map of queue name to number of tasks in the queue (e.g., "default": 100, "critical": 20)
Queues []*Queue
Timestamp time.Time
}
// Queue represents a task queue.
type Queue struct {
// Name of the queue (e.g. "default", "critical").
// Note: It doesn't include the prefix "asynq:queues:".
Name string
// Paused indicates whether the queue is paused.
// If true, tasks in the queue should not be processed.
Paused bool
// Size is the number of tasks in the queue.
Size int
}
// DailyStats holds aggregate data for a given day.
type DailyStats struct {
Processed int
@@ -38,7 +53,7 @@ type DailyStats struct {
// EnqueuedTask is a task in a queue and is ready to be processed.
type EnqueuedTask struct {
ID xid.ID
ID uuid.UUID
Type string
Payload map[string]interface{}
Queue string
@@ -46,14 +61,14 @@ type EnqueuedTask struct {
// InProgressTask is a task that's currently being processed.
type InProgressTask struct {
ID xid.ID
ID uuid.UUID
Type string
Payload map[string]interface{}
}
// ScheduledTask is a task that's scheduled to be processed in the future.
type ScheduledTask struct {
ID xid.ID
ID uuid.UUID
Type string
Payload map[string]interface{}
ProcessAt time.Time
@@ -63,7 +78,7 @@ type ScheduledTask struct {
// RetryTask is a task that's in retry queue because worker failed to process the task.
type RetryTask struct {
ID xid.ID
ID uuid.UUID
Type string
Payload map[string]interface{}
// TODO(hibiken): add LastFailedAt time.Time
@@ -77,7 +92,7 @@ type RetryTask struct {
// DeadTask is a task that has exhausted all retries.
type DeadTask struct {
ID xid.ID
ID uuid.UUID
Type string
Payload map[string]interface{}
LastFailedAt time.Time
@@ -143,8 +158,12 @@ func (r *RDB) CurrentStats() (*Stats, error) {
if err != nil {
return nil, err
}
paused, err := r.client.SMembersMap(base.PausedQueues).Result()
if err != nil {
return nil, err
}
stats := &Stats{
Queues: make(map[string]int),
Queues: make([]*Queue, 0),
Timestamp: now,
}
for i := 0; i < len(data); i += 2 {
@@ -154,7 +173,14 @@ func (r *RDB) CurrentStats() (*Stats, error) {
switch {
case strings.HasPrefix(key, base.QueuePrefix):
stats.Enqueued += val
stats.Queues[strings.TrimPrefix(key, base.QueuePrefix)] = val
q := Queue{
Name: strings.TrimPrefix(key, base.QueuePrefix),
Size: val,
}
if _, exist := paused[key]; exist {
q.Paused = true
}
stats.Queues = append(stats.Queues, &q)
case key == base.InProgressQueue:
stats.InProgress = val
case key == base.ScheduledQueue:
@@ -169,6 +195,9 @@ func (r *RDB) CurrentStats() (*Stats, error) {
stats.Failed = val
}
}
sort.Slice(stats.Queues, func(i, j int) bool {
return stats.Queues[i].Name < stats.Queues[j].Name
})
return stats, nil
}
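Callers now receive queues as a name-sorted slice with a Paused flag instead of a map of sizes. An illustrative sketch (printQueueStats is a hypothetical helper; it assumes an *rdb.RDB constructed elsewhere):

package example

import (
    "fmt"

    "github.com/hibiken/asynq/internal/rdb"
)

func printQueueStats(r *rdb.RDB) error {
    stats, err := r.CurrentStats()
    if err != nil {
        return err
    }
    for _, q := range stats.Queues { // already sorted by queue name
        state := "active"
        if q.Paused {
            state = "paused"
        }
        fmt.Printf("queue=%s size=%d state=%s\n", q.Name, q.Size, state)
    }
    return nil
}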
@@ -417,7 +446,7 @@ func (r *RDB) ListDead(pgn Pagination) ([]*DeadTask, error) {
// EnqueueDeadTask finds a task that matches the given id and score from dead queue
// and enqueues it for processing. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueDeadTask(id xid.ID, score int64) error {
func (r *RDB) EnqueueDeadTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.DeadQueue, id.String(), float64(score))
if err != nil {
return err
@@ -431,7 +460,7 @@ func (r *RDB) EnqueueDeadTask(id xid.ID, score int64) error {
// EnqueueRetryTask finds a task that matches the given id and score from retry queue
// and enqueues it for processing. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueRetryTask(id xid.ID, score int64) error {
func (r *RDB) EnqueueRetryTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.RetryQueue, id.String(), float64(score))
if err != nil {
return err
@@ -445,7 +474,7 @@ func (r *RDB) EnqueueRetryTask(id xid.ID, score int64) error {
// EnqueueScheduledTask finds a task that matches the given id and score from scheduled queue
// and enqueues it for processing. If a task that matches the id and score does not
// exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueScheduledTask(id xid.ID, score int64) error {
func (r *RDB) EnqueueScheduledTask(id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.ScheduledQueue, id.String(), float64(score))
if err != nil {
return err
@@ -524,7 +553,7 @@ func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
// KillRetryTask finds a task that matches the given id and score from retry queue
// and moves it to dead queue. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound.
func (r *RDB) KillRetryTask(id xid.ID, score int64) error {
func (r *RDB) KillRetryTask(id uuid.UUID, score int64) error {
n, err := r.removeAndKill(base.RetryQueue, id.String(), float64(score))
if err != nil {
return err
@@ -538,7 +567,7 @@ func (r *RDB) KillRetryTask(id xid.ID, score int64) error {
// KillScheduledTask finds a task that matches the given id and score from scheduled queue
// and moves it to dead queue. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound.
func (r *RDB) KillScheduledTask(id xid.ID, score int64) error {
func (r *RDB) KillScheduledTask(id uuid.UUID, score int64) error {
n, err := r.removeAndKill(base.ScheduledQueue, id.String(), float64(score))
if err != nil {
return err
@@ -631,21 +660,21 @@ func (r *RDB) removeAndKillAll(zset string) (int64, error) {
// DeleteDeadTask finds a task that matches the given id and score from dead queue
// and deletes it. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound.
func (r *RDB) DeleteDeadTask(id xid.ID, score int64) error {
func (r *RDB) DeleteDeadTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.DeadQueue, id.String(), float64(score))
}
// DeleteRetryTask finds a task that matches the given id and score from retry queue
// and deletes it. If a task that matches the id and score does not exist,
// it returns ErrTaskNotFound.
func (r *RDB) DeleteRetryTask(id xid.ID, score int64) error {
func (r *RDB) DeleteRetryTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.RetryQueue, id.String(), float64(score))
}
// DeleteScheduledTask finds a task that matches the given id and score from
// scheduled queue and deletes it. If a task that matches the id and score
// does not exist, it returns ErrTaskNotFound.
func (r *RDB) DeleteScheduledTask(id xid.ID, score int64) error {
func (r *RDB) DeleteScheduledTask(id uuid.UUID, score int64) error {
return r.deleteTask(base.ScheduledQueue, id.String(), float64(score))
}
@@ -759,23 +788,23 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
}
// Note: Script also removes stale keys.
var listProcessesCmd = redis.NewScript(`
var listServersCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local ps = redis.call("GET", key)
if ps then
table.insert(res, ps)
local s = redis.call("GET", key)
if s then
table.insert(res, s)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)
// ListProcesses returns the list of process statuses.
func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
res, err := listProcessesCmd.Run(r.client,
[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result()
// ListServers returns the list of server info.
func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
res, err := listServersCmd.Run(r.client,
[]string{base.AllServers}, time.Now().UTC().Unix()).Result()
if err != nil {
return nil, err
}
@@ -783,16 +812,16 @@ func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
if err != nil {
return nil, err
}
var processes []*base.ProcessInfo
var servers []*base.ServerInfo
for _, s := range data {
var ps base.ProcessInfo
err := json.Unmarshal([]byte(s), &ps)
var info base.ServerInfo
err := json.Unmarshal([]byte(s), &info)
if err != nil {
continue // skip bad data
}
processes = append(processes, &ps)
servers = append(servers, &info)
}
return processes, nil
return servers, nil
}
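An illustrative sketch of consuming the result (the helper is hypothetical); stale or malformed entries are already filtered out above:

package example

import (
    "fmt"

    "github.com/hibiken/asynq/internal/rdb"
)

func printServers(r *rdb.RDB) error {
    servers, err := r.ListServers()
    if err != nil {
        return err
    }
    for _, s := range servers {
        fmt.Printf("%s pid=%d id=%s status=%s workers=%d\n",
            s.Host, s.PID, s.ServerID, s.Status, s.ActiveWorkerCount)
    }
    return nil
}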
// Note: Script also removes stale keys.
@@ -830,3 +859,33 @@ func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
}
return workers, nil
}
// KEYS[1] -> asynq:paused
// ARGV[1] -> asynq:queues:<qname> - queue to pause
var pauseCmd = redis.NewScript(`
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1])
if ismem == 1 then
return redis.error_reply("queue is already paused")
end
return redis.call("SADD", KEYS[1], ARGV[1])`)
// Pause pauses processing of tasks from the given queue.
func (r *RDB) Pause(qname string) error {
qkey := base.QueueKey(qname)
return pauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err()
}
// KEYS[1] -> asynq:paused
// ARGV[1] -> asynq:queues:<qname> - queue to unpause
var unpauseCmd = redis.NewScript(`
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1])
if ismem == 0 then
return redis.error_reply("queue is not paused")
end
return redis.call("SREM", KEYS[1], ARGV[1])`)
// Unpause resumes processing of tasks from the given queue.
func (r *RDB) Unpause(qname string) error {
qkey := base.QueueKey(qname)
return unpauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err()
}
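Note that Pause and Unpause are not idempotent: pausing an already-paused queue (or unpausing one that is not paused) surfaces the script's error reply. A hedged sketch of a maintenance flow (the helper is hypothetical):

package example

import (
    "fmt"

    "github.com/hibiken/asynq/internal/rdb"
)

func pauseForMaintenance(r *rdb.RDB, qname string, maintain func() error) error {
    if err := r.Pause(qname); err != nil {
        return fmt.Errorf("could not pause %q: %v", qname, err)
    }
    // While paused, Dequeue skips this queue entirely.
    if err := maintain(); err != nil {
        // Best-effort resume before reporting the maintenance failure.
        _ = r.Unpause(qname)
        return err
    }
    return r.Unpause(qname)
}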

View File

@@ -12,9 +12,9 @@ import (
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
)
func TestCurrentStats(t *testing.T) {
@@ -38,6 +38,7 @@ func TestCurrentStats(t *testing.T) {
processed int
failed int
allQueues []interface{}
paused []string
want *Stats
}{
{
@@ -55,6 +56,7 @@ func TestCurrentStats(t *testing.T) {
processed: 120,
failed: 2,
allQueues: []interface{}{base.DefaultQueue, base.QueueKey("critical"), base.QueueKey("low")},
paused: []string{},
want: &Stats{
Enqueued: 3,
InProgress: 1,
@@ -64,7 +66,12 @@ func TestCurrentStats(t *testing.T) {
Processed: 120,
Failed: 2,
Timestamp: now,
Queues: map[string]int{base.DefaultQueueName: 1, "critical": 1, "low": 1},
// Queues should be sorted by name.
Queues: []*Queue{
{Name: "critical", Paused: false, Size: 1},
{Name: "default", Paused: false, Size: 1},
{Name: "low", Paused: false, Size: 1},
},
},
},
{
@@ -82,6 +89,7 @@ func TestCurrentStats(t *testing.T) {
processed: 90,
failed: 10,
allQueues: []interface{}{base.DefaultQueue},
paused: []string{},
want: &Stats{
Enqueued: 0,
InProgress: 0,
@@ -91,13 +99,52 @@ func TestCurrentStats(t *testing.T) {
Processed: 90,
Failed: 10,
Timestamp: now,
Queues: map[string]int{base.DefaultQueueName: 0},
Queues: []*Queue{
{Name: base.DefaultQueueName, Paused: false, Size: 0},
},
},
},
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m1},
"critical": {m5},
"low": {m6},
},
inProgress: []*base.TaskMessage{m2},
scheduled: []h.ZSetEntry{
{Msg: m3, Score: float64(now.Add(time.Hour).Unix())},
{Msg: m4, Score: float64(now.Unix())}},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
processed: 120,
failed: 2,
allQueues: []interface{}{base.DefaultQueue, base.QueueKey("critical"), base.QueueKey("low")},
paused: []string{"critical", "low"},
want: &Stats{
Enqueued: 3,
InProgress: 1,
Scheduled: 2,
Retry: 0,
Dead: 0,
Processed: 120,
Failed: 2,
Timestamp: now,
Queues: []*Queue{
{Name: "critical", Paused: true, Size: 1},
{Name: "default", Paused: false, Size: 1},
{Name: "low", Paused: true, Size: 1},
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case
for _, qname := range tc.paused {
if err := r.Pause(qname); err != nil {
t.Fatal(err)
}
}
for qname, msgs := range tc.enqueued {
h.SeedEnqueuedQueue(t, r.client, msgs, qname)
}
@@ -136,7 +183,7 @@ func TestCurrentStatsWithoutData(t *testing.T) {
Processed: 0,
Failed: 0,
Timestamp: time.Now(),
Queues: map[string]int{},
Queues: make([]*Queue, 0),
}
got, err := r.CurrentStats()
@@ -571,7 +618,7 @@ func TestListScheduledPagination(t *testing.T) {
func TestListRetry(t *testing.T) {
r := setup(t)
m1 := &base.TaskMessage{
ID: xid.New(),
ID: uuid.New(),
Type: "send_email",
Queue: "default",
Payload: map[string]interface{}{"subject": "hello"},
@@ -580,7 +627,7 @@ func TestListRetry(t *testing.T) {
Retried: 10,
}
m2 := &base.TaskMessage{
ID: xid.New(),
ID: uuid.New(),
Type: "reindex",
Queue: "default",
Payload: nil,
@@ -658,12 +705,14 @@ func TestListRetry(t *testing.T) {
func TestListRetryPagination(t *testing.T) {
r := setup(t)
// create 100 tasks with an increasing number of wait time.
now := time.Now()
var seed []h.ZSetEntry
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
if err := r.Retry(msg, time.Now().Add(time.Duration(i)*time.Second), "error"); err != nil {
t.Fatal(err)
}
processAt := now.Add(time.Duration(i) * time.Second)
seed = append(seed, h.ZSetEntry{Msg: msg, Score: float64(processAt.Unix())})
}
h.SeedRetryQueue(t, r.client, seed)
tests := []struct {
desc string
@@ -714,14 +763,14 @@ func TestListRetryPagination(t *testing.T) {
func TestListDead(t *testing.T) {
r := setup(t)
m1 := &base.TaskMessage{
ID: xid.New(),
ID: uuid.New(),
Type: "send_email",
Queue: "default",
Payload: map[string]interface{}{"subject": "hello"},
ErrorMsg: "email server not responding",
}
m2 := &base.TaskMessage{
ID: xid.New(),
ID: uuid.New(),
Type: "reindex",
Queue: "default",
Payload: nil,
@@ -858,7 +907,7 @@ func TestEnqueueDeadTask(t *testing.T) {
tests := []struct {
dead []h.ZSetEntry
score int64
id xid.ID
id uuid.UUID
want error // expected return value from calling EnqueueDeadTask
wantDead []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage
@@ -942,7 +991,7 @@ func TestEnqueueRetryTask(t *testing.T) {
tests := []struct {
retry []h.ZSetEntry
score int64
id xid.ID
id uuid.UUID
want error // expected return value from calling EnqueueRetryTask
wantRetry []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage
@@ -1026,7 +1075,7 @@ func TestEnqueueScheduledTask(t *testing.T) {
tests := []struct {
scheduled []h.ZSetEntry
score int64
id xid.ID
id uuid.UUID
want error // expected return value from calling EnqueueScheduledTask
wantScheduled []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage
@@ -1345,7 +1394,7 @@ func TestKillRetryTask(t *testing.T) {
tests := []struct {
retry []h.ZSetEntry
dead []h.ZSetEntry
id xid.ID
id uuid.UUID
score int64
want error
wantRetry []h.ZSetEntry
@@ -1422,7 +1471,7 @@ func TestKillScheduledTask(t *testing.T) {
tests := []struct {
scheduled []h.ZSetEntry
dead []h.ZSetEntry
id xid.ID
id uuid.UUID
score int64
want error
wantScheduled []h.ZSetEntry
@@ -1662,7 +1711,7 @@ func TestDeleteDeadTask(t *testing.T) {
tests := []struct {
dead []h.ZSetEntry
id xid.ID
id uuid.UUID
score int64
want error
wantDead []*base.TaskMessage
@@ -1722,7 +1771,7 @@ func TestDeleteRetryTask(t *testing.T) {
tests := []struct {
retry []h.ZSetEntry
id xid.ID
id uuid.UUID
score int64
want error
wantRetry []*base.TaskMessage
@@ -1774,7 +1823,7 @@ func TestDeleteScheduledTask(t *testing.T) {
tests := []struct {
scheduled []h.ZSetEntry
id xid.ID
id uuid.UUID
score int64
want error
wantScheduled []*base.TaskMessage
@@ -2051,74 +2100,63 @@ func TestRemoveQueueError(t *testing.T) {
}
}
func TestListProcesses(t *testing.T) {
func TestListServers(t *testing.T) {
r := setup(t)
started1 := time.Now().Add(-time.Hour)
ps1 := base.NewProcessState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false)
ps1.SetStarted(started1)
ps1.SetStatus(base.StatusRunning)
info1 := &base.ProcessInfo{
Concurrency: 10,
Queues: map[string]int{"default": 1},
info1 := &base.ServerInfo{
Host: "do.droplet1",
PID: 1234,
ServerID: "server123",
Concurrency: 10,
Queues: map[string]int{"default": 1},
Status: "running",
Started: started1,
ActiveWorkerCount: 0,
}
started2 := time.Now().Add(-2 * time.Hour)
ps2 := base.NewProcessState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false)
ps2.SetStarted(started2)
ps2.SetStatus(base.StatusStopped)
ps2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
info2 := &base.ProcessInfo{
Concurrency: 20,
Queues: map[string]int{"email": 1},
info2 := &base.ServerInfo{
Host: "do.droplet2",
PID: 9876,
ServerID: "server456",
Concurrency: 20,
Queues: map[string]int{"email": 1},
Status: "stopped",
Started: started2,
ActiveWorkerCount: 1,
}
tests := []struct {
processes []*base.ProcessState
want []*base.ProcessInfo
data []*base.ServerInfo
}{
{
processes: []*base.ProcessState{},
want: []*base.ProcessInfo{},
data: []*base.ServerInfo{},
},
{
processes: []*base.ProcessState{ps1},
want: []*base.ProcessInfo{info1},
data: []*base.ServerInfo{info1},
},
{
processes: []*base.ProcessState{ps1, ps2},
want: []*base.ProcessInfo{info1, info2},
data: []*base.ServerInfo{info1, info2},
},
}
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
for _, tc := range tests {
h.FlushDB(t, r.client)
for _, ps := range tc.processes {
if err := r.WriteProcessState(ps, 5*time.Second); err != nil {
for _, info := range tc.data {
if err := r.WriteServerState(info, []*base.WorkerInfo{}, 5*time.Second); err != nil {
t.Fatal(err)
}
}
got, err := r.ListProcesses()
got, err := r.ListServers()
if err != nil {
t.Errorf("r.ListProcesses returned an error: %v", err)
t.Errorf("r.ListServers returned an error: %v", err)
}
if diff := cmp.Diff(tc.want, got, h.SortProcessInfoOpt, ignoreOpt); diff != "" {
t.Errorf("r.ListProcesses returned %v, want %v; (-want,+got)\n%s",
got, tc.processes, diff)
if diff := cmp.Diff(tc.data, got, h.SortServerInfoOpt); diff != "" {
t.Errorf("r.ListServers returned %v, want %v; (-want,+got)\n%s",
got, tc.data, diff)
}
}
}
@@ -2126,37 +2164,23 @@ func TestListProcesses(t *testing.T) {
func TestListWorkers(t *testing.T) {
r := setup(t)
const (
var (
host = "127.0.0.1"
pid = 4567
m1 = h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "abc123"})
m2 = h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/image/file"})
m3 = h.NewTaskMessage("reindex", map[string]interface{}{})
)
m1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "abc123"})
m2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/image/file"})
m3 := h.NewTaskMessage("reindex", map[string]interface{}{})
t1 := time.Now().Add(-time.Second)
t2 := time.Now().Add(-10 * time.Second)
t3 := time.Now().Add(-time.Minute)
type workerStats struct {
msg *base.TaskMessage
started time.Time
}
tests := []struct {
workers []*workerStats
want []*base.WorkerInfo
data []*base.WorkerInfo
}{
{
workers: []*workerStats{
{m1, t1},
{m2, t2},
{m3, t3},
},
want: []*base.WorkerInfo{
{Host: host, PID: pid, ID: m1.ID, Type: m1.Type, Queue: m1.Queue, Payload: m1.Payload, Started: t1},
{Host: host, PID: pid, ID: m2.ID, Type: m2.Type, Queue: m2.Queue, Payload: m2.Payload, Started: t2},
{Host: host, PID: pid, ID: m3.ID, Type: m3.Type, Queue: m3.Queue, Payload: m3.Payload, Started: t3},
data: []*base.WorkerInfo{
{Host: host, PID: pid, ID: m1.ID.String(), Type: m1.Type, Queue: m1.Queue, Payload: m1.Payload, Started: time.Now().Add(-1 * time.Second)},
{Host: host, PID: pid, ID: m2.ID.String(), Type: m2.Type, Queue: m2.Queue, Payload: m2.Payload, Started: time.Now().Add(-5 * time.Second)},
{Host: host, PID: pid, ID: m3.ID.String(), Type: m3.Type, Queue: m3.Queue, Payload: m3.Payload, Started: time.Now().Add(-30 * time.Second)},
},
},
}
@@ -2164,15 +2188,9 @@ func TestListWorkers(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r.client)
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false)
for _, w := range tc.workers {
ps.AddWorkerStats(w.msg, w.started)
}
err := r.WriteProcessState(ps, time.Minute)
err := r.WriteServerState(&base.ServerInfo{}, tc.data, time.Minute)
if err != nil {
t.Errorf("could not write process state to redis: %v", err)
t.Errorf("could not write server state to redis: %v", err)
continue
}
@@ -2182,8 +2200,165 @@ func TestListWorkers(t *testing.T) {
continue
}
if diff := cmp.Diff(tc.want, got, h.SortWorkerInfoOpt); diff != "" {
t.Errorf("(*RDB).ListWorkers() = %v, want = %v; (-want,+got)\n%s", got, tc.want, diff)
if diff := cmp.Diff(tc.data, got, h.SortWorkerInfoOpt); diff != "" {
t.Errorf("(*RDB).ListWorkers() = %v, want = %v; (-want,+got)\n%s", got, tc.data, diff)
}
}
}
func TestPause(t *testing.T) {
r := setup(t)
tests := []struct {
initial []string // initial keys in the paused set
qname string // name of the queue to pause
want []string // expected keys in the paused set
}{
{[]string{}, "default", []string{"asynq:queues:default"}},
{[]string{"asynq:queues:default"}, "critical", []string{"asynq:queues:default", "asynq:queues:critical"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Pause(tc.qname)
if err != nil {
t.Errorf("Pause(%q) returned error: %v", tc.qname, err)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%q has members %v, want %v; (-want,+got)\n%s",
base.PausedQueues, got, tc.want, diff)
}
}
}
func TestPauseError(t *testing.T) {
r := setup(t)
tests := []struct {
desc string // test case description
initial []string // initial keys in the paused set
qname string // name of the queue to pause
want []string // expected keys in the paused set
}{
{"queue already paused", []string{"asynq:queues:default"}, "default", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Pause(tc.qname)
if err == nil {
t.Errorf("%s; Pause(%q) returned nil: want error", tc.desc, tc.qname)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%s; %q has members %v, want %v; (-want,+got)\n%s",
tc.desc, base.PausedQueues, got, tc.want, diff)
}
}
}
func TestUnpause(t *testing.T) {
r := setup(t)
tests := []struct {
initial []string // initial keys in the paused set
qname string // name of the queue to unpause
want []string // expected keys in the paused set
}{
{[]string{"asynq:queues:default"}, "default", []string{}},
{[]string{"asynq:queues:default", "asynq:queues:low"}, "low", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Unpause(tc.qname)
if err != nil {
t.Errorf("Unpause(%q) returned error: %v", tc.qname, err)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%q has members %v, want %v; (-want,+got)\n%s",
base.PausedQueues, got, tc.want, diff)
}
}
}
func TestUnpauseError(t *testing.T) {
r := setup(t)
tests := []struct {
desc string // test case description
initial []string // initial keys in the paused set
qname string // name of the queue to unpause
want []string // expected keys in the paused set
}{
{"set is empty", []string{}, "default", []string{}},
{"queue is not in the set", []string{"asynq:queues:default"}, "low", []string{"asynq:queues:default"}},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
// Set up initial state.
for _, qkey := range tc.initial {
if err := r.client.SAdd(base.PausedQueues, qkey).Err(); err != nil {
t.Fatal(err)
}
}
err := r.Unpause(tc.qname)
if err == nil {
t.Errorf("%s; Unpause(%q) returned nil: want error", tc.desc, tc.qname)
}
got, err := r.client.SMembers(base.PausedQueues).Result()
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(tc.want, got, h.SortStringSliceOpt); diff != "" {
t.Errorf("%s; %q has members %v, want %v; (-want,+got)\n%s",
tc.desc, base.PausedQueues, got, tc.want, diff)
}
}
}

View File

@@ -9,6 +9,7 @@ import (
"encoding/json"
"errors"
"fmt"
"strconv"
"time"
"github.com/go-redis/redis/v7"
@@ -22,6 +23,9 @@ var (
// ErrTaskNotFound indicates that a task that matches the given identifier was not found.
ErrTaskNotFound = errors.New("could not find a task")
// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
ErrDuplicateTask = errors.New("task already exists")
)
const statsTTL = 90 * 24 * time.Hour // 90 days
@@ -51,89 +55,166 @@ return 1`)
// Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
key := base.QueueKey(msg.Queue)
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, bytes).Err()
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, encoded).Err()
}
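With Timeout and Deadline now carried as int64 values (seconds and a Unix timestamp, respectively), the dequeue script below rejects a message where both are zero. A hedged sketch of enqueueing a hand-built message (the field values are purely illustrative):

package example

import (
    "time"

    "github.com/google/uuid"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
)

func enqueueEmailTask(r *rdb.RDB) error {
    msg := &base.TaskMessage{
        ID:      uuid.New(),
        Type:    "send_email",
        Payload: map[string]interface{}{"user_id": 42},
        Queue:   "default",
        Retry:   25,
        Timeout: int64((30 * time.Minute).Seconds()), // leaving both Timeout and Deadline at 0 would fail at dequeue time
    }
    return r.Enqueue(msg)
}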
// Dequeue queries given queues in order and pops a task message if there is one and returns it.
// KEYS[1] -> unique key in the form <type>:<payload>:<qname>
// KEYS[2] -> asynq:queues:<qname>
// KEYS[2] -> asynq:queues
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> task message data
var enqueueUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("LPUSH", KEYS[2], ARGV[3])
redis.call("SADD", KEYS[3], KEYS[2])
return 1
`)
// EnqueueUnique inserts the given task if the task's uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
key := base.QueueKey(msg.Queue)
res, err := enqueueUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, key, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), encoded).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
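An illustrative sketch of treating a duplicate as a no-op (the UniqueKey value is spelled out by hand here only for illustration; it is normally derived from the task's type, payload, and queue):

package example

import (
    "time"

    "github.com/google/uuid"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
)

func enqueueReindexOnce(r *rdb.RDB) error {
    msg := &base.TaskMessage{
        ID:        uuid.New(),
        Type:      "reindex",
        Queue:     "default",
        Timeout:   int64((30 * time.Minute).Seconds()),
        UniqueKey: "reindex:nil:default", // hypothetical key for illustration
    }
    err := r.EnqueueUnique(msg, time.Hour) // uniqueness lock TTL of one hour
    if err == rdb.ErrDuplicateTask {
        return nil // another task already holds the uniqueness lock
    }
    return err
}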
// Dequeue queries given queues in order and pops a task message
// off a queue if one exists and returns the message and deadline.
// Dequeue skips a queue if the queue is paused.
// If all queues are empty, ErrNoProcessableTask error is returned.
func (r *RDB) Dequeue(qnames ...string) (*base.TaskMessage, error) {
var data string
var err error
if len(qnames) == 1 {
data, err = r.dequeueSingle(base.QueueKey(qnames[0]))
} else {
// TODO(hibiken): Take keys as argument and don't compute every time
var keys []string
for _, q := range qnames {
keys = append(keys, base.QueueKey(q))
}
data, err = r.dequeue(keys...)
func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Time, err error) {
var qkeys []interface{}
for _, q := range qnames {
qkeys = append(qkeys, base.QueueKey(q))
}
data, d, err := r.dequeue(qkeys...)
if err == redis.Nil {
return nil, ErrNoProcessableTask
return nil, time.Time{}, ErrNoProcessableTask
}
if err != nil {
return nil, err
return nil, time.Time{}, err
}
var msg base.TaskMessage
err = json.Unmarshal([]byte(data), &msg)
if err != nil {
return nil, err
if msg, err = base.DecodeMessage(data); err != nil {
return nil, time.Time{}, err
}
return &msg, nil
return msg, time.Unix(d, 0), nil
}
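Since Dequeue now reports the task's deadline alongside the message, the processor can build the task context directly from it. A hedged sketch (the helper is hypothetical):

package example

import (
    "context"

    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
)

func dequeueWithContext(r *rdb.RDB, qnames ...string) (*base.TaskMessage, context.Context, context.CancelFunc, error) {
    msg, deadline, err := r.Dequeue(qnames...)
    if err != nil {
        // err is rdb.ErrNoProcessableTask when every queue is empty or paused.
        return nil, nil, nil, err
    }
    ctx, cancel := context.WithDeadline(context.Background(), deadline)
    return msg, ctx, cancel, nil
}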
func (r *RDB) dequeueSingle(queue string) (data string, err error) {
// timeout needed to avoid blocking forever
return r.client.BRPopLPush(queue, base.InProgressQueue, time.Second).Result()
}
// KEYS[1] -> asynq:in_progress
// ARGV -> List of queues to query in order
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:paused
// KEYS[3] -> asynq:deadlines
// ARGV[1] -> current time in Unix time
// ARGV[2:] -> List of queues to query in order
//
// dequeueCmd checks whether a queue is paused first, before
// calling RPOPLPUSH to pop a task from the queue.
// It computes the task deadline by inspecting Timeout and Deadline fields,
// and inserts the task into the deadlines set with that deadline as its score.
var dequeueCmd = redis.NewScript(`
local res
for _, qkey in ipairs(ARGV) do
res = redis.call("RPOPLPUSH", qkey, KEYS[1])
if res then
return res
for i = 2, table.getn(ARGV) do
local qkey = ARGV[i]
if redis.call("SISMEMBER", KEYS[2], qkey) == 0 then
local msg = redis.call("RPOPLPUSH", qkey, KEYS[1])
if msg then
local decoded = cjson.decode(msg)
local timeout = decoded["Timeout"]
local deadline = decoded["Deadline"]
local score
if timeout ~= 0 and deadline ~= 0 then
score = math.min(ARGV[1]+timeout, deadline)
elseif timeout ~= 0 then
score = ARGV[1] + timeout
elseif deadline ~= 0 then
score = deadline
else
return redis.error_reply("asynq internal error: both timeout and deadline are not set")
end
redis.call("ZADD", KEYS[3], score, msg)
return {msg, score}
end
end
end
return res`)
return nil`)
func (r *RDB) dequeue(queues ...string) (data string, err error) {
func (r *RDB) dequeue(qkeys ...interface{}) (msgjson string, deadline int64, err error) {
var args []interface{}
for _, qkey := range queues {
args = append(args, qkey)
}
res, err := dequeueCmd.Run(r.client, []string{base.InProgressQueue}, args...).Result()
args = append(args, time.Now().Unix())
args = append(args, qkeys...)
res, err := dequeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.PausedQueues, base.KeyDeadlines}, args...).Result()
if err != nil {
return "", err
return "", 0, err
}
return cast.ToStringE(res)
data, err := cast.ToSliceE(res)
if err != nil {
return "", 0, err
}
if len(data) != 2 {
return "", 0, fmt.Errorf("asynq: internal error: dequeue command returned %d values", len(data))
}
if msgjson, err = cast.ToStringE(data[0]); err != nil {
return "", 0, err
}
if deadline, err = cast.ToInt64E(data[1]); err != nil {
return "", 0, err
}
return msgjson, deadline, nil
}
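The deadline selection performed by the Lua script, restated in Go for clarity (a sketch only): take the earlier of now+Timeout and Deadline, where a zero value means the field is not set.

package example

import (
    "errors"
    "time"

    "github.com/hibiken/asynq/internal/base"
)

func deadlineScore(msg *base.TaskMessage, now time.Time) (int64, error) {
    switch {
    case msg.Timeout != 0 && msg.Deadline != 0:
        if byTimeout := now.Unix() + msg.Timeout; byTimeout < msg.Deadline {
            return byTimeout, nil
        }
        return msg.Deadline, nil
    case msg.Timeout != 0:
        return now.Unix() + msg.Timeout, nil
    case msg.Deadline != 0:
        return msg.Deadline, nil
    default:
        return 0, errors.New("asynq internal error: both timeout and deadline are not set")
    }
}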
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd>
// KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> unique key in the format <type>:<payload>:<qname>
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID
// Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
local n = redis.call("INCR", KEYS[2])
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
redis.call("EXPIREAT", KEYS[3], ARGV[2])
end
if string.len(KEYS[4]) > 0 and redis.call("GET", KEYS[4]) == ARGV[3] then
redis.call("DEL", KEYS[4])
end
return redis.status_reply("OK")
`)
// Done removes the task from in-progress queue to mark the task as done.
// It removes a uniqueness lock acquired by the task, if any.
func (r *RDB) Done(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
@@ -141,73 +222,141 @@ func (r *RDB) Done(msg *base.TaskMessage) error {
processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL)
return doneCmd.Run(r.client,
[]string{base.InProgressQueue, processedKey},
bytes, expireAt.Unix()).Err()
[]string{base.InProgressQueue, base.KeyDeadlines, processedKey, msg.UniqueKey},
encoded, expireAt.Unix(), msg.ID.String()).Err()
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:<qname>
// KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:queues:<qname>
// ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("RPUSH", KEYS[2], ARGV[1])
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
redis.call("RPUSH", KEYS[3], ARGV[1])
return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.QueueKey(msg.Queue)},
string(bytes)).Err()
[]string{base.InProgressQueue, base.KeyDeadlines, base.QueueKey(msg.Queue)},
encoded).Err()
}
// KEYS[1] -> asynq:scheduled
// KEYS[2] -> asynq:queues
// ARGV[1] -> score (process_at timestamp)
// ARGV[2] -> task message
// ARGV[3] -> queue key
var scheduleCmd = redis.NewScript(`
redis.call("ZADD", KEYS[1], ARGV[1], ARGV[2])
redis.call("SADD", KEYS[2], ARGV[3])
return 1
`)
// Schedule adds the task to the backlog queue to be processed in the future.
func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
bytes, err := json.Marshal(msg)
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
qkey := base.QueueKey(msg.Queue)
score := float64(processAt.Unix())
return r.client.ZAdd(base.ScheduledQueue,
&redis.Z{Member: string(bytes), Score: score}).Err()
return scheduleCmd.Run(r.client,
[]string{base.ScheduledQueue, base.AllQueues},
score, encoded, qkey).Err()
}
// KEYS[1] -> unique key in the format <type>:<payload>:<qname>
// KEYS[2] -> asynq:scheduled
// KEYS[3] -> asynq:queues
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message
// ARGV[5] -> queue key
var scheduleUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[4])
redis.call("SADD", KEYS[3], ARGV[5])
return 1
`)
// ScheduleUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
qkey := base.QueueKey(msg.Queue)
score := float64(processAt.Unix())
res, err := scheduleUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.ScheduledQueue, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), score, encoded, qkey).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:retry
// KEYS[4] -> asynq:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp
var retryCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4])
end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[4])
end
return redis.status_reply("OK")`)
// Retry moves the task from in-progress to retry queue, incrementing retry count
// and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
bytesToRemove, err := json.Marshal(msg)
msgToRemove, err := base.EncodeMessage(msg)
if err != nil {
return err
}
modified := *msg
modified.Retried++
modified.ErrorMsg = errMsg
bytesToAdd, err := json.Marshal(&modified)
msgToAdd, err := base.EncodeMessage(&modified)
if err != nil {
return err
}
@@ -216,8 +365,8 @@ func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) e
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.KeyDeadlines, base.RetryQueue, processedKey, failureKey},
msgToRemove, msgToAdd, processAt.Unix(), expireAt.Unix()).Err()
}
const (
@@ -226,9 +375,10 @@ const (
)
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:deadlines
// KEYS[3] -> asynq:dead
// KEYS[4] -> asynq:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp
@@ -236,31 +386,36 @@ const (
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp
var killCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[3], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[3], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[4])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6])
end
local m = redis.call("INCR", KEYS[5])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[5], ARGV[6])
end
return redis.status_reply("OK")`)
// Kill sends the task to "dead" queue from in-progress queue, assigning
// the error message to the task.
// It also trims the set by timestamp and set size.
func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
msgToRemove, err := base.EncodeMessage(msg)
if err != nil {
return err
}
modified := *msg
modified.ErrorMsg = errMsg
msgToAdd, err := base.EncodeMessage(&modified)
if err != nil {
return err
}
@@ -270,51 +425,21 @@ func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
return killCmd.Run(r.client,
[]string{base.InProgressQueue, base.KeyDeadlines, base.DeadQueue, processedKey, failureKey},
msgToRemove, msgToAdd, now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err()
}
// KEYS[1] -> asynq:in_progress
// ARGV[1] -> queue prefix
var requeueAllCmd = redis.NewScript(`
local msgs = redis.call("LRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("RPUSH", qkey, msg)
redis.call("LREM", KEYS[1], 0, msg)
end
return table.getn(msgs)`)
// RequeueAll moves all tasks from in-progress list to the queue
// and reports the number of tasks restored.
func (r *RDB) RequeueAll() (int64, error) {
res, err := requeueAllCmd.Run(r.client, []string{base.InProgressQueue}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
n, ok := res.(int64)
if !ok {
return 0, fmt.Errorf("could not cast %v to int64", res)
}
return n, nil
}
// CheckAndEnqueue checks for all scheduled/retry tasks and enqueues any tasks that
// are ready to be processed.
func (r *RDB) CheckAndEnqueue() (err error) {
delayed := []string{base.ScheduledQueue, base.RetryQueue}
for _, zset := range delayed {
n := 1
for n != 0 {
n, err = r.forward(zset)
if err != nil {
return err
}
}
}
return nil
@@ -323,53 +448,61 @@ func (r *RDB) CheckAndEnqueue(qnames ...string) error {
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// ARGV[1] -> current unix time
// ARGV[2] -> queue prefix
// Note: Script moves tasks up to 100 at a time to keep the runtime of script short.
var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1], "LIMIT", 0, 100)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[2] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg)
end
return table.getn(msgs)`)
// forward moves tasks with a score less than the current unix time
// from the src zset. It returns the number of tasks moved.
func (r *RDB) forward(src string) (int, error) {
now := float64(time.Now().Unix())
res, err := forwardCmd.Run(r.client,
[]string{src}, now, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
return cast.ToInt(res), nil
}
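CheckAndEnqueue is meant to be polled; because forwardCmd moves at most 100 tasks per call, the inner for loop above keeps calling forward until a pass moves nothing. A rough sketch of the polling side, assuming the rdb and time imports from this file and an illustrative interval:

```go
// Sketch of a scheduler loop that periodically forwards due tasks from the
// scheduled and retry sets into their queues. Errors are simply retried on
// the next tick; a real loop would log them.
func runScheduler(r *rdb.RDB, interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			_ = r.CheckAndEnqueue()
		}
	}
}
```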
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// KEYS[2] -> destination queue
var forwardSingleCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("LPUSH", KEYS[2], msg)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forwardSingle moves all tasks with a score less than the current unix time
// from the src zset to dst list.
func (r *RDB) forwardSingle(src, dst string) error {
now := float64(time.Now().Unix())
return forwardSingleCmd.Run(r.client,
[]string{src, dst}, now).Err()
}
// ListDeadlineExceeded returns a list of task messages that have exceeded the given deadline.
func (r *RDB) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) {
var msgs []*base.TaskMessage
opt := &redis.ZRangeBy{
Min: "-inf",
Max: strconv.FormatInt(deadline.Unix(), 10),
}
res, err := r.client.ZRangeByScore(base.KeyDeadlines, opt).Result()
if err != nil {
return nil, err
}
for _, s := range res {
msg, err := base.DecodeMessage(s)
if err != nil {
return nil, err
}
msgs = append(msgs, msg)
}
return msgs, nil
}
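ListDeadlineExceeded is the query a recovery loop can be built on: anything still in the deadlines set past its deadline was dequeued by a worker that never reported back. A rough sketch, with the interval and the retry-versus-kill decision simplified for illustration:

```go
// Sketch of a recoverer: periodically pick up tasks whose deadlines have
// passed and either retry them immediately or move them to the dead set.
func runRecoverer(r *rdb.RDB, interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			msgs, err := r.ListDeadlineExceeded(time.Now())
			if err != nil {
				continue // transient error; try again on the next tick
			}
			for _, msg := range msgs {
				const cause = "deadline exceeded"
				if msg.Retried >= msg.Retry {
					_ = r.Kill(msg, cause)
				} else {
					_ = r.Retry(msg, time.Now(), cause)
				}
			}
		}
	}
}
```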
// KEYS[1] -> asynq:servers:<host:pid:sid>
// KEYS[2] -> asynq:servers
// KEYS[3] -> asynq:workers<host:pid:sid>
// KEYS[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
// ARGV[3] -> server info
// ARGV[4:] -> alternate key-value pair of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeServerStateCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1])
redis.call("DEL", KEYS[3])
@@ -380,50 +513,45 @@ redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`)
// WriteServerState writes server state data to redis with expiration set to the value ttl.
func (r *RDB) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
bytes, err := json.Marshal(info)
if err != nil {
return err
}
exp := time.Now().Add(ttl).UTC()
args := []interface{}{float64(exp.Unix()), ttl.Seconds(), bytes} // args to the lua script
for _, w := range workers {
bytes, err := json.Marshal(w)
if err != nil {
continue // skip bad data
}
args = append(args, w.ID, bytes)
}
skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
return writeServerStateCmd.Run(r.client,
[]string{skey, base.AllServers, wkey, base.AllWorkers},
args...).Err()
}
// KEYS[1] -> asynq:servers
// KEYS[2] -> asynq:servers:<host:pid:sid>
// KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers<host:pid:sid>
var clearServerStateCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`)
// ClearServerState deletes server state data from redis.
func (r *RDB) ClearServerState(host string, pid int, serverID string) error {
skey := base.ServerInfoKey(host, pid, serverID)
wkey := base.WorkersKey(host, pid, serverID)
return clearServerStateCmd.Run(r.client,
[]string{base.AllServers, skey, base.AllWorkers, wkey}).Err()
}
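WriteServerState and ClearServerState are intended as a pair: a heartbeat refreshes the keys on an interval comfortably shorter than the TTL, and clears them on shutdown. A minimal sketch; the interval and TTL values are only illustrative:

```go
// Sketch of a heartbeat loop around the two calls above. Errors are ignored
// for brevity; a real loop would log them.
func runHeartbeat(r *rdb.RDB, info *base.ServerInfo, workers func() []*base.WorkerInfo, done <-chan struct{}) {
	const interval = 5 * time.Second
	const ttl = 2 * interval
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			_ = r.ClearServerState(info.Host, info.PID, info.ServerID)
			return
		case <-ticker.C:
			_ = r.WriteServerState(info, workers(), ttl)
		}
	}
}
```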
// CancelationPubSub returns a pubsub for cancelation messages.

File diff suppressed because it is too large

View File

@@ -0,0 +1,190 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package testbroker exports a broker implementation that should be used in package testing.
package testbroker
import (
"errors"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
)
var errRedisDown = errors.New("asynqtest: redis is down")
// TestBroker is a broker implementation that can be used
// to simulate Redis failure in tests.
type TestBroker struct {
mu sync.Mutex
sleeping bool
// real broker
real base.Broker
}
// Make sure TestBroker implements Broker interface at compile time.
var _ base.Broker = (*TestBroker)(nil)
func NewTestBroker(b base.Broker) *TestBroker {
return &TestBroker{real: b}
}
func (tb *TestBroker) Sleep() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = true
}
func (tb *TestBroker) Wakeup() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = false
}
func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Enqueue(msg)
}
func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.EnqueueUnique(msg, ttl)
}
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.Dequeue(qnames...)
}
func (tb *TestBroker) Done(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Done(msg)
}
func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(msg)
}
func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(msg, processAt, ttl)
}
func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(msg, processAt, errMsg)
}
func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Kill(msg, errMsg)
}
func (tb *TestBroker) CheckAndEnqueue() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.CheckAndEnqueue()
}
func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.ListDeadlineExceeded(deadline)
}
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.WriteServerState(info, workers, ttl)
}
func (tb *TestBroker) ClearServerState(host string, pid int, serverID string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ClearServerState(host, pid, serverID)
}
func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.CancelationPubSub()
}
func (tb *TestBroker) PublishCancelation(id string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.PublishCancelation(id)
}
func (tb *TestBroker) Close() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Close()
}
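A sketch of how a test might lean on TestBroker to simulate a Redis outage and recovery; the redis address, DB number, and task type are placeholders:

```go
package example

import (
	"testing"

	"github.com/go-redis/redis/v7"
	h "github.com/hibiken/asynq/internal/asynqtest"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/testbroker"
)

func TestEnqueueWithRedisDown(t *testing.T) {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 14})
	broker := testbroker.NewTestBroker(rdb.NewRDB(client))
	msg := h.NewTaskMessage("send_email", nil)

	broker.Sleep() // simulate Redis going down
	if err := broker.Enqueue(msg); err == nil {
		t.Error("Enqueue should fail while the broker is sleeping")
	}

	broker.Wakeup() // Redis is back; calls are delegated to the real broker again
	if err := broker.Enqueue(msg); err != nil {
		t.Errorf("Enqueue failed after Wakeup: %v", err)
	}
}
```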

View File

@@ -1,35 +0,0 @@
package asynq
import (
"io"
"log"
"os"
)
// global logger used in asynq package.
var logger = newLogger(os.Stderr)
func newLogger(out io.Writer) *asynqLogger {
return &asynqLogger{
log.New(out, "", log.Ldate|log.Ltime|log.Lmicroseconds|log.LUTC),
}
}
type asynqLogger struct {
*log.Logger
}
func (l *asynqLogger) info(format string, args ...interface{}) {
format = "INFO: " + format
l.Printf(format, args...)
}
func (l *asynqLogger) warn(format string, args ...interface{}) {
format = "WARN: " + format
l.Printf(format, args...)
}
func (l *asynqLogger) error(format string, args ...interface{}) {
format = "ERROR: " + format
l.Printf(format, args...)
}

View File

@@ -1,117 +0,0 @@
package asynq
import (
"bytes"
"fmt"
"regexp"
"testing"
)
// regexp for timestamps
const (
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
)
type tester struct {
desc string
message string
wantPattern string // regexp that log output must match
}
func TestLoggerInfo(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.info(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerWarn(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.warn(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerError(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.error(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}

View File

@@ -5,6 +5,7 @@
package asynq
import (
"encoding/json"
"fmt"
"time"
@@ -30,6 +31,19 @@ func (p Payload) Has(key string) bool {
return ok
}
func toInt(v interface{}) (int, error) {
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return int(val), nil
default:
return cast.ToIntE(v)
}
}
// GetString returns a string value if a string type is associated with
// the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) {
@@ -47,7 +61,7 @@ func (p Payload) GetInt(key string) (int, error) {
if !ok {
return 0, &errKeyNotFound{key}
}
return toInt(v)
}
// GetFloat64 returns a float64 value if a numeric type is associated with
@@ -57,7 +71,12 @@ func (p Payload) GetFloat64(key string) (float64, error) {
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
return v.Float64()
default:
return cast.ToFloat64E(v)
}
}
// GetBool returns a boolean value if a boolean type is associated with
@@ -87,7 +106,20 @@ func (p Payload) GetIntSlice(key string) ([]int, error) {
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case []interface{}:
var res []int
for _, elem := range v {
val, err := toInt(elem)
if err != nil {
return nil, err
}
res = append(res, int(val))
}
return res, nil
default:
return cast.ToIntSliceE(v)
}
}
// GetStringMap returns a map of string to empty interface
@@ -131,7 +163,20 @@ func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case map[string]interface{}:
res := make(map[string]int)
for key, val := range v {
ival, err := toInt(val)
if err != nil {
return nil, err
}
res[key] = ival
}
return res, nil
default:
return cast.ToStringMapIntE(v)
}
}
// GetStringMapBool returns a map of string to boolean
@@ -162,5 +207,14 @@ func (p Payload) GetDuration(key string) (time.Duration, error) {
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return time.Duration(val), nil
default:
return cast.ToDurationE(v)
}
}
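The json.Number branches above exist because payloads round-trip through JSON and the decoder hands back json.Number for numeric values; decoding them as float64 would silently corrupt large integers. A standalone illustration of the difference:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"data": 111111111111111111}`)

	// Plain Unmarshal into interface{} turns numbers into float64 and
	// loses precision beyond 2^53.
	var lossy map[string]interface{}
	_ = json.Unmarshal(raw, &lossy)
	fmt.Printf("%.0f\n", lossy["data"].(float64)) // 111111111111111104

	// A decoder with UseNumber keeps the digits as json.Number,
	// which the payload getters convert with Int64/Float64.
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.UseNumber()
	var exact map[string]interface{}
	_ = dec.Decode(&exact)
	n, _ := exact["data"].(json.Number).Int64()
	fmt.Println(n) // 111111111111111111
}
```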

View File

@@ -10,337 +10,626 @@ import (
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
type payloadTest struct {
data map[string]interface{}
key string
nonkey string
}
func TestPayloadString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"name": "gopher"},
key: "name",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetString(tc.nonkey)
if err == nil || got != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
tc.key, got, err)
}
}
}
func TestPayloadInt(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user_id": 42},
key: "user_id",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetInt(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadFloat64(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"pi": 3.14},
key: "pi",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetFloat64(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadBool(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"enabled": true},
key: "enabled",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetBool(tc.nonkey)
if err == nil || got != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
tc.key, got, err)
}
}
}
func TestPayloadStringSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
key: "names",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadIntSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"nums": []int{9, 8, 7}},
key: "nums",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetIntSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetIntSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetIntSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMap(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
key: "user",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMap(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key)
ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
switch val.(type) {
case json.Number:
return true
default:
return false
}
})
diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
tc.key, got, err, tc.data[tc.key], diff)
}
// access non-existent key.
got, err = payload.GetStringMap(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
key: "address",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapString(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapString(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapString(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapStringSlice(t *testing.T) {
favs := map[string][]string{
"movies": {"forrest gump", "star wars"},
"tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
}
tests := []payloadTest{
{
data: map[string]interface{}{"favorites": favs},
key: "favorites",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapInt(t *testing.T) {
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
tests := []payloadTest{
{
data: map[string]interface{}{"counts": counter},
key: "counts",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapInt(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapInt(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapInt(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapBool(t *testing.T) {
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
tests := []payloadTest{
{
data: map[string]interface{}{"features": features},
key: "features",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapBool(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapBool(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapBool(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadTime(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"current": time.Now()},
key: "current",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetTime(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetTime(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetTime(tc.nonkey)
if err == nil || !got.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Time{})
}
}
}
func TestPayloadDuration(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"duration": 15 * time.Minute},
key: "duration",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetDuration(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetDuration(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetDuration(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Duration(0))
}
}
}

View File

@@ -13,14 +13,14 @@ import (
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"golang.org/x/time/rate"
)
type processor struct {
logger *log.Logger
broker base.Broker
handler Handler
@@ -33,6 +33,8 @@ type processor struct {
errHandler ErrorHandler
shutdownTimeout time.Duration
// channel via which to send sync requests to syncer.
syncRequestCh chan<- *syncRequest
@@ -48,42 +50,60 @@ type processor struct {
done chan struct{}
once sync.Once
// abort channel is closed when the shutdown of the "processor" goroutine starts.
abort chan struct{}
// quit channel communicates to the in-flight worker goroutines to stop.
// quit channel is closed when the shutdown of the "processor" goroutine starts.
quit chan struct{}
// abort channel communicates to the in-flight worker goroutines to stop.
abort chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks.
cancelations *base.Cancelations
starting chan<- *base.TaskMessage
finished chan<- *base.TaskMessage
}
type retryDelayFunc func(n int, err error, task *Task) time.Duration
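retryDelayFunc is the hook consulted between attempts; the package's default lives elsewhere. A sketch of a custom exponential backoff capped at one hour, assuming it sits inside the asynq package:

```go
// exponentialBackoff doubles the delay with each retry: 1s, 2s, 4s, ...,
// capping at one hour. n is the number of retries performed so far.
func exponentialBackoff(n int, err error, task *Task) time.Duration {
	if n > 12 { // 2^12 seconds is already over an hour
		return time.Hour
	}
	return time.Duration(1<<uint(n)) * time.Second
}
```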
type processorParams struct {
logger *log.Logger
broker base.Broker
retryDelayFunc retryDelayFunc
syncCh chan<- *syncRequest
cancelations *base.Cancelations
concurrency int
queues map[string]int
strictPriority bool
errHandler ErrorHandler
shutdownTimeout time.Duration
starting chan<- *base.TaskMessage
finished chan<- *base.TaskMessage
}
// newProcessor constructs a new processor.
func newProcessor(params processorParams) *processor {
queues := normalizeQueues(params.queues)
orderedQueues := []string(nil)
if params.strictPriority {
orderedQueues = sortByPriority(queues)
}
return &processor{
logger: params.logger,
broker: params.broker,
queueConfig: queues,
orderedQueues: orderedQueues,
retryDelayFunc: params.retryDelayFunc,
syncRequestCh: params.syncCh,
cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}),
quit: make(chan struct{}),
abort: make(chan struct{}),
errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
starting: params.starting,
finished: params.finished,
}
}
@@ -91,9 +111,9 @@ func newProcessor(r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc,
// It's safe to call this method multiple times.
func (p *processor) stop() {
p.once.Do(func() {
logger.info("Processor shutting down...")
p.logger.Debug("Processor shutting down...")
// Unblock if processor is waiting for sema token.
close(p.quit)
// Signal the processor goroutine to stop processing tasks
// from the queue.
p.done <- struct{}{}
@@ -104,35 +124,24 @@ func (p *processor) stop() {
func (p *processor) terminate() {
p.stop()
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
p.logger.Info("Waiting for all workers to finish...")
// block until all workers have released the token
for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{}
}
logger.info("All workers have finished")
p.restore() // move any unfinished tasks back to the queue.
p.logger.Info("All workers have finished")
}
func (p *processor) start(wg *sync.WaitGroup) {
// NOTE: The call to "restore" needs to complete before starting
// the processor goroutine.
p.restore()
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-p.done:
logger.info("Processor done")
p.logger.Debug("Processor done")
return
default:
p.exec()
@@ -144,51 +153,66 @@ func (p *processor) start(wg *sync.WaitGroup) {
// exec pulls a task out of the queue and starts a worker goroutine to
// process the task.
func (p *processor) exec() {
select {
case <-p.quit:
return
case p.sema <- struct{}{}: // acquire token
qnames := p.queues()
msg, deadline, err := p.broker.Dequeue(qnames...)
switch {
case err == rdb.ErrNoProcessableTask:
p.logger.Debug("All queues are empty")
// Queues are empty, this is a normal behavior.
// Sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: We are not using blocking pop operation and polling queues instead.
// This adds significant load to redis.
time.Sleep(time.Second)
<-p.sema // release token
return
case err != nil:
if p.errLogLimiter.Allow() {
p.logger.Errorf("Dequeue error: %v", err)
}
<-p.sema // release token
return
}
p.starting <- msg
go func() {
defer func() {
p.finished <- msg
<-p.sema // release token
}()
ctx, cancel := createContext(msg, deadline)
p.cancelations.Add(msg.ID.String(), cancel)
defer func() {
cancel()
p.cancelations.Delete(msg.ID.String())
}()
// check context before starting a worker goroutine.
select {
case <-ctx.Done():
// already canceled (e.g. deadline exceeded).
p.retryOrKill(ctx, msg, ctx.Err())
return
default:
}
resCh := make(chan error, 1)
go func() {
resCh <- perform(ctx, NewTask(msg.Type, msg.Payload), p.handler)
}()
select {
case <-p.abort:
// time is up, push the message back to queue and quit this worker goroutine.
p.logger.Warnf("Quitting worker. task id=%s", msg.ID)
p.requeue(msg)
return
case <-ctx.Done():
p.retryOrKill(ctx, msg, ctx.Err())
return
case resErr := <-resCh:
// Note: One of three things should happen.
@@ -196,82 +220,91 @@ func (p *processor) exec() {
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead
if resErr != nil {
p.retryOrKill(ctx, msg, resErr)
return
}
p.markAsDone(ctx, msg)
}
}()
}
}
// restore moves all tasks from "in-progress" back to queue
// to restore all unfinished tasks.
func (p *processor) restore() {
n, err := p.rdb.RequeueAll()
if err != nil {
logger.error("Could not restore unfinished tasks: %v", err)
}
if n > 0 {
logger.info("Restored %d unfinished tasks back to queue", n)
}
}
func (p *processor) requeue(msg *base.TaskMessage) {
err := p.broker.Requeue(msg)
if err != nil {
logger.error("Could not push task id=%s back to queue: %v", msg.ID, err)
p.logger.Errorf("Could not push task id=%s back to queue: %v", msg.ID, err)
} else {
p.logger.Infof("Pushed task id=%s back to queue", msg.ID)
}
}
func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
err := p.broker.Done(msg)
if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue)
logger.warn("%s; Will retry syncing", errMsg)
errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.InProgressQueue, err)
deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Done(msg)
},
errMsg: errMsg,
deadline: deadline,
}
}
}
func (p *processor) retryOrKill(ctx context.Context, msg *base.TaskMessage, err error) {
if p.errHandler != nil {
p.errHandler.HandleError(ctx, NewTask(msg.Type, msg.Payload), err)
}
if msg.Retried >= msg.Retry {
p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
p.kill(ctx, msg, err)
} else {
p.retry(ctx, msg, err)
}
}
func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(d)
err := p.broker.Retry(msg, retryAt, e.Error())
if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
logger.warn("%s; Will retry syncing", errMsg)
deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Retry(msg, retryAt, e.Error())
},
errMsg: errMsg,
deadline: deadline,
}
}
}
func (p *processor) kill(ctx context.Context, msg *base.TaskMessage, e error) {
err := p.broker.Kill(msg, e.Error())
if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
logger.warn("%s; Will retry syncing", errMsg)
deadline, ok := ctx.Deadline()
if !ok {
panic("asynq: internal error: missing deadline in context")
}
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Kill(msg, e.Error())
},
errMsg: errMsg,
deadline: deadline,
}
}
}
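Each failed broker call above is handed to the syncer together with the task's deadline. A rough sketch of the consuming side; the loop shape here is an assumption, only the syncRequest fields come from this file:

```go
// Sketch of the syncer: buffer failed operations and retry them on an
// interval, dropping any request whose deadline has already passed.
func runSyncer(requests <-chan *syncRequest, interval time.Duration, done <-chan struct{}) {
	var pending []*syncRequest
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case req := <-requests:
			pending = append(pending, req)
		case <-ticker.C:
			var remaining []*syncRequest
			for _, req := range pending {
				if time.Now().After(req.deadline) {
					continue // the task's own deadline passed; give up
				}
				if err := req.fn(); err != nil {
					remaining = append(remaining, req) // still failing, keep it
				}
			}
			pending = remaining
		}
	}
}
```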
@@ -294,7 +327,7 @@ func (p *processor) queues() []string {
}
var names []string
for qname, priority := range p.queueConfig {
for i := 0; i < priority; i++ {
names = append(names, qname)
}
}
@@ -358,16 +391,15 @@ func (x byPriority) Len() int { return len(x) }
func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority }
func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
// normalizeQueues divides priority numbers by their greatest common divisor.
func normalizeQueues(queues map[string]int) map[string]int {
var xs []int
for _, x := range queues {
xs = append(xs, x)
}
d := gcd(xs...)
res := make(map[string]int)
for q, x := range queues {
res[q] = x / d
}
return res
@@ -389,16 +421,3 @@ func gcd(xs ...int) int {
}
return res
}
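A concrete example of what the normalization buys, assuming it runs inside this package with fmt imported: the priority ratio is preserved while the name expansion done in queues() stays small.

```go
cfg := map[string]int{"critical": 6, "default": 4, "low": 2}
fmt.Println(normalizeQueues(cfg)) // map[critical:3 default:2 low:1]
// queues() then emits "critical" three times, "default" twice and "low"
// once per round, preserving the 3:2:1 weighting with fewer entries.
```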
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (context.Context, context.CancelFunc) {
timeout, err := time.ParseDuration(msg.Timeout)
if err != nil {
logger.error("cannot parse timeout duration for %+v", msg)
return context.WithCancel(context.Background())
}
if timeout == 0 {
return context.WithCancel(context.Background())
}
return context.WithTimeout(context.Background(), timeout)
}
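The exec path above now calls createContext(msg, deadline), so the string-parsing helper here is the one being replaced. A sketch of what the deadline-aware variant plausibly looks like; this is an assumption, the real helper moved out of this file:

```go
// Sketch: derive the task context from the deadline computed at dequeue time
// instead of re-parsing a timeout string from the message.
func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
	if deadline.IsZero() {
		return context.WithCancel(context.Background())
	}
	return context.WithDeadline(context.Background(), deadline)
}
```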

View File

@@ -19,6 +19,29 @@ import (
"github.com/hibiken/asynq/internal/rdb"
)
// fakeHeartbeater receives from starting and finished channels and do nothing.
func fakeHeartbeater(starting, finished <-chan *base.TaskMessage, done <-chan struct{}) {
for {
select {
case <-starting:
case <-finished:
case <-done:
return
}
}
}
// fakeSyncer receives from sync channel and do nothing.
func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {
for {
select {
case <-syncCh:
case <-done:
return
}
}
}
func TestProcessorSuccess(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
@@ -36,19 +59,16 @@ func TestProcessorSuccess(t *testing.T) {
tests := []struct {
enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run
wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end
}{
{
enqueued: []*base.TaskMessage{m1},
incoming: []*base.TaskMessage{m2, m3, m4},
wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4},
},
{
enqueued: []*base.TaskMessage{},
incoming: []*base.TaskMessage{m1},
wait: time.Second,
wantProcessed: []*Task{t1},
},
}
@@ -66,13 +86,30 @@ func TestProcessorSuccess(t *testing.T) {
processed = append(processed, task)
return nil
}
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
var wg sync.WaitGroup
p.start(&wg)
p.start(&sync.WaitGroup{})
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -80,16 +117,90 @@ func TestProcessorSuccess(t *testing.T) {
t.Fatal(err)
}
}
time.Sleep(tc.wait)
p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed.
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
}
}
// https://github.com/hibiken/asynq/issues/166
func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("large_number", map[string]interface{}{"data": 111111111111111111})
t1 := NewTask(m1.Type, m1.Payload)
tests := []struct {
enqueued []*base.TaskMessage // initial default queue state
wantProcessed []*Task // tasks to be processed at the end
}{
{
enqueued: []*base.TaskMessage{m1},
wantProcessed: []*Task{t1},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue.
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
if data, err := task.Payload.GetInt("data"); err != nil {
t.Errorf("could not get data from payload: %v", err)
} else {
t.Logf("data == %d", data)
}
processed = append(processed, task)
return nil
}
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed.
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmpopts.IgnoreUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
}
}
@@ -104,19 +215,6 @@ func TestProcessorRetry(t *testing.T) {
m4 := h.NewTaskMessage("sync", nil)
errMsg := "something went wrong"
// r* is m* after retry
r1 := *m1
r1.ErrorMsg = errMsg
r2 := *m2
r2.ErrorMsg = errMsg
r2.Retried = m2.Retried + 1
r3 := *m3
r3.ErrorMsg = errMsg
r3.Retried = m3.Retried + 1
r4 := *m4
r4.ErrorMsg = errMsg
r4.Retried = m4.Retried + 1
now := time.Now()
tests := []struct {
@@ -136,13 +234,13 @@ func TestProcessorRetry(t *testing.T) {
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg)
}),
wait: time.Second,
wait: 2 * time.Second,
wantRetry: []h.ZSetEntry{
{Msg: &r2, Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r3, Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r4, Score: float64(now.Add(time.Minute).Unix())},
{Msg: h.TaskMessageAfterRetry(*m2, errMsg), Score: float64(now.Add(time.Minute).Unix())},
{Msg: h.TaskMessageAfterRetry(*m3, errMsg), Score: float64(now.Add(time.Minute).Unix())},
{Msg: h.TaskMessageAfterRetry(*m4, errMsg), Score: float64(now.Add(time.Minute).Unix())},
},
wantDead: []*base.TaskMessage{&r1},
wantDead: []*base.TaskMessage{h.TaskMessageWithError(*m1, errMsg)},
wantErrCount: 4,
},
}
@@ -159,18 +257,33 @@ func TestProcessorRetry(t *testing.T) {
mu sync.Mutex // guards n
n int // number of times error handler is called
)
errHandler := func(t *Task, err error, retried, maxRetry int) {
errHandler := func(ctx context.Context, t *Task, err error) {
mu.Lock()
defer mu.Unlock()
n++
}
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler))
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: delayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: ErrorHandlerFunc(errHandler),
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = tc.handler
var wg sync.WaitGroup
p.start(&wg)
p.start(&sync.WaitGroup{})
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -178,10 +291,10 @@ func TestProcessorRetry(t *testing.T) {
t.Fatal(err)
}
}
time.Sleep(tc.wait)
time.Sleep(tc.wait) // FIXME: This makes test flaky.
p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to second difference in zset score
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to a second difference in zset score
gotRetry := h.GetRetryEntries(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff)
@@ -230,9 +343,25 @@ func TestProcessorQueues(t *testing.T) {
}
for _, tc := range tests {
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false)
p := newProcessor(nil, ps, defaultDelayFunc, nil, cancelations, nil)
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: nil,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: tc.queueCfg,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -297,14 +426,28 @@ func TestProcessorWithStrictPriority(t *testing.T) {
base.DefaultQueueName: 2,
"low": 1,
}
// Note: Set concurrency to 1 to make sure tasks are processed one at a time.
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
p := newProcessor(rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time.
queues: queueCfg,
strictPriority: true,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
var wg sync.WaitGroup
p.start(&wg)
p.start(&sync.WaitGroup{})
time.Sleep(tc.wait)
p.terminate()
@@ -363,3 +506,83 @@ func TestPerform(t *testing.T) {
}
}
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueues(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueues(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueues(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}

recoverer.go (new file, 96 lines)

@@ -0,0 +1,96 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"fmt"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
type recoverer struct {
logger *log.Logger
broker base.Broker
retryDelayFunc retryDelayFunc
// channel to communicate back to the long running "recoverer" goroutine.
done chan struct{}
// poll interval.
interval time.Duration
}
type recovererParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
retryDelayFunc retryDelayFunc
}
func newRecoverer(params recovererParams) *recoverer {
return &recoverer{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
interval: params.interval,
retryDelayFunc: params.retryDelayFunc,
}
}
func (r *recoverer) terminate() {
r.logger.Debug("Recoverer shutting down...")
// Signal the recoverer goroutine to stop polling.
r.done <- struct{}{}
}
func (r *recoverer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(r.interval)
for {
select {
case <-r.done:
r.logger.Debug("Recoverer done")
timer.Stop()
return
case <-timer.C:
// Get all tasks which have expired 30 seconds ago or earlier.
deadline := time.Now().Add(-30 * time.Second)
msgs, err := r.broker.ListDeadlineExceeded(deadline)
if err != nil {
r.logger.Warn("recoverer: could not list deadline exceeded tasks")
continue
}
const errMsg = "deadline exceeded" // TODO: better error message
for _, msg := range msgs {
if msg.Retried >= msg.Retry {
r.kill(msg, errMsg)
} else {
r.retry(msg, errMsg)
}
}
}
}
}()
}
func (r *recoverer) retry(msg *base.TaskMessage, errMsg string) {
delay := r.retryDelayFunc(msg.Retried, fmt.Errorf(errMsg), NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(delay)
if err := r.broker.Retry(msg, retryAt, errMsg); err != nil {
r.logger.Warnf("recoverer: could not retry deadline exceeded task: %v", err)
}
}
func (r *recoverer) kill(msg *base.TaskMessage, errMsg string) {
if err := r.broker.Kill(msg, errMsg); err != nil {
r.logger.Warnf("recoverer: could not move task to dead queue: %v", err)
}
}

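The recoverer above is only a safety net for handlers that keep running after their context has expired; a handler that watches ctx.Done() hands the task back to the processor on its own. A minimal sketch under that assumption (handleExport, the task type name, and the simulated work duration are all made up for illustration):

package main

import (
	"context"
	"time"

	"github.com/hibiken/asynq"
)

// handleExport is a hypothetical handler; the 5-second work duration is made up.
func handleExport(ctx context.Context, t *asynq.Task) error {
	select {
	case <-time.After(5 * time.Second): // simulated work
		return nil
	case <-ctx.Done():
		// The deadline passed or the task was canceled; return promptly so the
		// processor can retry the task rather than the recoverer reclaiming it later.
		return ctx.Err()
	}
}

func main() {
	mux := asynq.NewServeMux()
	mux.HandleFunc("export:csv", handleExport) // hypothetical task type name
	_ = mux
}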
recoverer_test.go (new file, 162 lines)

@@ -0,0 +1,162 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestRecoverer(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
t1 := h.NewTaskMessage("task1", nil)
t2 := h.NewTaskMessage("task2", nil)
t3 := h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := h.NewTaskMessage("task4", nil)
t4.Retried = t4.Retry // t4 has reached its max retry count
now := time.Now()
oneHourFromNow := now.Add(1 * time.Hour)
fiveMinutesFromNow := now.Add(5 * time.Minute)
fiveMinutesAgo := now.Add(-5 * time.Minute)
oneHourAgo := now.Add(-1 * time.Hour)
tests := []struct {
desc string
inProgress []*base.TaskMessage
deadlines []h.ZSetEntry
retry []h.ZSetEntry
dead []h.ZSetEntry
wantInProgress []*base.TaskMessage
wantDeadlines []h.ZSetEntry
wantRetry []*base.TaskMessage
wantDead []*base.TaskMessage
}{
{
desc: "with one task in-progress",
inProgress: []*base.TaskMessage{t1},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(fiveMinutesAgo.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with a task with max-retry reached",
inProgress: []*base.TaskMessage{t4},
deadlines: []h.ZSetEntry{
{Msg: t4, Score: float64(fiveMinutesAgo.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{},
wantDead: []*base.TaskMessage{h.TaskMessageWithError(*t4, "deadline exceeded")},
},
{
desc: "with multiple tasks in-progress, and one expired",
inProgress: []*base.TaskMessage{t1, t2, t3},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(oneHourAgo.Unix())},
{Msg: t2, Score: float64(fiveMinutesFromNow.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{t2, t3},
wantDeadlines: []h.ZSetEntry{
{Msg: t2, Score: float64(fiveMinutesFromNow.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with multiple expired tasks in-progress",
inProgress: []*base.TaskMessage{t1, t2, t3},
deadlines: []h.ZSetEntry{
{Msg: t1, Score: float64(oneHourAgo.Unix())},
{Msg: t2, Score: float64(fiveMinutesAgo.Unix())},
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{t3},
wantDeadlines: []h.ZSetEntry{
{Msg: t3, Score: float64(oneHourFromNow.Unix())},
},
wantRetry: []*base.TaskMessage{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"),
h.TaskMessageAfterRetry(*t2, "deadline exceeded"),
},
wantDead: []*base.TaskMessage{},
},
{
desc: "with empty in-progress queue",
inProgress: []*base.TaskMessage{},
deadlines: []h.ZSetEntry{},
retry: []h.ZSetEntry{},
dead: []h.ZSetEntry{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []h.ZSetEntry{},
wantRetry: []*base.TaskMessage{},
wantDead: []*base.TaskMessage{},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedInProgressQueue(t, r, tc.inProgress)
h.SeedDeadlines(t, r, tc.deadlines)
h.SeedRetryQueue(t, r, tc.retry)
h.SeedDeadQueue(t, r, tc.dead)
recoverer := newRecoverer(recovererParams{
logger: testLogger,
broker: rdbClient,
interval: 1 * time.Second,
retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
})
var wg sync.WaitGroup
recoverer.start(&wg)
time.Sleep(2 * time.Second)
recoverer.terminate()
gotInProgress := h.GetInProgressMessages(t, r)
if diff := cmp.Diff(tc.wantInProgress, gotInProgress, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.InProgressQueue, diff)
}
gotDeadlines := h.GetDeadlinesEntries(t, r)
if diff := cmp.Diff(tc.wantDeadlines, gotDeadlines, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.KeyDeadlines, diff)
}
gotRetry := h.GetRetryMessages(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryQueue, diff)
}
gotDead := h.GetDeadMessages(t, r)
if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.DeadQueue, diff)
}
}
}


@@ -8,37 +8,38 @@ import (
"sync"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
type scheduler struct {
rdb *rdb.RDB
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "scheduler" goroutine.
done chan struct{}
// poll interval on average
avgInterval time.Duration
// list of queues to move the tasks into.
qnames []string
}
func newScheduler(r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler {
var qnames []string
for q := range qcfg {
qnames = append(qnames, q)
}
type schedulerParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
}
func newScheduler(params schedulerParams) *scheduler {
return &scheduler{
rdb: r,
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
avgInterval: avgInterval,
qnames: qnames,
avgInterval: params.interval,
}
}
func (s *scheduler) terminate() {
logger.info("Scheduler shutting down...")
s.logger.Debug("Scheduler shutting down...")
// Signal the scheduler goroutine to stop polling.
s.done <- struct{}{}
}
@@ -51,7 +52,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
for {
select {
case <-s.done:
logger.info("Scheduler done")
s.logger.Debug("Scheduler done")
return
case <-time.After(s.avgInterval):
s.exec()
@@ -61,7 +62,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
}
func (s *scheduler) exec() {
if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil {
logger.error("Could not enqueue scheduled tasks: %v", err)
if err := s.broker.CheckAndEnqueue(); err != nil {
s.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
}
}


@@ -19,7 +19,11 @@ func TestScheduler(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(rdbClient, pollInterval, defaultQueueConfig)
s := newScheduler(schedulerParams{
logger: testLogger,
broker: rdbClient,
interval: pollInterval,
})
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)


@@ -15,7 +15,7 @@ import (
// ServeMux is a multiplexer for asynchronous tasks.
// It matches the type of each task against a list of registered patterns
// and calls the handler for the pattern that most closely matches the
// taks's type name.
// task's type name.
//
// Longer patterns take precedence over shorter ones, so that if there are
// handlers registered for both "images" and "images:thumbnails",
@@ -23,9 +23,10 @@ import (
// "images:thumbnails" and the former will receive tasks with type name beginning
// with "images".
type ServeMux struct {
mu sync.RWMutex
m map[string]muxEntry
es []muxEntry // slice of entries sorted from longest to shortest.
mu sync.RWMutex
m map[string]muxEntry
es []muxEntry // slice of entries sorted from longest to shortest.
mws []MiddlewareFunc
}
type muxEntry struct {
@@ -33,6 +34,11 @@ type muxEntry struct {
pattern string
}
// MiddlewareFunc is a function which receives an asynq.Handler and returns another asynq.Handler.
// Typically, the returned handler is a closure which does something with the context and task passed
// to it, and then calls the handler passed as parameter to the MiddlewareFunc.
type MiddlewareFunc func(Handler) Handler
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux {
return new(ServeMux)
@@ -60,6 +66,9 @@ func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
if h == nil {
h, pattern = NotFoundHandler(), ""
}
for i := len(mux.mws) - 1; i >= 0; i-- {
h = mux.mws[i](h)
}
return h, pattern
}
@@ -130,6 +139,16 @@ func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *T
mux.Handle(pattern, HandlerFunc(handler))
}
// Use appends a MiddlewareFunc to the chain.
// Middlewares are executed in the order that they are applied to the ServeMux.
func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
mux.mu.Lock()
defer mux.mu.Unlock()
for _, fn := range mws {
mux.mws = append(mux.mws, fn)
}
}
// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
return fmt.Errorf("handler not found for task %q", task.Type)

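A short sketch of how the new middleware hook might be used; the logging middleware and the task type name here are hypothetical and not part of this change:

package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// logging is a hypothetical middleware that records how long each task takes.
func logging(next asynq.Handler) asynq.Handler {
	return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
		start := time.Now()
		err := next.ProcessTask(ctx, t)
		log.Printf("task %q took %v (err: %v)", t.Type, time.Since(start), err)
		return err
	})
}

func main() {
	mux := asynq.NewServeMux()
	mux.Use(logging) // runs around every handler registered on this mux
	mux.HandleFunc("email:welcome", func(ctx context.Context, t *asynq.Task) error {
		return nil // hypothetical no-op handler
	})
	// Pass mux to (*Server).Run to start processing tasks.
}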

@@ -7,9 +7,12 @@ package asynq
import (
"context"
"testing"
"github.com/google/go-cmp/cmp"
)
var called string
var called string // identity of the handler that was called.
var invoked []string // list of middlewares in the order they were invoked.
// makeFakeHandler returns a handler that updates the global called variable
// to the given identity.
@@ -20,6 +23,17 @@ func makeFakeHandler(identity string) Handler {
})
}
// makeFakeMiddleware returns a middleware function that appends the given identity
// to the global invoked slice.
func makeFakeMiddleware(identity string) MiddlewareFunc {
return func(next Handler) Handler {
return HandlerFunc(func(ctx context.Context, t *Task) error {
invoked = append(invoked, identity)
return next.ProcessTask(ctx, t)
})
}
}
// A list of pattern, handler pair that is registered with mux.
var serveMuxRegister = []struct {
pattern string
@@ -114,3 +128,43 @@ func TestServeMuxNotFound(t *testing.T) {
}
}
}
var middlewareTests = []struct {
typename string // task's type name
middlewares []string // middlewares to use. They should be called in this order.
want string // identifier of the handler that should be called
}{
{"email:signup", []string{"logging", "expiration"}, "signup email handler"},
{"csv:export", []string{}, "csv export handler"},
{"email:daily", []string{"expiration", "logging"}, "default email handler"},
}
func TestServeMuxMiddlewares(t *testing.T) {
for _, tc := range middlewareTests {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
var mws []MiddlewareFunc
for _, s := range tc.middlewares {
mws = append(mws, makeFakeMiddleware(s))
}
mux.Use(mws...)
invoked = []string{} // reset to empty slice
called = "" // reset to zero value
task := NewTask(tc.typename, nil)
if err := mux.ProcessTask(context.Background(), task); err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(invoked, tc.middlewares); diff != "" {
t.Errorf("invoked middlewares were %v, want %v", invoked, tc.middlewares)
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
}
}
}

server.go (new file, 462 lines)

@@ -0,0 +1,462 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"errors"
"fmt"
"math"
"math/rand"
"runtime"
"strings"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
// Server is responsible for managing the background-task processing.
//
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, the server will
// schedule it for a retry.
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// If a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Server struct {
logger *log.Logger
broker base.Broker
status *base.ServerStatus
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
recoverer *recoverer
}
// Config specifies the server's background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the current process.
Concurrency int
// Function to calculate retry delay for a failed task.
//
// By default, it uses exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with the given priority values. Keys are the names of the
// queues and values are their associated priority values.
//
// If set to nil or not specified, the server will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority are processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
//     func reportError(ctx context.Context, task *asynq.Task, err error) {
//         errorReportingService.Notify(err)
//     }
//
//     ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance.
//
// If unset, default logger is used.
Logger Logger
// LogLevel specifies the minimum log level to enable.
//
// If unset, InfoLevel is used by default.
LogLevel LogLevel
// ShutdownTimeout specifies the duration to wait to let workers finish their tasks
// before forcing them to abort when stopping the server.
//
// If unset or zero, default timeout of 8 seconds is used.
ShutdownTimeout time.Duration
}
// An ErrorHandler handles an error that occurred during task processing.
type ErrorHandler interface {
HandleError(ctx context.Context, task *Task, err error)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as an ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is an ErrorHandler that calls f.
type ErrorHandlerFunc func(ctx context.Context, task *Task, err error)
// HandleError calls fn(ctx, task, err)
func (fn ErrorHandlerFunc) HandleError(ctx context.Context, task *Task, err error) {
fn(ctx, task, err)
}
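With the new signature, a plain function can be adapted via ErrorHandlerFunc and set in the server config. A hedged sketch, where the log call stands in for a real error-tracking service:

package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// reportError is a hypothetical error handler; a real one would notify an error-tracking service.
func reportError(ctx context.Context, task *asynq.Task, err error) {
	log.Printf("task %q failed: %v", task.Type, err)
}

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		Concurrency:  10,
		ErrorHandler: asynq.ErrorHandlerFunc(reportError),
	})
	_ = srv // register a handler and call srv.Run(...) to start processing
}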
// Logger supports logging at various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(args ...interface{})
// Info logs a message at Info level.
Info(args ...interface{})
// Warn logs a message at Warning level.
Warn(args ...interface{})
// Error logs a message at Error level.
Error(args ...interface{})
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
Fatal(args ...interface{})
}
// LogLevel represents logging level.
//
// It satisfies flag.Value interface.
type LogLevel int32
const (
// Note: reserving value zero to differentiate unspecified case.
level_unspecified LogLevel = iota
// DebugLevel is the lowest level of logging.
// Debug logs are intended for debugging and development purposes.
DebugLevel
// InfoLevel is used for general informational log messages.
InfoLevel
// WarnLevel is used for undesired but relatively expected events,
// which may indicate a problem.
WarnLevel
// ErrorLevel is used for undesired and unexpected events that
// the program can recover from.
ErrorLevel
// FatalLevel is used for undesired and unexpected events that
// the program cannot recover from.
FatalLevel
)
// String is part of the flag.Value interface.
func (l *LogLevel) String() string {
switch *l {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warn"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", *l))
}
// Set is part of the flag.Value interface.
func (l *LogLevel) Set(val string) error {
switch strings.ToLower(val) {
case "debug":
*l = DebugLevel
case "info":
*l = InfoLevel
case "warn", "warning":
*l = WarnLevel
case "error":
*l = ErrorLevel
case "fatal":
*l = FatalLevel
default:
return fmt.Errorf("asynq: unsupported log level %q", val)
}
return nil
}
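Because *LogLevel satisfies flag.Value, it can be bound straight to a command-line flag; a minimal sketch (the flag name and Redis address are arbitrary):

package main

import (
	"flag"

	"github.com/hibiken/asynq"
)

func main() {
	var level asynq.LogLevel // zero value is "unspecified"; the server falls back to InfoLevel
	flag.Var(&level, "loglevel", "minimum log level: debug, info, warn, error, fatal")
	flag.Parse()

	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{LogLevel: level})
	_ = srv
}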
func toInternalLogLevel(l LogLevel) log.Level {
switch l {
case DebugLevel:
return log.DebugLevel
case InfoLevel:
return log.InfoLevel
case WarnLevel:
return log.WarnLevel
case ErrorLevel:
return log.ErrorLevel
case FatalLevel:
return log.FatalLevel
}
panic(fmt.Sprintf("asynq: unexpected log level: %v", l))
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
return time.Duration(s) * time.Second
}
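For a rough sense of the schedule this formula produces (the random term r is drawn from [0, 30)): n=0 gives 15 to 44 seconds, n=1 gives 16 to 74 seconds, n=2 gives 31 to 118 seconds, and n=3 gives 96 to 212 seconds before the next attempt.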
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
const defaultShutdownTimeout = 8 * time.Second
// NewServer returns a new Server given a redis connection option
// and background processing configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server {
n := cfg.Concurrency
if n < 1 {
n = runtime.NumCPU()
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout
}
logger := log.NewLogger(cfg.Logger)
loglevel := cfg.LogLevel
if loglevel == level_unspecified {
loglevel = InfoLevel
}
logger.SetLevel(toInternalLogLevel(loglevel))
rdb := rdb.NewRDB(createRedisClient(r))
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
status := base.NewServerStatus(base.StatusIdle)
cancels := base.NewCancelations()
syncer := newSyncer(syncerParams{
logger: logger,
requestsCh: syncCh,
interval: 5 * time.Second,
})
heartbeater := newHeartbeater(heartbeaterParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
status: status,
starting: starting,
finished: finished,
})
scheduler := newScheduler(schedulerParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
})
subscriber := newSubscriber(subscriberParams{
logger: logger,
broker: rdb,
cancelations: cancels,
})
processor := newProcessor(processorParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
syncCh: syncCh,
cancelations: cancels,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
errHandler: cfg.ErrorHandler,
shutdownTimeout: shutdownTimeout,
starting: starting,
finished: finished,
})
recoverer := newRecoverer(recovererParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
interval: 1 * time.Minute,
})
return &Server{
logger: logger,
broker: rdb,
status: status,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
recoverer: recoverer,
}
}
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after a delay.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
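For illustration, a plain function with the right signature can be used wherever a Handler is expected. A small sketch (the handler, task type, and "user_id" payload key are hypothetical):

package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// sendWelcomeEmail is a hypothetical handler; "user_id" is an assumed payload key.
func sendWelcomeEmail(ctx context.Context, t *asynq.Task) error {
	id, err := t.Payload.GetInt("user_id")
	if err != nil {
		return err
	}
	log.Printf("sending welcome email to user %d", id)
	return nil
}

func main() {
	// The plain function adapts to the Handler interface via HandlerFunc.
	var h asynq.Handler = asynq.HandlerFunc(sendWelcomeEmail)
	_ = h
}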
// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")
// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Run(handler Handler) error {
if err := srv.Start(handler); err != nil {
return err
}
srv.waitForSignals()
srv.Stop()
return nil
}
// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers up to the
// concurrency level specified at initialization time.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Start(handler Handler) error {
if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler")
}
switch srv.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the server is already running")
case base.StatusStopped:
return ErrServerStopped
}
srv.status.Set(base.StatusRunning)
srv.processor.handler = handler
srv.logger.Info("Starting processing")
srv.heartbeater.start(&srv.wg)
srv.subscriber.start(&srv.wg)
srv.syncer.start(&srv.wg)
srv.recoverer.start(&srv.wg)
srv.scheduler.start(&srv.wg)
srv.processor.start(&srv.wg)
return nil
}
// Stop stops the worker server.
// It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for the duration specified in Config.ShutdownTimeout.
// If a worker does not finish processing a task within the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() {
switch srv.status.Get() {
case base.StatusIdle, base.StatusStopped:
// server is not running, do nothing and return.
return
}
srv.logger.Info("Starting graceful shutdown")
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels)
srv.scheduler.terminate()
srv.processor.terminate()
srv.recoverer.terminate()
srv.syncer.terminate()
srv.subscriber.terminate()
srv.heartbeater.terminate()
srv.wg.Wait()
srv.broker.Close()
srv.status.Set(base.StatusStopped)
srv.logger.Info("Exiting")
}
// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
srv.logger.Info("Stopping processor")
srv.processor.stop()
srv.status.Set(base.StatusQuiet)
srv.logger.Info("Processor stopped")
}

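Putting the pieces together, a minimal worker program might look like the sketch below; the queue names, priorities, and handler are illustrative only:

package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		Concurrency: 20,
		Queues: map[string]int{
			"critical": 6,
			"default":  3,
			"low":      1,
		},
	})

	mux := asynq.NewServeMux()
	mux.HandleFunc("email:welcome", func(ctx context.Context, t *asynq.Task) error {
		return nil // hypothetical handler
	})

	// Run blocks until a TERM or INT signal arrives, then shuts down gracefully.
	if err := srv.Run(mux); err != nil {
		log.Fatal(err)
	}
}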
server_test.go (new file, 240 lines)

@@ -0,0 +1,240 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"syscall"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
"go.uber.org/goleak"
)
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
c := NewClient(r)
srv := NewServer(r, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
_, err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
srv.Stop()
}
func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
done := make(chan struct{})
// Make sure server exits when receiving TERM signal.
go func() {
time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{}
}()
go func() {
select {
case <-time.After(10 * time.Second):
t.Fatal("server did not stop after receiving TERM signal")
case <-done:
}
}()
mux := NewServeMux()
if err := srv.Run(mux); err != nil {
t.Fatal(err)
}
}
func TestServerErrServerStopped(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
srv.Stop()
err := srv.Start(handler)
if err != ErrServerStopped {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
}
}
func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
err := srv.Start(nil)
if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop()
}
}
func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
err := srv.Start(handler)
if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error")
}
srv.Stop()
}
func TestServerWithRedisDown(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
testBroker.Sleep()
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestServerWithFlakyBroker(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: redisAddr, DB: redisDB}, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
h := func(ctx context.Context, task *Task) error {
// force task retry.
if task.Type == "bad_task" {
return fmt.Errorf("could not process %q", task.Type)
}
time.Sleep(2 * time.Second)
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
for i := 0; i < 10; i++ {
_, err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
if err != nil {
t.Fatal(err)
}
_, err = c.Enqueue(NewTask("bad_task", nil))
if err != nil {
t.Fatal(err)
}
_, err = c.EnqueueIn(time.Duration(i)*time.Second, NewTask("scheduled", nil))
if err != nil {
t.Fatal(err)
}
}
// simulate redis going down.
testBroker.Sleep()
time.Sleep(3 * time.Second)
// simulate redis comes back online.
testBroker.Wakeup()
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestLogLevel(t *testing.T) {
tests := []struct {
flagVal string
want LogLevel
wantStr string
}{
{"debug", DebugLevel, "debug"},
{"Info", InfoLevel, "info"},
{"WARN", WarnLevel, "warn"},
{"warning", WarnLevel, "warn"},
{"Error", ErrorLevel, "error"},
{"fatal", FatalLevel, "fatal"},
}
for _, tc := range tests {
level := new(LogLevel)
if err := level.Set(tc.flagVal); err != nil {
t.Fatal(err)
}
if *level != tc.want {
t.Errorf("Set(%q): got %v, want %v", tc.flagVal, level, &tc.want)
continue
}
if got := level.String(); got != tc.wantStr {
t.Errorf("String() returned %q, want %q", got, tc.wantStr)
}
}
}

signals_unix.go (new file, 30 lines)

@@ -0,0 +1,30 @@
// +build linux bsd darwin
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/unix"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM, SIGINT, and SIGTSTP.
// SIGTERM and SIGINT will signal the process to exit.
// SIGTSTP will signal the process to stop processing new tasks.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TSTP to stop processing new tasks")
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
for {
sig := <-sigs
if sig == unix.SIGTSTP {
srv.Quiet()
continue
}
break
}
}

signals_windows.go (new file, 22 lines)

@@ -0,0 +1,22 @@
// +build windows
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/windows"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM and SIGINT.
// SIGTERM and SIGINT will signal the process to exit.
//
// Note: Currently SIGTSTP is not supported for windows build.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}


@@ -6,50 +6,78 @@ package asynq
import (
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/log"
)
type subscriber struct {
rdb *rdb.RDB
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "subscriber" goroutine.
done chan struct{}
// cancelations hold cancel functions for all in-progress tasks.
cancelations *base.Cancelations
// time to wait before retrying to connect to redis.
retryTimeout time.Duration
}
func newSubscriber(rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber {
type subscriberParams struct {
logger *log.Logger
broker base.Broker
cancelations *base.Cancelations
}
func newSubscriber(params subscriberParams) *subscriber {
return &subscriber{
rdb: rdb,
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
cancelations: cancelations,
cancelations: params.cancelations,
retryTimeout: 5 * time.Second,
}
}
func (s *subscriber) terminate() {
logger.info("Subscriber shutting down...")
s.logger.Debug("Subscriber shutting down...")
// Signal the subscriber goroutine to stop.
s.done <- struct{}{}
}
func (s *subscriber) start(wg *sync.WaitGroup) {
pubsub, err := s.rdb.CancelationPubSub()
cancelCh := pubsub.Channel()
if err != nil {
logger.error("cannot subscribe to cancelation channel: %v", err)
return
}
wg.Add(1)
go func() {
defer wg.Done()
var (
pubsub *redis.PubSub
err error
)
// Try until successfully connect to Redis.
for {
pubsub, err = s.broker.CancelationPubSub()
if err != nil {
s.logger.Errorf("cannot subscribe to cancelation channel: %v", err)
select {
case <-time.After(s.retryTimeout):
continue
case <-s.done:
s.logger.Debug("Subscriber done")
return
}
}
break
}
cancelCh := pubsub.Channel()
for {
select {
case <-s.done:
pubsub.Close()
logger.info("Subscriber done")
s.logger.Debug("Subscriber done")
return
case msg := <-cancelCh:
cancel, ok := s.cancelations.Get(msg.Payload)


@@ -11,6 +11,7 @@ import (
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
)
func TestSubscriber(t *testing.T) {
@@ -37,16 +38,23 @@ func TestSubscriber(t *testing.T) {
cancelations := base.NewCancelations()
cancelations.Add(tc.registeredID, fakeCancelFunc)
subscriber := newSubscriber(rdbClient, cancelations)
subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: rdbClient,
cancelations: cancelations,
})
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
// wait for subscriber to establish connection to pubsub channel
time.Sleep(time.Second)
if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
subscriber.terminate()
t.Fatalf("could not publish cancelation message: %v", err)
}
// allow for redis to publish message
// wait for redis to publish message
time.Sleep(time.Second)
mu.Lock()
@@ -58,7 +66,57 @@ func TestSubscriber(t *testing.T) {
}
}
mu.Unlock()
subscriber.terminate()
}
}
func TestSubscriberWithRedisDown(t *testing.T) {
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
cancelations := base.NewCancelations()
subscriber := newSubscriber(subscriberParams{
logger: testLogger,
broker: testBroker,
cancelations: cancelations,
})
subscriber.retryTimeout = 1 * time.Second // set shorter retry timeout for testing purpose.
testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.
testBroker.Wakeup() // simulate a situation where redis server is back online.
time.Sleep(2 * time.Second) // allow subscriber to establish pubsub channel.
const id = "test"
var (
mu sync.Mutex
called bool
)
cancelations.Add(id, func() {
mu.Lock()
defer mu.Unlock()
called = true
})
if err := r.PublishCancelation(id); err != nil {
t.Fatalf("could not publish cancelation message: %v", err)
}
time.Sleep(time.Second) // wait for redis to publish message.
mu.Lock()
if !called {
t.Errorf("cancel function was not called")
}
mu.Unlock()
}


@@ -7,11 +7,15 @@ package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/log"
)
// syncer is responsible for queuing up failed requests to redis and retrying
// those requests to sync state between the background process and redis.
type syncer struct {
logger *log.Logger
requestsCh <-chan *syncRequest
// channel to communicate back to the long running "syncer" goroutine.
@@ -22,20 +26,28 @@ type syncer struct {
}
type syncRequest struct {
fn func() error // sync operation
errMsg string // error message
fn func() error // sync operation
errMsg string // error message
deadline time.Time // request should be dropped if deadline has been exceeded
}
func newSyncer(requestsCh <-chan *syncRequest, interval time.Duration) *syncer {
type syncerParams struct {
logger *log.Logger
requestsCh <-chan *syncRequest
interval time.Duration
}
func newSyncer(params syncerParams) *syncer {
return &syncer{
requestsCh: requestsCh,
logger: params.logger,
requestsCh: params.requestsCh,
done: make(chan struct{}),
interval: interval,
interval: params.interval,
}
}
func (s *syncer) terminate() {
logger.info("Syncer shutting down...")
s.logger.Debug("Syncer shutting down...")
// Signal the syncer goroutine to stop.
s.done <- struct{}{}
}
@@ -51,16 +63,19 @@ func (s *syncer) start(wg *sync.WaitGroup) {
// Try sync one last time before shutting down.
for _, req := range requests {
if err := req.fn(); err != nil {
logger.error(req.errMsg)
s.logger.Error(req.errMsg)
}
}
logger.info("Syncer done")
s.logger.Debug("Syncer done")
return
case req := <-s.requestsCh:
requests = append(requests, req)
case <-time.After(s.interval):
var temp []*syncRequest
for _, req := range requests {
if req.deadline.Before(time.Now()) {
continue // drop stale request
}
if err := req.fn(); err != nil {
temp = append(temp, req)
}


@@ -27,7 +27,11 @@ func TestSyncer(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
@@ -38,6 +42,7 @@ func TestSyncer(t *testing.T) {
fn: func() error {
return rdbClient.Done(m)
},
deadline: time.Now().Add(5 * time.Minute),
}
}
@@ -52,7 +57,11 @@ func TestSyncer(t *testing.T) {
func TestSyncerRetry(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
@@ -77,8 +86,9 @@ func TestSyncerRetry(t *testing.T) {
}
syncRequestCh <- &syncRequest{
fn: requestFunc,
errMsg: "error",
fn: requestFunc,
errMsg: "error",
deadline: time.Now().Add(5 * time.Minute),
}
// allow syncer to retry
@@ -90,3 +100,41 @@ func TestSyncerRetry(t *testing.T) {
}
mu.Unlock()
}
func TestSyncerDropsStaleRequests(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
var (
mu sync.Mutex
n int // number of times request has been processed
)
for i := 0; i < 10; i++ {
syncRequestCh <- &syncRequest{
fn: func() error {
mu.Lock()
n++
mu.Unlock()
return nil
},
deadline: time.Now().Add(time.Duration(-i) * time.Second), // already exceeded deadline
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
syncer.terminate()
mu.Lock()
if n != 0 {
t.Errorf("requests has been processed %d times, want 0", n)
}
mu.Unlock()
}


@@ -1,6 +1,6 @@
# Asynqmon
# Asynq CLI
Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
Asynq CLI is a command line tool to monitor the tasks managed by `asynq` package.
## Table of Contents
@@ -8,31 +8,32 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Process Status](#process-status)
- [Servers](#servers)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Pause](#pause)
- [Config File](#config-file)
## Installation
In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynqmon
go get github.com/hibiken/asynq/tools/asynq
This will create the asynqmon executable under your `$GOPATH/bin` directory.
This will create the asynq executable under your `$GOPATH/bin` directory.
## Quickstart
The tool has a few commands to inspect the state of tasks and queues.
Run `asynqmon help` to see all the available commands.
Run `asynq help` to see all the available commands.
Asynqmon needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
By default, Asynqmon will try to connect to a redis server running at `localhost:6379`.
By default, CLI will try to connect to a redis server running at `localhost:6379`.
### Stats
@@ -40,11 +41,11 @@ Stats command gives the overview of the current state of tasks and queues. You c
Example:
watch -n 3 asynqmon stats
watch -n 3 asynq stats
This will run `asynqmon stats` command every 3 seconds.
This will run `asynq stats` command every 3 seconds.
![Gif](/docs/assets/asynqmon_stats.gif)
![Gif](/docs/assets/asynq_stats.gif)
### History
@@ -54,19 +55,17 @@ By default, it shows the stats from the last 10 days. Use `--days` to specify th
Example:
asynqmon history --days=30
asynq history --days=30
![Gif](/docs/assets/asynqmon_history.gif)
![Gif](/docs/assets/asynq_history.gif)
### Process Status
### Servers
PS (ProcessStatus) command shows the list of running worker processes.
Servers command shows the list of running worker servers pulling tasks from the given redis instance.
Example:
asynqmon ps
![Gif](/docs/assets/asynqmon_ps.gif)
asynq servers
### List
@@ -74,11 +73,11 @@ List command shows all tasks in the specified state in a table format
Example:
asynqmon ls retry
asynqmon ls scheduled
asynqmon ls dead
asynqmon ls enqueued:default
asynqmon ls inprogress
asynq ls retry
asynq ls scheduled
asynq ls dead
asynq ls enqueued:default
asynq ls inprogress
### Enqueue
@@ -88,13 +87,13 @@ Command `enq` takes a task ID and moves the task to **Enqueued** state. You can
Example:
asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
asynq enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example:
asynqmon enqall retry
asynq enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state.
@@ -106,13 +105,13 @@ Command `del` takes a task ID and deletes the task. You can obtain the task ID b
Example:
asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
asynq del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynqmon delall retry
asynq delall retry
Running the above command will delete all **Retry** tasks.
@@ -124,13 +123,13 @@ Command `kill` takes a task ID and kills the task. You can obtain the task ID by
Example:
asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
asynq kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynqmon killall retry
asynq killall retry
Running the above command will move all **Retry** tasks to **Dead** state.
@@ -144,15 +143,26 @@ Handler implementation needs to be context aware in order to actually stop proce
Example:
asynqmon cancel bnogo8gt6toe23vhef0g
asynq cancel bnogo8gt6toe23vhef0g
### Pause
Command `pause` pauses the specified queue. Tasks in paused queues are not processed by servers.
To resume processing from the queue, use the `unpause` command.
To see which queues are currently paused, use the `stats` command.
Example:
asynq pause email
asynq unpause email
## Config File
You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server.
By default, `asynqmon` will try to read config file located in
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag.
By default, `asynq` will try to read config file located in
`$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag.
Config file example:

View File

@@ -18,17 +18,17 @@ import (
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.
Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing.
Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
Example: asynq cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}

View File

@@ -18,13 +18,13 @@ import (
var delCmd = &cobra.Command{
Use: "del [task id]",
Short: "Deletes a task given an identifier",
Long: `Del (asynqmon del) will delete a task given an identifier.
Long: `Del (asynq del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: del,
}

View File

@@ -20,11 +20,11 @@ var delallValidArgs = []string{"scheduled", "retry", "dead"}
var delallCmd = &cobra.Command{
Use: "delall [state]",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
Long: `Delall (asynq delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
Example: asynqmon delall dead -> Deletes all dead tasks`,
Example: asynq delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: delall,
@@ -60,7 +60,7 @@ func delall(cmd *cobra.Command, args []string) {
case "dead":
err = r.DeleteAllDeadTasks()
default:
fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs)
fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1)
}
if err != nil {

View File

@@ -18,16 +18,16 @@ import (
var enqCmd = &cobra.Command{
Use: "enq [task id]",
Short: "Enqueues a task given an identifier",
Long: `Enq (asynqmon enq) will enqueue a task given an identifier.
Long: `Enq (asynq enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.
The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor.
Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: enq,
}

View File

@@ -20,14 +20,14 @@ var enqallValidArgs = []string{"scheduled", "retry", "dead"}
var enqallCmd = &cobra.Command{
Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state.
Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as they
get dequeued by a processor.
Example: asynqmon enqall dead -> Enqueues all dead tasks`,
Example: asynq enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: enqall,
@@ -64,7 +64,7 @@ func enqall(cmd *cobra.Command, args []string) {
case "dead":
n, err = r.EnqueueAllDeadTasks()
default:
fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1)
}
if err != nil {

View File

@@ -22,12 +22,12 @@ var days int
var historyCmd = &cobra.Command{
Use: "history",
Short: "Shows historical aggregate data",
Long: `History (asynqmon history) will show the number of processed and failed tasks
Long: `History (asynq history) will show the number of processed and failed tasks
from the last x days.
By default, it will show the data from the last 10 days.
Example: asynqmon history -x=30 -> Shows stats from the last 30 days`,
Example: asynq history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs,
Run: history,
}

View File

@@ -18,13 +18,13 @@ import (
var killCmd = &cobra.Command{
Use: "kill [task id]",
Short: "Kills a task given an identifier",
Long: `Kill (asynqmon kill) will put a task in dead state given an identifier.
Long: `Kill (asynq kill) will put a task in dead state given an identifier.
The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.
Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq kill r:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: kill,
}

View File

@@ -20,11 +20,11 @@ var killallValidArgs = []string{"scheduled", "retry"}
var killallCmd = &cobra.Command{
Use: "killall [state]",
Short: "Kills all tasks in the specified state",
Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state.
Long: `Killall (asynq killall) will update all tasks from the specified state to dead state.
The argument should be either "scheduled" or "retry".
Example: asynqmon killall retry -> Update all retry tasks to dead tasks`,
Example: asynq killall retry -> Update all retry tasks to dead tasks`,
ValidArgs: killallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: killall,
@@ -59,7 +59,7 @@ func killall(cmd *cobra.Command, args []string) {
case "retry":
n, err = r.KillAllRetryTasks()
default:
fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs)
fmt.Printf("error: `asynq killall [state]` only accepts %v as the argument.\n", killallValidArgs)
os.Exit(1)
}
if err != nil {

View File

@@ -13,8 +13,8 @@ import (
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
@@ -25,19 +25,19 @@ var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead
var lsCmd = &cobra.Command{
Use: "ls [state]",
Short: "Lists tasks in the specified state",
Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format.
Long: `Ls (asynq ls) will list all tasks in the specified state in a table format.
The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead".
Example:
asynqmon ls dead -> Lists all tasks in dead state
asynq ls dead -> Lists all tasks in dead state
Enqueued tasks require a queue name after ":"
Example:
asynqmon ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue
asynq ls enqueued:default -> List tasks from default queue
asynq ls enqueued:critical -> List tasks from critical queue
`,
Args: cobra.ExactValidArgs(1),
Run: ls,
@@ -72,7 +72,7 @@ func ls(cmd *cobra.Command, args []string) {
switch parts[0] {
case "enqueued":
if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
fmt.Printf("error: Missing queue name\n`asynq ls enqueued:[queue name]`\n")
os.Exit(1)
}
listEnqueued(r, parts[1])
@@ -85,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
case "dead":
listDead(r)
default:
fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
fmt.Printf("error: `asynq ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1)
}
}
@@ -93,7 +93,7 @@ func ls(cmd *cobra.Command, args []string) {
// queryID returns an identifier used for "enq" command.
// score is the zset score and queryType should be one
// of "s", "r" or "d" (scheduled, retry, dead respectively).
func queryID(id xid.ID, score int64, qtype string) string {
func queryID(id uuid.UUID, score int64, qtype string) string {
const format = "%v:%v:%v"
return fmt.Sprintf(format, qtype, score, id)
}
@@ -101,22 +101,22 @@ func queryID(id xid.ID, score int64, qtype string) string {
// parseQueryID is a reverse operation of queryID function.
// It takes a queryID and return each part of id with proper
// type if valid, otherwise it reports an error.
func parseQueryID(queryID string) (id xid.ID, score int64, qtype string, err error) {
func parseQueryID(queryID string) (id uuid.UUID, score int64, qtype string, err error) {
parts := strings.Split(queryID, ":")
if len(parts) != 3 {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
return uuid.Nil, 0, "", fmt.Errorf("invalid id")
}
id, err = xid.FromString(parts[2])
id, err = uuid.Parse(parts[2])
if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
return uuid.Nil, 0, "", fmt.Errorf("invalid id")
}
score, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
return uuid.Nil, 0, "", fmt.Errorf("invalid id")
}
qtype = parts[0]
if len(qtype) != 1 || !strings.Contains("srd", qtype) {
return xid.NilID(), 0, "", fmt.Errorf("invalid id")
return uuid.Nil, 0, "", fmt.Errorf("invalid id")
}
return id, score, qtype, nil
}

212
tools/asynq/cmd/migrate.go Normal file
View File

@@ -0,0 +1,212 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"encoding/json"
"fmt"
"os"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cast"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// migrateCmd represents the migrate command
var migrateCmd = &cobra.Command{
Use: "migrate",
Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq@%s", base.Version),
Long: fmt.Sprintf("Migrate (asynq migrate) will convert all tasks in redis to be compatible with asynq@%s.", base.Version),
Run: migrate,
}
func init() {
rootCmd.AddCommand(migrateCmd)
}
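// migrate connects to redis using the CLI's connection flags and rewrites the
// in-progress list, every registered queue list, and the scheduled, retry, and
// dead sorted sets into the new task message encoding.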
func migrate(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
lists := []string{base.InProgressQueue}
allQueues, err := c.SMembers(base.AllQueues).Result()
if err != nil {
fmt.Printf("error: could not read all queues: %v", err)
os.Exit(1)
}
lists = append(lists, allQueues...)
for _, key := range lists {
if err := migrateList(c, key); err != nil {
fmt.Printf("error: %v", err)
os.Exit(1)
}
}
zsets := []string{base.ScheduledQueue, base.RetryQueue, base.DeadQueue}
for _, key := range zsets {
if err := migrateZSet(c, key); err != nil {
fmt.Printf("error: %v", err)
os.Exit(1)
}
}
}
type oldTaskMessage struct {
// Unchanged
Type string
Payload map[string]interface{}
ID uuid.UUID
Queue string
Retry int
Retried int
ErrorMsg string
UniqueKey string
// Following fields have changed.
// Timeout specifies the timeout duration for the task.
// The string should be parseable by time.ParseDuration.
//
// A zero duration means no timeout is set.
Timeout string
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
}
var defaultTimeout = 30 * time.Minute
func convertMessage(old *oldTaskMessage) (*base.TaskMessage, error) {
timeout, err := time.ParseDuration(old.Timeout)
if err != nil {
return nil, fmt.Errorf("could not parse Timeout field of %+v", old)
}
deadline, err := time.Parse(time.RFC3339, old.Deadline)
if err != nil {
return nil, fmt.Errorf("could not parse Deadline field of %+v", old)
}
if timeout == 0 && deadline.IsZero() {
timeout = defaultTimeout
}
if deadline.IsZero() {
// Zero value used to be time.Time{},
// in the new schema zero value is represented by
// zero in Unix time.
deadline = time.Unix(0, 0)
}
return &base.TaskMessage{
Type: old.Type,
Payload: old.Payload,
ID: uuid.New(),
Queue: old.Queue,
Retry: old.Retry,
Retried: old.Retried,
ErrorMsg: old.ErrorMsg,
UniqueKey: old.UniqueKey,
Timeout: int64(timeout.Seconds()),
Deadline: deadline.Unix(),
}, nil
}
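// Illustration (hypothetical values): an old message with Timeout "10m" and
// Deadline "2020-07-06T12:00:00Z" converts to Timeout=600 (seconds) and
// Deadline=1594036800 (Unix seconds). If neither is set, Timeout falls back
// to defaultTimeout and Deadline becomes time.Unix(0, 0).

// deserialize decodes s as a task message in the pre-v0.10.0 format and,
// if that fails, as a message already in the current format.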
func deserialize(s string) (*base.TaskMessage, error) {
// Try deserializing as old message.
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var old *oldTaskMessage
if err := d.Decode(&old); err != nil {
// Try deserializing as new message.
d = json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg *base.TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, fmt.Errorf("could not deserialize %s into task message: %v", s, err)
}
return msg, nil
}
return convertMessage(old)
}
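// migrateZSet re-encodes every member of the sorted set at key, keeping the
// original set at key+":backup" while the new members are written and then
// deleting the backup.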
func migrateZSet(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []*redis.Z
for _, z := range res {
s, err := cast.ToStringE(z.Member)
if err != nil {
return fmt.Errorf("could not cast to string: %v", err)
}
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, &redis.Z{Score: z.Score, Member: encoded})
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.ZAdd(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}
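// migrateList does the same for a redis list: re-encode each element, keep the
// original list at key+":backup" during the rewrite, then drop the backup.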
func migrateList(c *redis.Client, key string) error {
if c.Exists(key).Val() == 0 {
// skip if key doesn't exist.
return nil
}
res, err := c.LRange(key, 0, -1).Result()
if err != nil {
return err
}
var msgs []interface{}
for _, s := range res {
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, encoded)
}
if err := c.Rename(key, key+":backup").Err(); err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
if err := c.LPush(key, msgs...).Err(); err != nil {
return fmt.Errorf("could not write new messages to %q: %v", key, err)
}
if err := c.Del(key + ":backup").Err(); err != nil {
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
}
return nil
}

47
tools/asynq/cmd/pause.go Normal file
View File

@@ -0,0 +1,47 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// pauseCmd represents the pause command
var pauseCmd = &cobra.Command{
Use: "pause [queue name]",
Short: "Pauses the specified queue",
Long: `Pause (asynq pause) will pause the specified queue.
Asynq servers will not process tasks from paused queues.
Use the "unpause" command to resume a paused queue.
Example: asynq pause default -> Pause the "default" queue`,
Args: cobra.ExactValidArgs(1),
Run: pause,
}
func init() {
rootCmd.AddCommand(pauseCmd)
}
func pause(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.Pause(args[0])
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully paused queue %q\n", args[0])
}

View File

@@ -18,11 +18,11 @@ import (
var rmqCmd = &cobra.Command{
Use: "rmq [queue name]",
Short: "Removes the specified queue",
Long: `Rmq (asynqmon rmq) will remove the specified queue.
Long: `Rmq (asynq rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty.
Use --force option to override this behavior.
Example: asynqmon rmq low -> Removes "low" queue`,
Example: asynq rmq low -> Removes "low" queue`,
Args: cobra.ExactValidArgs(1),
Run: rmq,
}
@@ -44,7 +44,7 @@ func rmq(cmd *cobra.Command, args []string) {
err := r.RemoveQueue(args[0], rmqForce)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0])
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq rmq --force %s'\n", err, args[0])
os.Exit(1)
}
fmt.Printf("error: %v", err)

View File

@@ -11,6 +11,7 @@ import (
"strings"
"text/tabwriter"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir"
@@ -26,9 +27,20 @@ var password string
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
Use: "asynqmon",
Short: "A monitoring tool for asynq queues",
Long: `Asynqmon is a montoring CLI to inspect tasks and queues managed by asynq.`,
Use: "asynq",
Short: "A monitoring tool for asynq queues",
Long: `Asynq is a monitoring CLI to inspect tasks and queues managed by asynq.`,
Version: base.Version,
}
var versionOutput = fmt.Sprintf("asynq version %s\n", base.Version)
var versionCmd = &cobra.Command{
Use: "version",
Hidden: true,
Run: func(cmd *cobra.Command, args []string) {
fmt.Print(versionOutput)
},
}
// Execute adds all child commands to the root command and sets flags appropriately.
@@ -43,7 +55,10 @@ func Execute() {
func init() {
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynqmon.yaml)")
rootCmd.AddCommand(versionCmd)
rootCmd.SetVersionTemplate(versionOutput)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag default values (default is $HOME/.asynq.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
@@ -65,9 +80,9 @@ func initConfig() {
os.Exit(1)
}
// Search config in home directory with name ".asynqmon" (without extension).
// Search config in home directory with name ".asynq" (without extension).
viper.AddConfigPath(home)
viper.SetConfigName(".asynqmon")
viper.SetConfigName(".asynq")
}
viper.AutomaticEnv() // read in environment variables that match

View File

@@ -18,64 +18,64 @@ import (
"github.com/spf13/viper"
)
// psCmd represents the ps command
var psCmd = &cobra.Command{
Use: "ps",
Short: "Shows all background worker processes",
Long: `Ps (asynqmon ps) will show all background worker processes
backed by the specified redis instance.
// serversCmd represents the servers command
var serversCmd = &cobra.Command{
Use: "servers",
Short: "Shows all running worker servers",
Long: `Servers (asynq servers) will show all running worker servers
pulling tasks from the specified redis instance.
The command shows the following for each process:
* Host and PID of the process
The command shows the following for each server:
* Host and PID of the process in which the server is running
* Number of active workers out of worker pool
* Queue configuration
* State of the worker process ("running" | "stopped")
* Time the process was started
* State of the worker server ("running" | "quiet")
* Time the server was started
A "running" process is processing tasks in queues.
A "stopped" process is no longer processing new tasks.`,
A "running" server is pulling tasks from queues and processing them.
A "quiet" server is no longer pulling new tasks from queues`,
Args: cobra.NoArgs,
Run: ps,
Run: servers,
}
func init() {
rootCmd.AddCommand(psCmd)
rootCmd.AddCommand(serversCmd)
}
func ps(cmd *cobra.Command, args []string) {
func servers(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
processes, err := r.ListProcesses()
servers, err := r.ListServers()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(processes) == 0 {
fmt.Println("No processes")
if len(servers) == 0 {
fmt.Println("No running servers")
return
}
// sort by hostname and pid
sort.Slice(processes, func(i, j int) bool {
x, y := processes[i], processes[j]
sort.Slice(servers, func(i, j int) bool {
x, y := servers[i], servers[j]
if x.Host != y.Host {
return x.Host < y.Host
}
return x.PID < y.PID
})
// print processes
// print server info
cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
printRows := func(w io.Writer, tmpl string) {
for _, ps := range processes {
for _, info := range servers {
fmt.Fprintf(w, tmpl,
ps.Host, ps.PID, ps.Status,
fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency),
formatQueues(ps.Queues), timeAgo(ps.Started))
info.Host, info.PID, info.Status,
fmt.Sprintf("%d/%d", info.ActiveWorkerCount, info.Concurrency),
formatQueues(info.Queues), timeAgo(info.Started))
}
}
printTable(cols, printRows)

View File

@@ -7,7 +7,6 @@ package cmd
import (
"fmt"
"os"
"sort"
"strconv"
"strings"
"text/tabwriter"
@@ -33,7 +32,7 @@ Specifically, the command shows the following:
To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command.
Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`,
Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
Args: cobra.NoArgs,
Run: stats,
}
@@ -96,24 +95,31 @@ func printStates(s *rdb.Stats) {
tw.Flush()
}
func printQueues(queues map[string]int) {
var qnames, seps, counts []string
for q := range queues {
qnames = append(qnames, strings.Title(q))
func printQueues(queues []*rdb.Queue) {
var headers, seps, counts []string
for _, q := range queues {
title := queueTitle(q)
headers = append(headers, title)
seps = append(seps, strings.Repeat("-", len(title)))
counts = append(counts, strconv.Itoa(q.Size))
}
sort.Strings(qnames) // sort for stable order
for _, q := range qnames {
seps = append(seps, strings.Repeat("-", len(q)))
counts = append(counts, strconv.Itoa(queues[strings.ToLower(q)]))
}
format := strings.Repeat("%v\t", len(qnames)) + "\n"
format := strings.Repeat("%v\t", len(headers)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, toInterfaceSlice(qnames)...)
fmt.Fprintf(tw, format, toInterfaceSlice(headers)...)
fmt.Fprintf(tw, format, toInterfaceSlice(seps)...)
fmt.Fprintf(tw, format, toInterfaceSlice(counts)...)
tw.Flush()
}
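// queueTitle returns the column header for a queue, appending " (Paused)" when
// the queue is currently paused.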
func queueTitle(q *rdb.Queue) string {
var b strings.Builder
b.WriteString(strings.Title(q.Name))
if q.Paused {
b.WriteString(" (Paused)")
}
return b.String()
}
func printStats(s *rdb.Stats) {
format := strings.Repeat("%v\t", 3) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)

View File

@@ -0,0 +1,46 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// unpauseCmd represents the unpause command
var unpauseCmd = &cobra.Command{
Use: "unpause [queue name]",
Short: "Unpauses the specified queue",
Long: `Unpause (asynq unpause) will unpause the specified queue.
Asynq servers will process tasks from unpaused/resumed queues.
Example: asynq unpause default -> Resume the "default" queue`,
Args: cobra.ExactValidArgs(1),
Run: unpause,
}
func init() {
rootCmd.AddCommand(unpauseCmd)
}
func unpause(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.Unpause(args[0])
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully resumed queue %q\n", args[0])
}

View File

@@ -20,9 +20,9 @@ import (
var workersCmd = &cobra.Command{
Use: "workers",
Short: "Shows all running workers information",
Long: `Workers (asynqmon workers) will show all running workers information.
Long: `Workers (asynq workers) will show all running workers information.
The command shows the follwoing for each worker:
The command shows the following for each worker:
* Process in which the worker is running
* ID of the task worker is processing
* Type of the task worker is processing
@@ -61,7 +61,7 @@ func workers(cmd *cobra.Command, args []string) {
if x.Started != y.Started {
return x.Started.Before(y.Started)
}
return x.ID.String() < y.ID.String()
return x.ID < y.ID
})
cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}

View File

@@ -4,7 +4,7 @@
package main
import "github.com/hibiken/asynq/tools/asynqmon/cmd"
import "github.com/hibiken/asynq/tools/asynq/cmd"
func main() {
cmd.Execute()

View File

@@ -4,9 +4,10 @@ go 1.13
require (
github.com/go-redis/redis/v7 v7.2.0
github.com/google/uuid v1.1.1
github.com/hibiken/asynq v0.4.0
github.com/mitchellh/go-homedir v1.1.0
github.com/rs/xid v1.2.1
github.com/spf13/cast v1.3.1
github.com/spf13/cobra v0.0.5
github.com/spf13/viper v1.6.2
)

View File

@@ -1,4 +1,5 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -15,6 +16,7 @@ github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3Ee
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
@@ -24,10 +26,6 @@ github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeME
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis v6.15.7+incompatible h1:3skhDh95XQMpnqeqNftPkQD9jL9e5e36z/1SUm6dy1U=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.1.0 h1:I4C4a8UGbFejiVjtYVTRVOiMIJ5pm5Yru6ibvDX/OS0=
github.com/go-redis/redis/v7 v7.1.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
@@ -38,10 +36,15 @@ github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
@@ -49,19 +52,22 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgf
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hibiken/asynq v0.4.0 h1:NvAfYX0DRe04WgGMKRg5oX7bs6ktv2fu9YwB6O356FI=
github.com/hibiken/asynq v0.4.0/go.mod h1:dtrVkxCsGPVhVNHMDXAH7lFq64kbj43+G6lt4FQZfW4=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
@@ -74,15 +80,14 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
@@ -94,18 +99,16 @@ github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
@@ -113,17 +116,13 @@ github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
@@ -148,6 +147,7 @@ golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -172,6 +172,7 @@ golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
@@ -180,11 +181,14 @@ google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ij
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=