Compare commits

...

105 Commits

Author SHA1 Message Date
Ken Hibino
b6486716b4 v0.6.0 2020-03-01 15:54:59 -08:00
Ken Hibino
742ed6546f Add ServeMux type
Allow the ServeMux type to be used as a Handler.
The ServeMux API is designed to be similar to the net/http.ServeMux API.
2020-03-01 15:53:18 -08:00
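For context, here is a minimal sketch of using the new type against the v0.6.0 API; it mirrors the README example further down in this diff, and the task type name and handler body are illustrative:

```go
package main

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)

func main() {
	r := &asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}
	bg := asynq.NewBackground(r, &asynq.Config{Concurrency: 10})

	// ServeMux satisfies the Handler interface and dispatches each task to
	// the handler registered for its type, mirroring net/http.ServeMux.
	mux := asynq.NewServeMux()
	mux.HandleFunc("email:signup", func(ctx context.Context, t *asynq.Task) error {
		id, err := t.Payload.GetInt("user_id")
		if err != nil {
			return err
		}
		fmt.Printf("Send welcome email to user %d\n", id)
		return nil
	})

	bg.Run(mux) // blocks until SIGTERM or SIGINT
}
```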
Ken Hibino
897ab4e28b Add ErrorHandler type to changelog 2020-02-29 22:09:13 -08:00
Ken Hibino
a4e4c0b1d5 Call error handler when task was not processed successfully 2020-02-29 22:09:13 -08:00
Ken Hibino
95b7dcaad4 Clean up processor test 2020-02-29 22:09:13 -08:00
Ken Hibino
8d3248e850 Add ErrorHandler type and add it to Config 2020-02-29 22:09:13 -08:00
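A sketch of wiring the new type into `Config`, adapted from the example comment in the background.go hunk of this diff; the logging call is a stand-in for a real error-reporting service:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

// reportError matches the ErrorHandlerFunc signature. It is invoked only
// when a task handler returns a non-nil error.
func reportError(task *asynq.Task, err error, retried, maxRetry int) {
	if retried >= maxRetry {
		err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
	}
	log.Printf("task failed: %v", err) // stand-in for Honeybadger, Bugsnag, etc.
}

func main() {
	bg := asynq.NewBackground(&asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, &asynq.Config{
		Concurrency:  10,
		ErrorHandler: asynq.ErrorHandlerFunc(reportError),
	})
	bg.Run(asynq.NewServeMux()) // register task handlers on the mux as needed
}
```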
Ken Hibino
e69264dc04 Run travis build with go v1.14.x 2020-02-27 08:28:35 -08:00
Ken Hibino
93ff4e4149 v0.5.0 2020-02-23 20:44:44 -08:00
Ken Hibino
3ee97f327e [ci skip] Update readme 2020-02-23 20:44:44 -08:00
Ken Hibino
49c117f4d1 Change Client APIs
Use `Enqueue`, `EnqueueAt`, and `EnqueueIn` to enqueue and schedule
tasks.
The `Schedule` method was removed.
2020-02-23 20:40:40 -08:00
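The three replacement methods, as they appear in the updated README later in this diff; the task type, payload, and target time are illustrative:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(&asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	t := asynq.NewTask("email:reminder", map[string]interface{}{"user_id": 42})

	// Process immediately.
	if err := client.Enqueue(t); err != nil {
		log.Fatal(err)
	}
	// Process 24 hours later.
	if err := client.EnqueueIn(24*time.Hour, t); err != nil {
		log.Fatal(err)
	}
	// Process at a specific time.
	target := time.Date(2020, time.March, 6, 10, 0, 0, 0, time.UTC)
	if err := client.EnqueueAt(target, t); err != nil {
		log.Fatal(err)
	}
}
```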
Ken Hibino
5b53a2aee9 Update redis package to v7.2.0 2020-02-22 21:21:55 -08:00
Ken Hibino
26e399bc9c Add workers command to asynqmon 2020-02-22 21:21:55 -08:00
Ken Hibino
e21fe3bd79 Add ListWorkers to RDB 2020-02-22 21:21:55 -08:00
Ken Hibino
37c6c73d9b Add test for cancelation pubsub 2020-02-22 15:16:24 -08:00
Ken Hibino
5775a5818d Update WriteProcessState to write both ProcessInfo and WorkerInfo to
redis
2020-02-22 08:56:51 -08:00
Ken Hibino
e3c9aae8c7 Include worker stats to ProcessState 2020-02-22 08:56:51 -08:00
Ken Hibino
9e02a91808 Add workers key in base package 2020-02-22 08:56:51 -08:00
Ken Hibino
830020eb39 Add ProcessState type to base package 2020-02-22 08:56:51 -08:00
Ken Hibino
f9a6c6156f [ci skip] Add codecov badge in readme 2020-02-19 21:37:54 -08:00
Ken Hibino
8540172306 Update travis.yml to upload coverage report to codecov 2020-02-19 21:27:40 -08:00
Ken Hibino
b44908fefd Move tools to its own go module 2020-02-17 21:55:55 -08:00
Ken Hibino
e78d5e4171 Enable the race detector in CI builds 2020-02-16 21:25:10 -08:00
Ken Hibino
0cc486a975 Fix data race in syncer test 2020-02-16 21:25:10 -08:00
Ken Hibino
906f231e6c Fix data race in subscriber test 2020-02-16 18:22:59 -08:00
Ken Hibino
39f237899b Clean up heartbeater test 2020-02-16 18:22:59 -08:00
Ken Hibino
3d9a222bb3 Use sync.WaitGroup for shutdown 2020-02-16 18:22:59 -08:00
Ken Hibino
2bcaea52ce Make heartbeater goroutine a monitor goroutine for process info. 2020-02-16 18:22:59 -08:00
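The shutdown shape these commits converge on, visible in the background.go hunk below: each component goroutine registers on a shared sync.WaitGroup, terminate() signals it to stop, and the owner waits for everything to finish. A generic sketch of the pattern, not asynq's actual internals:

```go
package main

import (
	"fmt"
	"sync"
)

// worker is illustrative; asynq's heartbeater, subscriber, syncer, scheduler,
// and processor each follow roughly this shape.
type worker struct{ done chan struct{} }

func (w *worker) start(wg *sync.WaitGroup) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		<-w.done // stand-in for the component's event loop
		fmt.Println("worker terminated")
	}()
}

func (w *worker) terminate() { close(w.done) }

func main() {
	var wg sync.WaitGroup
	w := &worker{done: make(chan struct{})}
	w.start(&wg)
	w.terminate()
	wg.Wait() // all goroutines have exited once this returns
}
```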
Ken Hibino
0856ef32e0 v0.4.0 2020-02-13 06:25:27 -08:00
Ken Hibino
70efb7b455 [ci skip] Update changelog 2020-02-13 06:23:51 -08:00
Ken Hibino
d33ca98648 Use int as priority value type.
Do not use unsigned int for merely non-negative quantities as it
complicates simple arithmetic.
2020-02-13 06:23:51 -08:00
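The pitfall this message alludes to: unsigned subtraction silently wraps around in Go, which complicates otherwise simple arithmetic on priority values. A minimal illustration:

```go
package main

import "fmt"

func main() {
	var a, b uint = 3, 5
	fmt.Println(a - b)           // 18446744073709551614: unsigned subtraction wraps
	fmt.Println(int(a) - int(b)) // -2: signed arithmetic behaves as expected
}
```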
Ken Hibino
bf31fcc3ec [ci skip] Update readme and changelog 2020-02-13 06:23:51 -08:00
Ken Hibino
cd55a0e316 Add cancel command to asynqmon cli 2020-02-13 06:23:51 -08:00
Ken Hibino
6685827147 Add redis pubsub subscriber for cancelation 2020-02-13 06:23:51 -08:00
Ken Hibino
133bb6c2c6 Update docs for context and timeout 2020-02-13 06:23:51 -08:00
Ken Hibino
39459b4412 Allow user to specify timeout per task 2020-02-13 06:23:51 -08:00
Ken Hibino
6e14062325 Send cancellation signal to all in-progress handlers on shutdown 2020-02-13 06:23:51 -08:00
Ken Hibino
1b41c721b4 Change Handler interface to take context.Context 2020-02-13 06:23:51 -08:00
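Under the new signature the context is canceled on shutdown (and, per the commits above, when a per-task timeout elapses), so a long-running handler can watch ctx.Done(). A sketch against the v0.6.0 API; the work loop is illustrative:

```go
package main

import (
	"context"
	"time"

	"github.com/hibiken/asynq"
)

// doWork is a hypothetical handler; each iteration stands in for one unit
// of real work.
func doWork(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 10; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // abort now; the task will be retried later
		case <-time.After(time.Second):
		}
	}
	return nil
}

func main() {
	r := &asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}
	bg := asynq.NewBackground(r, &asynq.Config{Concurrency: 10})
	bg.Run(asynq.HandlerFunc(doWork))
}
```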
Ken Hibino
79a04e52a3 [ci skip] Add link to intro video in readme 2020-02-09 10:59:32 -08:00
Ken Hibino
d9327cf24b Avoid creating script struct on every rdb method invocation 2020-02-09 09:34:41 -08:00
Ken Hibino
7e33c9ebe0 [ci skip] Add contribution doc 2020-02-08 09:48:04 -08:00
Ken Hibino
a6b79eb7a0 [ci skip] Update readme 2020-02-07 06:45:36 -08:00
Ken Hibino
7af69c8d3c [ci skip] Fix comments 2020-02-06 20:46:59 -08:00
Ken Hibino
6dd4c688f4 [ci skip] Update readme 2020-02-05 22:12:26 -08:00
Ken Hibino
024cfa9fe6 Update output text for task's next retry value 2020-02-05 22:12:26 -08:00
Ken Hibino
461a6f2587 v0.3.0 2020-02-04 06:29:57 -08:00
Ken Hibino
ba9b42f428 Clean up cli description text 2020-02-04 06:16:29 -08:00
Ken Hibino
3744072e9b Sort processes by host and pid in ps output 2020-02-03 06:59:02 -08:00
Ken Hibino
eadfd5f8b4 [ci skip] Update changelog 2020-02-02 20:59:00 -08:00
Ken Hibino
96f06ac89b Add ps command to asynqmon 2020-02-02 20:56:33 -08:00
Ken Hibino
d03fa34eaf Add heartbeater 2020-02-01 09:35:49 -08:00
Ken Hibino
489e695433 [ci skip] Change font used in demo.gif 2020-01-30 21:47:08 -08:00
Ken Hibino
9ae4be8184 Run and compare benchmarks after successful ci-build 2020-01-30 21:38:16 -08:00
Ken Hibino
36af486303 [ci skip] Add prettier gif for demo 2020-01-29 21:36:17 -08:00
Ken Hibino
a46eb97e6f [ci skip] Shorten readme 2020-01-26 20:06:52 -08:00
Ken Hibino
c934ef115b v0.2.2 2020-01-26 16:07:44 -08:00
Ken Hibino
6fbaa2ed6c (fix): RestoreUnfinished to select correct queue 2020-01-26 16:05:46 -08:00
Ken Hibino
166497748b (fix): Requeue to select correct queue 2020-01-26 16:05:46 -08:00
Ken Hibino
31123fd42a Paginate tasks with asynqmon ls command
Changes:
* Added --page and --size flags to ls command
* By default, the command will show the first 30 tasks from the specified
queue
2020-01-26 13:12:01 -08:00
Ken Hibino
3ed155b45b [ci skip] Update readme 2020-01-25 08:08:13 -08:00
Ken Hibino
58d2ed94e7 [ci skip] Fix typo 2020-01-23 06:05:18 -08:00
Ken Hibino
e253211a60 v0.2.1 2020-01-22 06:45:16 -08:00
Ken Hibino
44c657bec6 Rate limit error logs 2020-01-22 06:36:18 -08:00
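The go.mod hunk in this diff pulls in golang.org/x/time, which suggests a token-bucket limiter behind this commit. A sketch of the general idea; the interval and wiring are assumptions, not asynq's actual code:

```go
package main

import (
	"errors"
	"log"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow at most one error log line every 3 seconds (interval is illustrative).
	limiter := rate.NewLimiter(rate.Every(3*time.Second), 1)
	for i := 0; i < 50; i++ {
		err := errors.New("redis: connection refused") // simulated repeated failure
		if err != nil && limiter.Allow() {
			log.Printf("[ERROR] %v", err) // most iterations are silently dropped
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```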
Ken Hibino
db8e9d05c3 Add custom logger 2020-01-22 06:02:53 -08:00
Ken Hibino
b02e4e6b09 [ci skip] Update readme 2020-01-21 17:48:55 -08:00
Ken Hibino
f43c51ce8b v0.2.0 2020-01-19 15:28:24 -08:00
Ken Hibino
04983dc00f [ci skip] Clarify terminology around 'state' and 'queue' 2020-01-19 15:24:02 -08:00
Ken Hibino
9e872a4cb4 Change asynqmon history command to take no arguments 2020-01-19 15:24:02 -08:00
Ken Hibino
959c9fd01a [ci skip] Allow config file to set default values for flags 2020-01-19 09:10:48 -08:00
Ken Hibino
d37f2a09ab [ci skip] Add readme for asynqmon tool 2020-01-18 21:00:33 -08:00
Ken Hibino
207a6d2d1a Fix benchmark tests 2020-01-18 15:07:15 -08:00
Ken Hibino
c29200b1fc Add syncer to retry failed redis commands 2020-01-18 15:07:15 -08:00
Ken Hibino
5c806676de [ci skip] Add link to gitter chat room 2020-01-16 21:35:03 -08:00
Ken Hibino
fd8eb51440 [ci skip] Add badges to readme 2020-01-16 21:13:34 -08:00
Ken Hibino
f66a65d6ca Fix tests 2020-01-16 21:04:46 -08:00
Ken Hibino
d1f516d8f1 [ci skip] Update changelog 2020-01-16 21:04:46 -08:00
Ken Hibino
0c2591ad7e [ci skip] Update docs 2020-01-16 21:04:46 -08:00
Ken Hibino
43d7591250 Remove redis.Client type from asynq package API 2020-01-16 21:04:46 -08:00
Ken Hibino
cb2ebf18ac [performance] Skip the overhead of json decoding when scheduling to one
queue
2020-01-14 20:46:47 -08:00
Ken Hibino
5a6f737589 [performance] Use BRPOPLPUSH if one queue is used 2020-01-14 20:46:47 -08:00
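BRPOPLPUSH blocks until a message arrives and atomically moves it to a destination list, so the single-queue case costs one round trip with no polling loop. A sketch using go-redis v7; the key names are illustrative, not asynq's actual keys:

```go
package main

import (
	"fmt"
	"time"

	"github.com/go-redis/redis/v7"
)

func main() {
	r := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	// Block up to one second for a task, then move it into the in-progress
	// list in the same atomic operation.
	msg, err := r.BRPopLPush("asynq:queues:default", "asynq:in_progress", time.Second).Result()
	if err != nil {
		fmt.Println("no task:", err) // err is redis.Nil on timeout
		return
	}
	fmt.Println("got task:", msg)
}
```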
Ken Hibino
f0251be5d2 [ci skip] Update changelog 2020-01-14 06:05:21 -08:00
Ken Hibino
858b0325bd Add rmq command to asynqmon 2020-01-14 06:02:00 -08:00
Ken Hibino
874d8e8843 Add RDB.RemoveQueue method 2020-01-14 06:02:00 -08:00
Ken Hibino
84eef4ed0b Add strict-priority option 2020-01-12 18:39:57 -08:00
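How the option reads in Config, using the queue weights from the README example in this diff; the sketch targets the v0.6.0 API:

```go
package main

import "github.com/hibiken/asynq"

func main() {
	bg := asynq.NewBackground(&asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, &asynq.Config{
		Concurrency: 10,
		Queues: map[string]int{
			"critical": 6,
			"default":  3,
			"low":      1,
		},
		// With StrictPriority, "critical" is always drained before "default",
		// and "default" before "low". Without it, the numbers act as weights
		// (roughly 60%/30%/10% of dequeues when all queues are non-empty).
		StrictPriority: true,
	})
	bg.Run(asynq.NewServeMux()) // register task handlers on the mux as needed
}
```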
Ken Hibino
97316d6766 Fix flaky tests
Some tests were failing due to a mismatch in the ZSetEntry Score.
Changed ZSetEntry Score to float64 so that we can use
cmpopts.EquateApprox to allow for a margin when comparing.
2020-01-11 10:09:15 -08:00
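For reference, cmpopts.EquateApprox makes cmp treat float64 values within a margin as equal, absorbing the small timing differences that exact comparison would flag; the margin and values here are illustrative:

```go
package example

import (
	"testing"
	"time"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

func TestScoreApprox(t *testing.T) {
	want := float64(time.Now().Add(time.Hour).Unix())
	got := want + 1 // e.g., a score computed a moment later

	// EquateApprox(0, 2) treats float64s within an absolute margin of 2 as equal.
	if diff := cmp.Diff(want, got, cmpopts.EquateApprox(0, 2)); diff != "" {
		t.Errorf("score mismatch (-want,+got):\n%s", diff)
	}
}
```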
Ken Hibino
2631672575 Allow filtering results of asynqmon ls enqueued by providing a queue
name
2020-01-11 10:09:15 -08:00
Ken Hibino
cf78a12866 Add "Queue" column to the output of asynqmon ls 2020-01-11 10:09:15 -08:00
Ken Hibino
c5b215e3b9 Update RDB.ListEnqueued to list tasks from all queues 2020-01-11 10:09:15 -08:00
Ken Hibino
2ff847d520 Include each queue counts in stats command output 2020-01-11 10:09:15 -08:00
Ken Hibino
89843ac565 Change RDB.CurrentStats to be multi-queue aware 2020-01-11 10:09:15 -08:00
Ken Hibino
67f381269a Maintain a set of queue names in redis set 2020-01-11 10:09:15 -08:00
Ken Hibino
390eb13149 Remove stale field in processor struct 2020-01-11 10:09:15 -08:00
Ken Hibino
718336ff44 Update RDB.Enqueue* methods to be multi queue aware 2020-01-11 10:09:15 -08:00
Ken Hibino
8ff5c5101e [ci skip] Update changelog 2020-01-07 21:55:18 -08:00
Ken Hibino
4f5d115b3e [ci skip] Upgrade github.com/google/go-cmp to v0.4.0 2020-01-07 21:55:18 -08:00
Ken Hibino
24bb45b36b [ci skip] Normalize queue priority numbers 2020-01-07 21:55:18 -08:00
Ken Hibino
8d9a2d1313 Update processor to query queues based on priority 2020-01-07 21:55:18 -08:00
Ken Hibino
53d0902808 Change RDB.Dequeue to query multiple queues 2020-01-07 21:55:18 -08:00
Ken Hibino
2af9eb2c88 Delete stale code 2020-01-07 21:55:18 -08:00
Ken Hibino
28d698c24e Update CheckAndEnqueue to enqueue tasks to specified queue 2020-01-07 21:55:18 -08:00
Ken Hibino
1d99d99692 Update comment 2020-01-07 21:55:18 -08:00
Ken Hibino
03cb6eef09 Add Queues field to Config 2020-01-07 21:55:18 -08:00
Ken Hibino
ca78b92078 Add Queue option to allow user to specify queue from client
Added base.QueueKey method to get redis key for given queue name.
Changed asynqtest.GetEnqueuedMessages to optionally take queue name.
2020-01-07 21:55:18 -08:00
Ken Hibino
29ad70c36a [ci skip] Update readme 2020-01-05 09:55:39 -08:00
Ken Hibino
00b03e1287 Add test for a payload key that does not exist 2020-01-05 09:55:39 -08:00
Ken Hibino
f3a23b9b12 Make Task type immutable
This change makes it impossible to mutate payload within Handler or
RetryDelayFunc.
2020-01-05 09:55:39 -08:00
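Since payloads can no longer be mutated in place, data goes in once through NewTask and comes back out through typed getters. A sketch against the v0.6.0 API shown elsewhere in this diff:

```go
package main

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)

// Handlers read payload values through getters such as GetInt; there is no
// way to write back into the payload.
func handle(ctx context.Context, t *asynq.Task) error {
	id, err := t.Payload.GetInt("user_id")
	if err != nil {
		return err
	}
	fmt.Printf("processing user %d\n", id)
	return nil
}

func main() {
	t := asynq.NewTask("send_email", map[string]interface{}{"user_id": 42})
	_ = handle(context.Background(), t)
}
```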
60 changed files with 5983 additions and 1558 deletions

5
.gitignore vendored

@@ -15,4 +15,7 @@
/examples
# Ignore command binary
/tools/asynqmon/asynqmon
/tools/asynqmon/asynqmon
# Ignore asynqmon config file
.asynqmon.*

.travis.yml

@@ -4,6 +4,11 @@ git:
depth: 1
env:
- GO111MODULE=on # go modules are the default
go: [1.12.x, 1.13.x]
go: [1.12.x, 1.13.x, 1.14.x]
script:
- go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
services:
- redis-server
after_success:
- bash ./.travis/benchcmp.sh
- bash <(curl -s https://codecov.io/bash)

15
.travis/benchcmp.sh Executable file

@@ -0,0 +1,15 @@
if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
REMOTE_URL="$(git config --get remote.origin.url)";
cd ${TRAVIS_BUILD_DIR}/.. && \
git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
cd "${TRAVIS_REPO_SLUG}-bench" && \
# Benchmark master
git checkout master && \
go test -run=XXX -bench=. ./... > master.txt && \
# Benchmark feature branch
git checkout ${TRAVIS_COMMIT} && \
go test -run=XXX -bench=. ./... > feature.txt && \
go get -u golang.org/x/tools/cmd/benchcmp && \
# compare two benchmarks
benchcmp master.txt feature.txt;
fi

CHANGELOG.md

@@ -7,6 +7,79 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.6.0] - 2020-03-01
### Added
- Added `ServeMux` type to make it easy for users to implement the Handler interface.
- Added `ErrorHandler` type, which allows users to specify an error handling function (e.g., report errors to an error reporting service such as Honeybadger or Bugsnag)
## [0.5.0] - 2020-02-23
### Changed
- `Client` API has changed. Use `Enqueue`, `EnqueueAt` and `EnqueueIn` to enqueue and schedule tasks.
### Added
- `asynqmon workers` was added to list information about all running workers
## [0.4.0] - 2020-02-13
### Changed
- `Handler` interface has changed. The `ProcessTask` method now takes two arguments: `context.Context` and `*asynq.Task`
- `Queues` field in `Config` has changed from `map[string]uint` to `map[string]int`
### Added
- `Client` can optionally schedule a task with `asynq.Timeout(duration)` to specify a timeout duration for the task. Default is no timeout.
- `asynqmon cancel [task id]` will send a cancelation signal to the goroutine processing the specified task.
## [0.3.0] - 2020-02-04
### Added
- `asynqmon ps` was added to list all background worker processes
## [0.2.2] - 2020-01-26
### Fixed
- Fixed restoring unfinished tasks back to correct queues.
### Changed
- `asynqmon ls` command is now paginated (by default it shows the first 30 tasks)
- `asynqmon ls enqueued:[queue name]` now requires a queue name to be specified
## [0.2.1] - 2020-01-22
### Fixed
- More structured log messages
- Prevent spamming logs with a bunch of errors when Redis connection is lost
- Fixed and updated README doc
## [0.2.0] - 2020-01-19
### Added
- NewTask constructor
- `Queues` option in `Config` to specify multiple queues with priority levels
- `Client` can schedule a task with `asynq.Queue(name)` to specify which queue to use
- `StrictPriority` option in `Config` to specify whether the priority should be followed strictly
- `RedisConnOpt` to abstract away redis client implementation
- [CLI] `asynqmon rmq` command to remove queue
### Changed
- `Client` and `Background` constructors take `RedisConnOpt` as their first argument.
- `asynqmon stats` now shows the total of all enqueued tasks under "Enqueued"
- `asynqmon stats` now shows each queue's task count
- `asynqmon history` now doesn't take any arguments and shows data from the last 10 days by default (use `--days` flag to change the number of days)
- Task type is now immutable (i.e., Payload is read-only)
## [0.1.0] - 2020-01-04
### Added

51
CONTRIBUTING.md Normal file

@@ -0,0 +1,51 @@
# Contributing
Thanks for your interest in contributing to Asynq!
We are open to, and grateful for, any contributions made by the community.
## Reporting Bugs
Have a look at our [issue tracker](https://github.com/hibiken/asynq/issues). If you can't find an issue (open or closed)
describing your problem (or a very similar one) there, please open a new issue with
the following details:
- Which versions of Go and Redis are you using?
- What are you trying to accomplish?
- What is the full error you are seeing?
- How can we reproduce this?
- Please quote as much of your code as needed to reproduce (best link to a
public repository or Gist)
## Getting Help
We run a [Gitter
channel](https://gitter.im/go-asynq/community) where you can ask questions and
get help. Feel free to ask there before opening a GitHub issue.
## Submitting Feature Requests
If you can't find an issue (open or closed) describing your idea on our [issue
tracker](https://github.com/hibiken/asynq/issues), open an issue. Adding answers to the following
questions in your description is a plus:
- What do you want to do, and how do you expect Asynq to support you with that?
- How might this be added to Asynq?
- What are possible alternatives?
- Are there any disadvantages?
Thank you! We'll try to respond as quickly as possible.
## Contributing Code
1. Fork this repo
2. Download your fork `git clone https://github.com/your-username/asynq && cd asynq`
3. Create your branch `git checkout -b your-branch-name`
4. Make and commit your changes
5. Push the branch `git push origin your-branch-name`
6. Create a new pull request
Please try to keep your pull request focused in scope and avoid including unrelated commits.
After you have submitted your pull request, we'll try to get back to you as soon as possible. We may suggest some changes or improvements.
Thank you for contributing!

210
README.md

@@ -1,152 +1,158 @@
# Asynq&nbsp;[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)
# Asynq
Simple, efficient asynchronous task processing library in Go.
[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
[![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)
## Table of Contents
Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
It is backed by Redis and designed to have a low barrier to entry; it should integrate easily into your web stack.
- [Overview](#overview)
- [Requirements](#requirements)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [License](#license)
**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before the v1.0.0 release.
## Overview
Asynq provides a simple interface to asynchronous task processing.
Asynq also ships with a CLI to monitor the queues and take manual actions if needed.
Asynq provides:
- Clear separation of task producer and consumer
- Ability to schedule task processing in the future
- Automatic retry of failed tasks with exponential backoff
- Ability to configure max retry count per task
- Ability to configure max number of worker goroutines to process tasks
- Unix signal handling to safely shutdown background processing
- Enhanced reliability TODO(hibiken): link to wiki page describing this.
- CLI to query and mutate queue state for monitoring and administrative purposes
## Requirements
| Dependency | Version |
| -------------------------------------------------------------- | ------- |
| [Redis](https://redis.io/) | v2.6+ |
| [Go](https://golang.org/) | v1.12+ |
| [github.com/go-redis/redis](https://github.com/go-redis/redis) | v7.0+ |
![Gif](/docs/assets/demo.gif)
## Installation
```
go get github.com/hibiken/asynq
To install `asynq` library, run the following command:
```sh
go get -u github.com/hibiken/asynq
```
## Getting Started
## Quickstart
1. Import `asynq` in your file.
First, make sure you are running a Redis server locally.
```go
import "github.com/hibiken/asynq"
```sh
$ redis-server
```
2. Create a `Client` instance to create tasks.
To create and schedule tasks, use `Client` and provide a task and when to enqueue the task.
```go
func main() {
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
r := &asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
}
client := asynq.NewClient(r)
t1 := asynq.Task{
Type: "send_welcome_email",
Payload: map[string]interface{}{
"recipient_id": 1234,
},
}
// Create a task with task type and payload
t1 := asynq.NewTask("email:signup", map[string]interface{}{"user_id": 42})
t2 := asynq.Task{
Type: "send_reminder_email",
Payload: map[string]interface{}{
"recipient_id": 1234,
},
}
t2 := asynq.NewTask("email:reminder", map[string]interface{}{"user_id": 42})
// process the task immediately.
err := client.Schedule(&t1, time.Now())
// Process immediately
err := client.Enqueue(t1)
// process the task 24 hours later.
err = client.Schedule(&t2, time.Now().Add(24 * time.Hour))
// Process 24 hrs later
err = client.EnqueueIn(24*time.Hour, t2)
// specify the max number of retry (default: 25)
err = client.Schedule(&t1, time.Now(), asynq.MaxRetry(1))
// Process at specified time.
target := time.Date(2020, time.March, 6, 10, 0, 0, 0, time.UTC)
err = client.EnqueueAt(target, t2)
// Pass options to specify processing behavior for a given task.
//
// MaxRetry specifies the maximum number of times this task will be retried (Default is 25).
// Queue specifies which queue to enqueue this task to (Default is "default").
// Timeout specifies the timeout for the task's context (Default is no timeout).
err = client.Enqueue(t1, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute))
}
```
3. Create a `Background` instance to process tasks.
To start the background workers, use `Background` and provide your `Handler` to process the tasks.
```go
func main() {
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
}
bg := asynq.NewBackground(r, &asynq.Config{
Concurrency: 20,
})
// Blocks until signal TERM or INT is received.
// For graceful shutdown, send signal TSTP to stop processing more tasks
// before sending TERM or INT signal.
bg.Run(handler)
}
```
The argument to `(*asynq.Background).Run` is an interface `asynq.Handler` which has one method `ProcessTask`.
`Handler` is an interface with one method `ProcessTask` with the following signature.
```go
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried.
// will be retried after delay.
type Handler interface {
ProcessTask(*Task) error
ProcessTask(context.Context, *asynq.Task) error
}
```
The simplest way to implement a handler is to define a function with the same signature and use `asynq.HandlerFunc` adapter type when passing it to `Run`.
You can optionally use `ServeMux` to create a handler, just as you would with `"net/http"` Handler.
```go
func handler(t *asynq.Task) error {
switch t.Type {
case "send_welcome_email":
id, err := t.Payload.GetInt("recipient_id")
if err != nil {
return err
}
fmt.Printf("Send Welcome Email to %d\n", id)
// ... handle other types ...
default:
return fmt.Errorf("unexpected task type: %s", t.Type)
}
return nil
}
func main() {
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
r := &asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
}
bg := asynq.NewBackground(r, &asynq.Config{
Concurrency: 20,
// Specify how many concurrent workers to use
Concurrency: 10,
// Optionally specify multiple queues with different priority.
Queues: map[string]int{
"critical": 6,
"default": 3,
"low": 1,
},
// See the godoc for other configuration options
})
// Use asynq.HandlerFunc adapter for a handler function
bg.Run(asynq.HandlerFunc(handler))
mux := asynq.NewServeMux()
mux.HandleFunc("email:signup", signupEmailHandler)
mux.HandleFunc("email:reminder", reminderEmailHandler)
// ...register other handlers...
bg.Run(mux)
}
// function with the same signature as the ProcessTask method for the Handler interface.
func signupEmailHandler(ctx context.Context, t *asynq.Task) error {
id, err := t.Payload.GetInt("user_id")
if err != nil {
return err
}
fmt.Printf("Send welcome email to user %d\n", id)
// ...your email sending logic...
return nil
}
```
For a more detailed walk-through of the library, see our [Getting Started Guide](https://github.com/hibiken/asynq/wiki/Getting-Started).
To learn more about `asynq` features and APIs, see our [Wiki pages](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
## Requirements
| Dependency | Version |
| -------------------------- | ------- |
| [Redis](https://redis.io/) | v2.8+ |
| [Go](https://golang.org/) | v1.12+ |
## Command Line Tool
Asynq ships with a command line tool to inspect the state of queues and tasks.
To install, run the following command:
```sh
go get -u github.com/hibiken/asynq/tools/asynqmon
```
For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
## Contributing
We are open to, and grateful for, any contributions (Github issues/pull-requests, feedback on Gitter channel, etc) made by the community.
Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
## Acknowledgements
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
## License
Asynq is released under the MIT license. See [LICENSE](https://github.com/hibiken/asynq/blob/master/LICENSE).

138
asynq.go

@@ -4,16 +4,140 @@
package asynq
/*
TODOs:
- [P0] Go docs
*/
import (
"crypto/tls"
"fmt"
// Task represents a task to be performed.
"github.com/go-redis/redis/v7"
)
// Task represents a unit of work to be performed.
type Task struct {
// Type indicates the kind of the task to be performed.
// Type indicates the type of task to be performed.
Type string
// Payload holds data needed to process the task.
// Payload holds data needed to perform the task.
Payload Payload
}
// NewTask returns a new Task given a type name and payload data.
//
// The payload values must be serializable.
func NewTask(typename string, payload map[string]interface{}) *Task {
return &Task{
Type: typename,
Payload: Payload{payload},
}
}
// RedisConnOpt is a discriminated union of types that represent a Redis connection configuration option.
//
// RedisConnOpt represents a sum of following types:
//
// RedisClientOpt | *RedisClientOpt | RedisFailoverClientOpt | *RedisFailoverClientOpt
type RedisConnOpt interface{}
// RedisClientOpt is used to create a redis client that connects
// to a redis server directly.
type RedisClientOpt struct {
// Network type to use, either tcp or unix.
// Default is tcp.
Network string
// Redis server address in "host:port" format.
Addr string
// Redis server password.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
// RedisFailoverClientOpt is used to create a redis client that talks
// to redis sentinels for service discovery and has an automatic failover
// capability.
type RedisFailoverClientOpt struct {
// Redis master name that is monitored by sentinels.
MasterName string
// Addresses of sentinels in "host:port" format.
// Use at least three sentinels to avoid problems described in
// https://redis.io/topics/sentinel.
SentinelAddrs []string
// Redis sentinel password.
SentinelPassword string
// Redis server password.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
// createRedisClient returns a redis client given a redis connection configuration.
//
// Passing an unexpected type as a RedisConnOpt argument will cause panic.
func createRedisClient(r RedisConnOpt) *redis.Client {
switch r := r.(type) {
case *RedisClientOpt:
return redis.NewClient(&redis.Options{
Network: r.Network,
Addr: r.Addr,
Password: r.Password,
DB: r.DB,
PoolSize: r.PoolSize,
TLSConfig: r.TLSConfig,
})
case RedisClientOpt:
return redis.NewClient(&redis.Options{
Network: r.Network,
Addr: r.Addr,
Password: r.Password,
DB: r.DB,
PoolSize: r.PoolSize,
TLSConfig: r.TLSConfig,
})
case *RedisFailoverClientOpt:
return redis.NewFailoverClient(&redis.FailoverOptions{
MasterName: r.MasterName,
SentinelAddrs: r.SentinelAddrs,
SentinelPassword: r.SentinelPassword,
Password: r.Password,
DB: r.DB,
PoolSize: r.PoolSize,
TLSConfig: r.TLSConfig,
})
case RedisFailoverClientOpt:
return redis.NewFailoverClient(&redis.FailoverOptions{
MasterName: r.MasterName,
SentinelAddrs: r.SentinelAddrs,
SentinelPassword: r.SentinelPassword,
Password: r.Password,
DB: r.DB,
PoolSize: r.PoolSize,
TLSConfig: r.TLSConfig,
})
default:
panic(fmt.Sprintf("asynq: unexpected type %T for RedisConnOpt", r))
}
}

asynq_test.go

@@ -16,11 +16,17 @@ import (
// This file defines test helper functions used by
// other test files.
// redis used for package testing.
const (
redisAddr = "localhost:6379"
redisDB = 14
)
func setup(tb testing.TB) *redis.Client {
tb.Helper()
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
DB: 14,
Addr: redisAddr,
DB: redisDB,
})
// Start each test with a clean slate.
h.FlushDB(tb, r)

background.go

@@ -5,8 +5,8 @@
package asynq
import (
"context"
"fmt"
"log"
"math"
"math/rand"
"os"
@@ -15,16 +15,16 @@ import (
"syscall"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
// Background is responsible for managing the background-task processing.
//
// Background manages background queues to process tasks and retry if
// necessary. If the processing of a task is unsuccessful, background will
// schedule it for a retry with an exponential backoff until either the task
// gets processed successfully or it exhausts its max retry count.
// Background manages task queues to process tasks.
// If the processing of a task is unsuccessful, background will
// schedule it for a retry until either the task gets processed successfully
// or it exhausts its max retry count.
//
// Once a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
@@ -34,16 +34,24 @@ type Background struct {
mu sync.Mutex
running bool
rdb *rdb.RDB
scheduler *scheduler
processor *processor
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
rdb *rdb.RDB
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
}
// Config specifies the background-task processing behavior.
type Config struct {
// Maximum number of concurrent workers to process tasks.
// Maximum number of concurrent processing of tasks.
//
// If set to zero or negative value, NewBackground will overwrite the value to one.
// If set to a zero or negative value, NewBackground will overwrite the value to one.
Concurrency int
// Function to calculate retry delay for a failed task.
@@ -52,8 +60,64 @@ type Config struct {
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question. t is read-only, the function should not mutate t.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the background will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
// If set to true, tasks in the queue with the highest priority are processed first.
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
}
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as an ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is an ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
}
// Formula taken from https://github.com/mperham/sidekiq.
@@ -63,9 +127,13 @@ func defaultDelayFunc(n int, e error, t *Task) time.Duration {
return time.Duration(s) * time.Second
}
// NewBackground returns a new Background instance given a redis client
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
// NewBackground returns a new Background given a redis connection option
// and background processing configuration.
func NewBackground(r *redis.Client, cfg *Config) *Background {
func NewBackground(r RedisConnOpt, cfg *Config) *Background {
n := cfg.Concurrency
if n < 1 {
n = 1
@@ -74,39 +142,62 @@ func NewBackground(r *redis.Client, cfg *Config) *Background {
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
rdb := rdb.NewRDB(r)
scheduler := newScheduler(rdb, 5*time.Second)
processor := newProcessor(rdb, n, delayFunc)
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
pid := os.Getpid()
rdb := rdb.NewRDB(createRedisClient(r))
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest)
cancels := base.NewCancelations()
syncer := newSyncer(syncCh, 5*time.Second)
heartbeater := newHeartbeater(rdb, ps, 5*time.Second)
scheduler := newScheduler(rdb, 5*time.Second, queues)
processor := newProcessor(rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(rdb, cancels)
return &Background{
rdb: rdb,
scheduler: scheduler,
processor: processor,
rdb: rdb,
ps: ps,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
}
}
// A Handler processes a task.
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay.
//
// Note: The argument task is read-only, ProcessTask should
// not mutate the task.
type Handler interface {
ProcessTask(*Task) error
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(*Task) error
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(task)
func (fn HandlerFunc) ProcessTask(task *Task) error {
return fn(task)
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// Run starts the background-task processing and blocks until
@@ -114,9 +205,15 @@ func (fn HandlerFunc) ProcessTask(task *Task) error {
// a signal, it gracefully shuts down all pending workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
logger.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
logger.info("Starting processing")
bg.start(handler)
defer bg.stop()
logger.info("Send signal TSTP to stop processing new tasks")
logger.info("Send signal TERM or INT to terminate the process")
// Wait for a signal to terminate.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
@@ -124,12 +221,13 @@ func (bg *Background) Run(handler Handler) {
sig := <-sigs
if sig == syscall.SIGTSTP {
bg.processor.stop()
bg.ps.SetStatus(base.StatusStopped)
continue
}
break
}
fmt.Println()
log.Println("[INFO] Starting graceful shutdown...")
logger.info("Starting graceful shutdown")
}
// starts the background-task processing.
@@ -143,8 +241,11 @@ func (bg *Background) start(handler Handler) {
bg.running = true
bg.processor.handler = handler
bg.scheduler.start()
bg.processor.start()
bg.heartbeater.start(&bg.wg)
bg.subscriber.start(&bg.wg)
bg.syncer.start(&bg.wg)
bg.scheduler.start(&bg.wg)
bg.processor.start(&bg.wg)
}
// stops the background-task processing.
@@ -155,10 +256,20 @@ func (bg *Background) stop() {
return
}
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
//
// processor -> syncer (via syncCh)
bg.scheduler.terminate()
bg.processor.terminate()
bg.syncer.terminate()
bg.subscriber.terminate()
bg.heartbeater.terminate()
bg.wg.Wait()
bg.rdb.Close()
bg.processor.handler = nil
bg.running = false
logger.info("Bye!")
}

background_test.go

@@ -5,10 +5,11 @@
package asynq
import (
"context"
"testing"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
"go.uber.org/goleak"
)
@@ -17,31 +18,111 @@ func TestBackground(t *testing.T) {
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := redis.NewClient(&redis.Options{
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
})
}
client := NewClient(r)
bg := NewBackground(r, &Config{
Concurrency: 10,
})
// no-op handler
h := func(task *Task) error {
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
client.Schedule(&Task{
Type: "send_email",
Payload: map[string]interface{}{"recipient_id": 123},
}, time.Now())
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
client.Schedule(&Task{
Type: "send_email",
Payload: map[string]interface{}{"recipient_id": 456},
}, time.Now().Add(time.Hour))
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}

benchmark_test.go

@@ -5,6 +5,7 @@
package asynq
import (
"context"
"fmt"
"math/rand"
"sync"
@@ -12,15 +13,18 @@ import (
"time"
)
// Simple E2E Benchmark testing with no scheduled tasks and
// no retries.
// Simple E2E Benchmark testing with no scheduled tasks and retries.
func BenchmarkEndToEndSimple(b *testing.B) {
const count = 100000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
r := setup(b)
client := NewClient(r)
bg := NewBackground(r, &Config{
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
@@ -28,13 +32,15 @@ func BenchmarkEndToEndSimple(b *testing.B) {
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := Task{Type: fmt.Sprintf("task%d", i), Payload: Payload{"data": i}}
client.Schedule(&t, time.Now())
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(count)
handler := func(t *Task) error {
handler := func(ctx context.Context, t *Task) error {
wg.Done()
return nil
}
@@ -55,9 +61,13 @@ func BenchmarkEndToEnd(b *testing.B) {
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
rand.Seed(time.Now().UnixNano())
r := setup(b)
client := NewClient(r)
bg := NewBackground(r, &Config{
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second
@@ -65,17 +75,21 @@ func BenchmarkEndToEnd(b *testing.B) {
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := Task{Type: fmt.Sprintf("task%d", i), Payload: Payload{"data": i}}
client.Schedule(&t, time.Now())
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < count; i++ {
t := Task{Type: fmt.Sprintf("scheduled%d", i), Payload: Payload{"data": i}}
client.Schedule(&t, time.Now().Add(time.Second))
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(count * 2)
handler := func(t *Task) error {
handler := func(ctx context.Context, t *Task) error {
// randomly fail 1% of tasks
if rand.Intn(100) == 1 {
return fmt.Errorf(":(")
@@ -93,3 +107,64 @@ func BenchmarkEndToEnd(b *testing.B) {
b.StartTimer() // end teardown
}
}
// Simple E2E Benchmark testing with no scheduled tasks and retries with multiple queues.
func BenchmarkEndToEndMultipleQueues(b *testing.B) {
// number of tasks to create for each queue
const (
highCount = 20000
defaultCount = 20000
lowCount = 20000
)
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
Concurrency: 10,
Queues: map[string]int{
"high": 6,
"default": 3,
"low": 1,
},
})
// Create a bunch of tasks
for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(highCount + defaultCount + lowCount)
handler := func(ctx context.Context, t *Task) error {
wg.Done()
return nil
}
b.StartTimer() // end setup
bg.start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
bg.stop()
b.StartTimer() // end teardown
}
}

client.go

@@ -5,9 +5,9 @@
package asynq
import (
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
@@ -23,20 +23,24 @@ type Client struct {
rdb *rdb.RDB
}
// NewClient and returns a new Client given a redis configuration.
func NewClient(r *redis.Client) *Client {
rdb := rdb.NewRDB(r)
// NewClient returns a new Client given a redis connection option.
func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r))
return &Client{rdb}
}
// Option specifies the processing behavior for the associated task.
// Option specifies the task processing behavior.
type Option interface{}
// max number of times a task will be retried.
type retryOption int
// Internal option representations.
type (
retryOption int
queueOption string
timeoutOption time.Duration
)
// MaxRetry returns an option to specify the max number of times
// a task will be retried.
// the task will be retried.
//
// Negative retry count is treated as zero retry.
func MaxRetry(n int) Option {
@@ -46,18 +50,40 @@ func MaxRetry(n int) Option {
return retryOption(n)
}
// Queue returns an option to specify the queue to enqueue the task into.
//
// Queue name is case-insensitive and the lowercased version is used.
func Queue(name string) Option {
return queueOption(strings.ToLower(name))
}
// Timeout returns an option to specify how long a task may run.
//
// Zero duration means no limit.
func Timeout(d time.Duration) Option {
return timeoutOption(d)
}
type option struct {
retry int
retry int
queue string
timeout time.Duration
}
func composeOptions(opts ...Option) option {
res := option{
retry: defaultMaxRetry,
retry: defaultMaxRetry,
queue: base.DefaultQueueName,
timeout: 0,
}
for _, opt := range opts {
switch opt := opt.(type) {
case retryOption:
res.retry = int(opt)
case queueOption:
res.queue = string(opt)
case timeoutOption:
res.timeout = time.Duration(opt)
default:
// ignore unexpected option
}
@@ -70,28 +96,48 @@ const (
defaultMaxRetry = 25
)
// Schedule registers a task to be processed at the specified time.
// EnqueueAt schedules task to be enqueued at the specified time.
//
// Schedule returns nil if the task is registered successfully,
// otherwise returns non-nil error.
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// opts specifies the behavior of task processing. If there are conflicting
// Option the last one overrides the ones before.
func (c *Client) Schedule(task *Task, processAt time.Time, opts ...Option) error {
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
opt := composeOptions(opts...)
msg := &base.TaskMessage{
ID: xid.New(),
Type: task.Type,
Payload: task.Payload,
Queue: "default",
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Timeout: opt.timeout.String(),
}
return c.enqueue(msg, processAt)
return c.enqueue(msg, t)
}
func (c *Client) enqueue(msg *base.TaskMessage, processAt time.Time) error {
if time.Now().After(processAt) {
// Enqueue enqueues task to be processed immediately.
//
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now(), task, opts...)
}
// EnqueueIn schedules task to be enqueued after the specified delay.
//
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now().Add(d), task, opts...)
}
func (c *Client) enqueue(msg *base.TaskMessage, t time.Time) error {
if time.Now().After(t) {
return c.rdb.Enqueue(msg)
}
return c.rdb.Schedule(msg, processAt)
return c.rdb.Schedule(msg, t)
}

client_test.go

@@ -13,31 +13,42 @@ import (
"github.com/hibiken/asynq/internal/base"
)
func TestClient(t *testing.T) {
func TestClientEnqueueAt(t *testing.T) {
r := setup(t)
client := NewClient(r)
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := &Task{Type: "send_email", Payload: map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}}
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
now = time.Now()
oneHourLater = now.Add(time.Hour)
)
tests := []struct {
desc string
task *Task
processAt time.Time
opts []Option
wantEnqueued []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry
}{
{
desc: "Process task immediately",
task: task,
processAt: time.Now(),
processAt: now,
opts: []Option{},
wantEnqueued: []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload,
Retry: defaultMaxRetry,
Queue: "default",
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
@@ -45,69 +56,241 @@ func TestClient(t *testing.T) {
{
desc: "Schedule task to be processed in the future",
task: task,
processAt: time.Now().Add(2 * time.Hour),
processAt: oneHourLater,
opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
{
Msg: &base.TaskMessage{
Type: task.Type,
Payload: task.Payload,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
},
Score: time.Now().Add(2 * time.Hour).Unix(),
Score: float64(oneHourLater.Unix()),
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
}
}
gotScheduled := h.GetScheduledEntries(t, r)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff)
}
}
}
func TestClientEnqueue(t *testing.T) {
r := setup(t)
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
tests := []struct {
desc string
task *Task
opts []Option
wantEnqueued map[string][]*base.TaskMessage
}{
{
desc: "Process task immediately with a custom retry count",
task: task,
processAt: time.Now(),
desc: "Process task immediately with a custom retry count",
task: task,
opts: []Option{
MaxRetry(3),
},
wantEnqueued: []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload,
Retry: 3,
Queue: "default",
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 3,
Queue: "default",
Timeout: time.Duration(0).String(),
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Negative retry count",
task: task,
processAt: time.Now(),
desc: "Negative retry count",
task: task,
opts: []Option{
MaxRetry(-2),
},
wantEnqueued: []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload,
Retry: 0, // Retry count should be set to zero
Queue: "default",
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero
Queue: "default",
Timeout: time.Duration(0).String(),
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Conflicting options",
task: task,
processAt: time.Now(),
desc: "Conflicting options",
task: task,
opts: []Option{
MaxRetry(2),
MaxRetry(10),
},
wantEnqueued: []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload,
Retry: 10, // Last option takes precedence
Queue: "default",
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 10, // Last option takes precedence
Queue: "default",
Timeout: time.Duration(0).String(),
},
},
},
},
{
desc: "With queue option",
task: task,
opts: []Option{
Queue("custom"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"custom": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "custom",
Timeout: time.Duration(0).String(),
},
},
},
},
{
desc: "Queue option should be case-insensitive",
task: task,
opts: []Option{
Queue("HIGH"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"high": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "high",
Timeout: time.Duration(0).String(),
},
},
},
},
{
desc: "Timeout option sets the timeout duration",
task: task,
opts: []Option{
Timeout(20 * time.Second),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: (20 * time.Second).String(),
},
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
for qname, want := range tc.wantEnqueued {
got := h.GetEnqueuedMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
}
}
}
}
func TestClientEnqueueIn(t *testing.T) {
r := setup(t)
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
tests := []struct {
desc string
task *Task
delay time.Duration
opts []Option
wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry
}{
{
desc: "schedule a task to be enqueued in one hour",
task: task,
delay: time.Hour,
opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
{
Msg: &base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
},
Score: float64(time.Now().Add(time.Hour).Unix()),
},
},
},
{
desc: "Zero delay",
task: task,
delay: 0,
opts: []Option{},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: time.Duration(0).String(),
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
@@ -117,15 +300,17 @@ func TestClient(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.Schedule(tc.task, tc.processAt, tc.opts...)
err := client.EnqueueIn(tc.delay, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
gotEnqueued := h.GetEnqueuedMessages(t, r)
if diff := cmp.Diff(tc.wantEnqueued, gotEnqueued, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.DefaultQueue, diff)
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
}
}
gotScheduled := h.GetScheduledEntries(t, r)

36
doc.go

@@ -3,18 +3,32 @@
// that can be found in the LICENSE file.
/*
Package asynq provides a framework for background task processing.
Package asynq provides a framework for asynchronous task processing.
Asynq uses Redis as a message broker. To connect to redis server,
specify the options using one of RedisConnOpt types.
redis = &asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
Password: "xxxxx",
DB: 3,
}
The Client is used to register a task to be processed at the specified time.
client := asynq.NewClient(redis)
Task is created with two parameters: its type and payload.
t := asynq.Task{
Type: "send_email",
Payload: map[string]interface{}{"user_id": 42},
}
client := asynq.NewClient(redis)
err := client.Schedule(&t, time.Now().Add(time.Minute))
t := asynq.NewTask(
"send_email",
map[string]interface{}{"user_id": 42})
// Enqueue the task to be processed immediately.
err := client.Enqueue(t)
// Schedule the task to be processed in one minute.
err = client.EnqueueIn(time.Minute, t)
The Background is used to run the background task processing with a given
handler.
@@ -27,23 +41,21 @@ handler.
Handler is an interface with one method ProcessTask which
takes a task and returns an error. Handler should return nil if
the processing is successful, otherwise return a non-nil error.
If handler returns a non-nil error, the task will be retried in the future.
If handler panics or returns a non-nil error, the task will be retried in the future.
Example of a type that implements the Handler interface.
type TaskHandler struct {
// ...
}
func (h *TaskHandler) ProcessTask(task *asynq.Task) error {
func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error {
switch task.Type {
case "send_email":
id, err := task.Payload.GetInt("user_id")
// send email
case "generate_thumbnail":
// generate thumbnail image
//...
default:
return fmt.Errorf("unepected task type %q", task.Type)
return fmt.Errorf("unexpected task type %q", task.Type)
}
return nil
}
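To run the handler, a Background instance is started with it; a minimal sketch using the redis options and TaskHandler defined above:
bg := asynq.NewBackground(redis, &asynq.Config{
    Concurrency: 10,
})
bg.Run(&TaskHandler{})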

BIN (new image file, 1.5 MiB, name not shown)

BIN docs/assets/asynqmon_ps.gif (new file, 582 KiB)

BIN (new image file, 1.5 MiB, name not shown)

BIN docs/assets/demo.gif (new file, 983 KiB)
go.mod

@@ -3,19 +3,12 @@ module github.com/hibiken/asynq
go 1.13
require (
github.com/go-redis/redis/v7 v7.0.0-beta.4
github.com/google/go-cmp v0.3.1
github.com/mitchellh/go-homedir v1.1.0
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/go-redis/redis/v7 v7.2.0
github.com/google/go-cmp v0.4.0
github.com/rs/xid v1.2.1
github.com/spf13/afero v1.2.2 // indirect
github.com/spf13/cast v1.3.1
github.com/spf13/cobra v0.0.5
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.6.0
go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect
golang.org/x/text v0.3.2 // indirect
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect
)

go.sum

@@ -1,211 +1,74 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis/v7 v7.0.0-beta.4 h1:p6z7Pde69EGRWvlC++y8aFcaWegyrKHzOBGo0zUACTQ=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2 h1:VUFqw5KcqRf7i70GOzW7N+Q7+gxVBkSSqiXB12+JQ4M=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0 h1:qSjVKzM2dmqQLutPN4Y0SEzDpAf7T6HHIT3E2Xr75Gg=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a h1:1n5lsVfiQW3yfsRGu98756EH1YthsFqr/5mxHduZW2A=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

heartbeat.go (new file)

@@ -0,0 +1,71 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
// heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up.
type heartbeater struct {
rdb *rdb.RDB
ps *base.ProcessState
// channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{}
// interval between heartbeats.
interval time.Duration
}
func newHeartbeater(rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater {
return &heartbeater{
rdb: rdb,
ps: ps,
done: make(chan struct{}),
interval: interval,
}
}
func (h *heartbeater) terminate() {
logger.info("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop.
h.done <- struct{}{}
}
func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning)
wg.Add(1)
go func() {
defer wg.Done()
h.beat()
for {
select {
case <-h.done:
h.rdb.ClearProcessState(h.ps)
logger.info("Heartbeater done")
return
case <-time.After(h.interval):
h.beat()
}
}
}()
}
func (h *heartbeater) beat() {
// Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2)
if err != nil {
logger.error("could not write heartbeat data: %v", err)
}
}
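A minimal sketch of how these pieces fit together, based only on the constructor and methods above (rdbClient is assumed to be an *rdb.RDB; the values are illustrative):
state := base.NewProcessState("localhost", os.Getpid(), 10, map[string]int{"default": 1}, false)
hb := newHeartbeater(rdbClient, state, 5*time.Second)
var wg sync.WaitGroup
hb.start(&wg)
// ... run until shutdown ...
hb.terminate()
wg.Wait()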

heartbeat_test.go (new file)

@@ -0,0 +1,103 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestHeartbeater(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
tests := []struct {
interval time.Duration
host string
pid int
queues map[string]int
concurrency int
}{
{time.Second, "localhost", 45678, map[string]int{"default": 1}, 10},
}
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
for _, tc := range tests {
h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
hb := newHeartbeater(rdbClient, state, tc.interval)
var wg sync.WaitGroup
hb.start(&wg)
want := &base.ProcessInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: time.Now(),
Status: "running",
}
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
ps, err := rdbClient.ListProcesses()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
hb.terminate()
continue
}
// status change
state.SetStatus(base.StatusStopped)
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
want.Status = "stopped"
ps, err = rdbClient.ListProcesses()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
hb.terminate()
continue
}
hb.terminate()
}
}


@@ -20,7 +20,7 @@ import (
// ZSetEntry is an entry in redis sorted set.
type ZSetEntry struct {
Msg *base.TaskMessage
Score int64
Score float64
}
// SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages.
@@ -41,6 +41,34 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
return out
})
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host
}
return out[i].PID < out[j].PID
})
return out
})
// SortWorkerInfoOpt is a cmp.Option to sort base.WorkerInfo for comparing slice of worker info.
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String()
})
return out
})
// SortStringSliceOpt is a cmp.Option to sort string slice.
var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string {
out := append([]string(nil), in...)
sort.Strings(out)
return out
})
// IgnoreIDOpt is a cmp.Option to ignore ID field in task messages when comparing.
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
@@ -49,7 +77,19 @@ func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskM
return &base.TaskMessage{
ID: xid.New(),
Type: taskType,
Queue: "default",
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
}
}
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{
ID: xid.New(),
Type: taskType,
Queue: qname,
Retry: 25,
Payload: payload,
}
@@ -108,10 +148,17 @@ func FlushDB(tb testing.TB, r *redis.Client) {
}
}
// SeedDefaultQueue initializes the default queue with the given messages.
func SeedDefaultQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage) {
// SeedEnqueuedQueue initializes the specified queue with the given messages.
//
// If queue name option is not passed, it defaults to the default queue.
func SeedEnqueuedQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage, queueOpt ...string) {
tb.Helper()
seedRedisList(tb, r, base.DefaultQueue, msgs)
queue := base.DefaultQueue
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
r.SAdd(base.AllQueues, queue)
seedRedisList(tb, r, queue, msgs)
}
// SeedInProgressQueue initializes the in-progress queue with the given messages.
@@ -156,10 +203,16 @@ func seedRedisZSet(tb testing.TB, c *redis.Client, key string, items []ZSetEntry
}
}
// GetEnqueuedMessages returns all task messages in the default queue.
func GetEnqueuedMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage {
// GetEnqueuedMessages returns all task messages in the specified queue.
//
// If queue name option is not passed, it defaults to the default queue.
func GetEnqueuedMessages(tb testing.TB, r *redis.Client, queueOpt ...string) []*base.TaskMessage {
tb.Helper()
return getListMessages(tb, r, base.DefaultQueue)
queue := base.DefaultQueue
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
return getListMessages(tb, r, queue)
}
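Both helpers take the queue name as an optional trailing argument; a test targeting a custom queue might read (a sketch based on the signatures above):
h.SeedEnqueuedQueue(t, r, msgs, "critical")
got := h.GetEnqueuedMessages(t, r, "critical") // omit the name to use the default queue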
// GetInProgressMessages returns all task messages in the in-progress queue.
@@ -220,7 +273,7 @@ func getZSetEntries(tb testing.TB, r *redis.Client, zset string) []ZSetEntry {
for _, z := range data {
entries = append(entries, ZSetEntry{
Msg: MustUnmarshal(tb, z.Member.(string)),
Score: int64(z.Score),
Score: z.Score,
})
}
return entries


@@ -6,35 +6,61 @@
package base
import (
"context"
"fmt"
"strings"
"sync"
"time"
"github.com/rs/xid"
)
// DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default"
// Redis keys
const (
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
DefaultQueue = QueuePrefix + "default" // LIST
ScheduledQueue = "asynq:scheduled" // ZSET
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
AllProcesses = "asynq:ps" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid>
AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host>:<pid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
AllQueues = "asynq:queues" // SET
DefaultQueue = QueuePrefix + DefaultQueueName // LIST
ScheduledQueue = "asynq:scheduled" // ZSET
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
CancelChannel = "asynq:cancel" // PubSub channel
)
// ProcessedKey returns a redis key string for procesed count
// for the given day.
// QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string {
return QueuePrefix + strings.ToLower(qname)
}
// ProcessedKey returns a redis key for processed count for the given day.
func ProcessedKey(t time.Time) string {
return processedPrefix + t.UTC().Format("2006-01-02")
}
// FailureKey returns a redis key string for failure count
// for the given day.
// FailureKey returns a redis key for failure count for the given day.
func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02")
}
// ProcessInfoKey returns a redis key for process info.
func ProcessInfoKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid)
}
// WorkersKey returns a redis key for the workers given hostname and pid.
func WorkersKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid)
}
// TaskMessage is the internal representation of a task with additional metadata fields.
// Serialized data of this type gets written to redis.
type TaskMessage struct {
@@ -58,4 +84,220 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure.
ErrorMsg string
// Timeout specifies how long a task may run.
// The string value should be compatible with time.ParseDuration.
//
// Zero means no limit.
Timeout string
}
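On the public API side, the per-task timeout that feeds this field is set with an Option at enqueue time; a sketch, assuming the Timeout option from this API:
err := client.Enqueue(task, asynq.Timeout(30*time.Second))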
// ProcessState holds process level information.
//
// ProcessStates are safe for concurrent use by multiple goroutines.
type ProcessState struct {
mu sync.Mutex // guards all data fields
concurrency int
queues map[string]int
strictPriority bool
pid int
host string
status PStatus
started time.Time
workers map[string]*workerStats
}
// PStatus represents status of a process.
type PStatus int
const (
// StatusIdle indicates process is in idle state.
StatusIdle PStatus = iota
// StatusRunning indicates process is up and processing tasks.
StatusRunning
// StatusStopped indicates process is up but not processing new tasks.
StatusStopped
)
var statuses = []string{
"idle",
"running",
"stopped",
}
func (s PStatus) String() string {
if StatusIdle <= s && s <= StatusStopped {
return statuses[s]
}
return "unknown status"
}
type workerStats struct {
msg *TaskMessage
started time.Time
}
// NewProcessState returns a new instance of ProcessState.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState {
return &ProcessState{
host: host,
pid: pid,
concurrency: concurrency,
queues: cloneQueueConfig(queues),
strictPriority: strict,
status: StatusIdle,
workers: make(map[string]*workerStats),
}
}
// SetStatus updates the status of the process.
func (ps *ProcessState) SetStatus(status PStatus) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.status = status
}
// SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.started = t
}
// AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started}
}
// DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.workers, msg.ID.String())
}
// Get returns current state of process as a ProcessInfo.
func (ps *ProcessState) Get() *ProcessInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
return &ProcessInfo{
Host: ps.host,
PID: ps.pid,
Concurrency: ps.concurrency,
Queues: cloneQueueConfig(ps.queues),
StrictPriority: ps.strictPriority,
Status: ps.status.String(),
Started: ps.started,
ActiveWorkerCount: len(ps.workers),
}
}
// GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
var res []*WorkerInfo
for _, w := range ps.workers {
res = append(res, &WorkerInfo{
Host: ps.host,
PID: ps.pid,
ID: w.msg.ID,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: clonePayload(w.msg.Payload),
Started: w.started,
})
}
return res
}
func cloneQueueConfig(qcfg map[string]int) map[string]int {
res := make(map[string]int)
for qname, n := range qcfg {
res[qname] = n
}
return res
}
func clonePayload(payload map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{})
for k, v := range payload {
res[k] = v
}
return res
}
// ProcessInfo holds information about a running background worker process.
type ProcessInfo struct {
Host string
PID int
Concurrency int
Queues map[string]int
StrictPriority bool
Status string
Started time.Time
ActiveWorkerCount int
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
Host string
PID int
ID xid.ID
Type string
Queue string
Payload map[string]interface{}
Started time.Time
}
// Cancelations is a collection that holds cancel functions for all in-progress tasks.
//
// Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct {
mu sync.Mutex
cancelFuncs map[string]context.CancelFunc
}
// NewCancelations returns a Cancelations instance.
func NewCancelations() *Cancelations {
return &Cancelations{
cancelFuncs: make(map[string]context.CancelFunc),
}
}
// Add adds a new cancel func to the collection.
func (c *Cancelations) Add(id string, fn context.CancelFunc) {
c.mu.Lock()
defer c.mu.Unlock()
c.cancelFuncs[id] = fn
}
// Delete deletes a cancel func from the collection given an id.
func (c *Cancelations) Delete(id string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.cancelFuncs, id)
}
// Get returns a cancel func given an id.
func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
c.mu.Lock()
defer c.mu.Unlock()
fn, ok = c.cancelFuncs[id]
return fn, ok
}
// GetAll returns all cancel funcs.
func (c *Cancelations) GetAll() []context.CancelFunc {
c.mu.Lock()
defer c.mu.Unlock()
var res []context.CancelFunc
for _, fn := range c.cancelFuncs {
res = append(res, fn)
}
return res
}
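A sketch of the intended usage, inferred from the methods above (msg and the cancelation trigger are hypothetical here):
c := base.NewCancelations()
ctx, cancel := context.WithCancel(context.Background())
c.Add(msg.ID.String(), cancel)
defer c.Delete(msg.ID.String())
// elsewhere, e.g. on receiving a cancelation message for a task id:
if cancel, ok := c.Get(id); ok {
    cancel()
}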


@@ -5,10 +5,32 @@
package base
import (
"context"
"math/rand"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/rs/xid"
)
func TestQueueKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"custom", "asynq:queues:custom"},
}
for _, tc := range tests {
got := QueueKey(tc.qname)
if got != tc.want {
t.Errorf("QueueKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) {
tests := []struct {
input time.Time
@@ -44,3 +66,151 @@ func TestFailureKey(t *testing.T) {
}
}
}
func TestProcessInfoKey(t *testing.T) {
tests := []struct {
hostname string
pid int
want string
}{
{"localhost", 9876, "asynq:ps:localhost:9876"},
{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"},
}
for _, tc := range tests {
got := ProcessInfoKey(tc.hostname, tc.pid)
if got != tc.want {
t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
}
}
}
func TestWorkersKey(t *testing.T) {
tests := []struct {
hostname string
pid int
want string
}{
{"localhost", 9876, "asynq:workers:localhost:9876"},
{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"},
}
for _, tc := range tests {
got := WorkersKey(tc.hostname, tc.pid)
if got != tc.want {
t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want)
}
}
}
// Test for process state being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestProcessStateConcurrentAccess(t *testing.T) {
ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
var wg sync.WaitGroup
started := time.Now()
msgs := []*TaskMessage{
&TaskMessage{ID: xid.New(), Type: "type1", Payload: map[string]interface{}{"user_id": 42}},
&TaskMessage{ID: xid.New(), Type: "type2"},
&TaskMessage{ID: xid.New(), Type: "type3"},
}
// Simulate heartbeater calling SetStatus and SetStarted.
wg.Add(1)
go func() {
defer wg.Done()
ps.SetStarted(started)
ps.SetStatus(StatusRunning)
}()
// Simulate processor starting worker goroutines.
for _, msg := range msgs {
wg.Add(1)
ps.AddWorkerStats(msg, time.Now())
go func(msg *TaskMessage) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
ps.DeleteWorkerStats(msg)
}(msg)
}
// Simulate heartbeater calling Get and GetWorkers.
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 5; i++ {
ps.Get()
ps.GetWorkers()
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
}
}()
wg.Wait()
want := &ProcessInfo{
Host: "127.0.0.1",
PID: 1234,
Concurrency: 10,
Queues: map[string]int{"default": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
got := ps.Get()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
got, want, diff)
}
}
// Test for cancelations being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestCancelationsConcurrentAccess(t *testing.T) {
c := NewCancelations()
_, cancel1 := context.WithCancel(context.Background())
_, cancel2 := context.WithCancel(context.Background())
_, cancel3 := context.WithCancel(context.Background())
var key1, key2, key3 = "key1", "key2", "key3"
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key1, cancel1)
}()
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key2, cancel2)
time.Sleep(200 * time.Millisecond)
c.Delete(key2)
}()
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key3, cancel3)
}()
wg.Wait()
_, ok := c.Get(key1)
if !ok {
t.Errorf("(*Cancelations).Get(%q) = _, false, want <function>, true", key1)
}
_, ok = c.Get(key2)
if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
}
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
}


@@ -25,6 +25,7 @@ type Stats struct {
Dead int
Processed int
Failed int
Queues map[string]int // map of queue name to number of tasks in the queue (e.g., "default": 100, "critical": 20)
Timestamp time.Time
}
@@ -40,6 +41,7 @@ type EnqueuedTask struct {
ID xid.ID
Type string
Payload map[string]interface{}
Queue string
}
// InProgressTask is a task that's currently being processed.
@@ -56,6 +58,7 @@ type ScheduledTask struct {
Payload map[string]interface{}
ProcessAt time.Time
Score int64
Queue string
}
// RetryTask is a task that's in retry queue because worker failed to process the task.
@@ -69,6 +72,7 @@ type RetryTask struct {
Retried int
Retry int
Score int64
Queue string
}
// DeadTask is a task that has exhausted all retries.
@@ -79,39 +83,52 @@ type DeadTask struct {
LastFailedAt time.Time
ErrorMsg string
Score int64
Queue string
}
// KEYS[1] -> asynq:queues
// KEYS[2] -> asynq:in_progress
// KEYS[3] -> asynq:scheduled
// KEYS[4] -> asynq:retry
// KEYS[5] -> asynq:dead
// KEYS[6] -> asynq:processed:<yyyy-mm-dd>
// KEYS[7] -> asynq:failure:<yyyy-mm-dd>
var currentStatsCmd = redis.NewScript(`
local res = {}
local queues = redis.call("SMEMBERS", KEYS[1])
for _, qkey in ipairs(queues) do
table.insert(res, qkey)
table.insert(res, redis.call("LLEN", qkey))
end
table.insert(res, KEYS[2])
table.insert(res, redis.call("LLEN", KEYS[2]))
table.insert(res, KEYS[3])
table.insert(res, redis.call("ZCARD", KEYS[3]))
table.insert(res, KEYS[4])
table.insert(res, redis.call("ZCARD", KEYS[4]))
table.insert(res, KEYS[5])
table.insert(res, redis.call("ZCARD", KEYS[5]))
local pcount = 0
local p = redis.call("GET", KEYS[6])
if p then
pcount = tonumber(p)
end
table.insert(res, "processed")
table.insert(res, pcount)
local fcount = 0
local f = redis.call("GET", KEYS[7])
if f then
fcount = tonumber(f)
end
table.insert(res, "failed")
table.insert(res, fcount)
return res`)
// CurrentStats returns the current state of the queues.
func (r *RDB) CurrentStats() (*Stats, error) {
// KEYS[1] -> asynq:queues:default
// KEYS[2] -> asynq:in_progress
// KEYS[3] -> asynq:scheduled
// KEYS[4] -> asynq:retry
// KEYS[5] -> asynq:dead
// KEYS[6] -> asynq:processed:<yyyy-mm-dd>
// KEYS[7] -> asynq:failure:<yyyy-mm-dd>
script := redis.NewScript(`
local qlen = redis.call("LLEN", KEYS[1])
local plen = redis.call("LLEN", KEYS[2])
local slen = redis.call("ZCARD", KEYS[3])
local rlen = redis.call("ZCARD", KEYS[4])
local dlen = redis.call("ZCARD", KEYS[5])
local pcount = 0
local p = redis.call("GET", KEYS[6])
if p then
pcount = tonumber(p)
end
local fcount = 0
local f = redis.call("GET", KEYS[7])
if f then
fcount = tonumber(f)
end
return {qlen, plen, slen, rlen, dlen, pcount, fcount}
`)
now := time.Now()
res, err := script.Run(r.client, []string{
base.DefaultQueue,
res, err := currentStatsCmd.Run(r.client, []string{
base.AllQueues,
base.InProgressQueue,
base.ScheduledQueue,
base.RetryQueue,
@@ -122,22 +139,50 @@ func (r *RDB) CurrentStats() (*Stats, error) {
if err != nil {
return nil, err
}
nums, err := cast.ToIntSliceE(res)
data, err := cast.ToSliceE(res)
if err != nil {
return nil, err
}
return &Stats{
Enqueued: nums[0],
InProgress: nums[1],
Scheduled: nums[2],
Retry: nums[3],
Dead: nums[4],
Processed: nums[5],
Failed: nums[6],
Timestamp: now,
}, nil
stats := &Stats{
Queues: make(map[string]int),
Timestamp: now,
}
for i := 0; i < len(data); i += 2 {
key := cast.ToString(data[i])
val := cast.ToInt(data[i+1])
switch {
case strings.HasPrefix(key, base.QueuePrefix):
stats.Enqueued += val
stats.Queues[strings.TrimPrefix(key, base.QueuePrefix)] = val
case key == base.InProgressQueue:
stats.InProgress = val
case key == base.ScheduledQueue:
stats.Scheduled = val
case key == base.RetryQueue:
stats.Retry = val
case key == base.DeadQueue:
stats.Dead = val
case key == "processed":
stats.Processed = val
case key == "failed":
stats.Failed = val
}
}
return stats, nil
}
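The script returns a flat array of alternating keys and counts, which is why the parsing loop above steps by two. For two queues the raw reply might look like this (illustrative values only):
{"asynq:queues:default", 100, "asynq:queues:critical", 20, "asynq:in_progress", 3, "asynq:scheduled", 12, "asynq:retry", 0, "asynq:dead", 1, "processed", 5000, "failed", 42}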
var historicalStatsCmd = redis.NewScript(`
local res = {}
for _, key in ipairs(KEYS) do
local n = redis.call("GET", key)
if not n then
n = 0
end
table.insert(res, tonumber(n))
end
return res`)
// HistoricalStats returns a list of stats from the last n days.
func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
if n < 1 {
@@ -153,18 +198,7 @@ func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
keys = append(keys, base.ProcessedKey(ts))
keys = append(keys, base.FailureKey(ts))
}
script := redis.NewScript(`
local res = {}
for _, key in ipairs(KEYS) do
local n = redis.call("GET", key)
if not n then
n = 0
end
table.insert(res, tonumber(n))
end
return res
`)
res, err := script.Run(r.client, keys, len(keys)).Result()
res, err := historicalStatsCmd.Run(r.client, keys, len(keys)).Result()
if err != nil {
return nil, err
}
@@ -200,35 +234,74 @@ func (r *RDB) RedisInfo() (map[string]string, error) {
return info, nil
}
// ListEnqueued returns all enqueued tasks that are ready to be processed.
func (r *RDB) ListEnqueued() ([]*EnqueuedTask, error) {
data, err := r.client.LRange(base.DefaultQueue, 0, -1).Result()
func reverse(x []string) {
for i := len(x)/2 - 1; i >= 0; i-- {
opp := len(x) - 1 - i
x[i], x[opp] = x[opp], x[i]
}
}
// Pagination specifies the page size and page number
// for the list operation.
type Pagination struct {
// Number of items in the page.
Size int
// Page number starting from zero.
Page int
}
func (p Pagination) start() int64 {
return int64(p.Size * p.Page)
}
func (p Pagination) stop() int64 {
return int64(p.Size*p.Page + p.Size - 1)
}
// ListEnqueued returns enqueued tasks that are ready to be processed.
func (r *RDB) ListEnqueued(qname string, pgn Pagination) ([]*EnqueuedTask, error) {
qkey := base.QueueKey(qname)
if !r.client.SIsMember(base.AllQueues, qkey).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
// Note: Because we use LPUSH to redis list, we need to calculate the
// correct range and reverse the list to get the tasks with pagination.
stop := -pgn.start() - 1
start := -pgn.stop() - 1
data, err := r.client.LRange(qkey, start, stop).Result()
if err != nil {
return nil, err
}
reverse(data)
var tasks []*EnqueuedTask
for _, s := range data {
var msg base.TaskMessage
err := json.Unmarshal([]byte(s), &msg)
if err != nil {
// continue // bad data, ignore and continue
return nil, err
continue // bad data, ignore and continue
}
tasks = append(tasks, &EnqueuedTask{
ID: msg.ID,
Type: msg.Type,
Payload: msg.Payload,
Queue: msg.Queue,
})
}
return tasks, nil
}
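Because tasks are LPUSHed, the newest task sits at the head of the list, so each page is taken from the tail end and then reversed. A worked example: with Size=20 and Page=1, start()=20 and stop()=39, so LRange is called with indices (-40, -21), and the 20 returned items are reversed to list them in the order they would be dequeued.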
// ListInProgress returns all tasks that are currently being processed.
func (r *RDB) ListInProgress() ([]*InProgressTask, error) {
data, err := r.client.LRange(base.InProgressQueue, 0, -1).Result()
func (r *RDB) ListInProgress(pgn Pagination) ([]*InProgressTask, error) {
// Note: Because we use LPUSH to redis list, we need to calculate the
// correct range and reverse the list to get the tasks with pagination.
stop := -pgn.start() - 1
start := -pgn.stop() - 1
data, err := r.client.LRange(base.InProgressQueue, start, stop).Result()
if err != nil {
return nil, err
}
reverse(data)
var tasks []*InProgressTask
for _, s := range data {
var msg base.TaskMessage
@@ -247,8 +320,8 @@ func (r *RDB) ListInProgress() ([]*InProgressTask, error) {
// ListScheduled returns all tasks that are scheduled to be processed
// in the future.
func (r *RDB) ListScheduled() ([]*ScheduledTask, error) {
data, err := r.client.ZRangeWithScores(base.ScheduledQueue, 0, -1).Result()
func (r *RDB) ListScheduled(pgn Pagination) ([]*ScheduledTask, error) {
data, err := r.client.ZRangeWithScores(base.ScheduledQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -268,6 +341,7 @@ func (r *RDB) ListScheduled() ([]*ScheduledTask, error) {
ID: msg.ID,
Type: msg.Type,
Payload: msg.Payload,
Queue: msg.Queue,
ProcessAt: processAt,
Score: int64(z.Score),
})
@@ -277,8 +351,8 @@ func (r *RDB) ListScheduled() ([]*ScheduledTask, error) {
// ListRetry returns all tasks that have failed before and will be retried
// in the future.
func (r *RDB) ListRetry() ([]*RetryTask, error) {
data, err := r.client.ZRangeWithScores(base.RetryQueue, 0, -1).Result()
func (r *RDB) ListRetry(pgn Pagination) ([]*RetryTask, error) {
data, err := r.client.ZRangeWithScores(base.RetryQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -301,6 +375,7 @@ func (r *RDB) ListRetry() ([]*RetryTask, error) {
ErrorMsg: msg.ErrorMsg,
Retry: msg.Retry,
Retried: msg.Retried,
Queue: msg.Queue,
ProcessAt: processAt,
Score: int64(z.Score),
})
@@ -309,8 +384,8 @@ func (r *RDB) ListRetry() ([]*RetryTask, error) {
}
// ListDead returns all tasks that have exhausted their retry limit.
func (r *RDB) ListDead() ([]*DeadTask, error) {
data, err := r.client.ZRangeWithScores(base.DeadQueue, 0, -1).Result()
func (r *RDB) ListDead(pgn Pagination) ([]*DeadTask, error) {
data, err := r.client.ZRangeWithScores(base.DeadQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -331,6 +406,7 @@ func (r *RDB) ListDead() ([]*DeadTask, error) {
Type: msg.Type,
Payload: msg.Payload,
ErrorMsg: msg.ErrorMsg,
Queue: msg.Queue,
LastFailedAt: lastFailedAt,
Score: int64(z.Score),
})
@@ -398,20 +474,21 @@ func (r *RDB) EnqueueAllDeadTasks() (int64, error) {
return r.removeAndEnqueueAll(base.DeadQueue)
}
func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
redis.call("LPUSH", KEYS[2], msg)
return 1
end
var removeAndEnqueueCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
local qkey = ARGV[3] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg)
return 1
end
return 0
`)
res, err := script.Run(r.client, []string{zset, base.DefaultQueue}, score, id).Result()
end
return 0`)
func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
res, err := removeAndEnqueueCmd.Run(r.client, []string{zset}, score, id, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -422,16 +499,18 @@ func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
return n, nil
}
var removeAndEnqueueAllCmd = redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg)
end
return table.getn(msgs)`)
func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
script := redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
redis.call("LPUSH", KEYS[2], msg)
end
return table.getn(msgs)
`)
res, err := script.Run(r.client, []string{zset, base.DefaultQueue}).Result()
res, err := removeAndEnqueueAllCmd.Run(r.client, []string{zset}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -482,31 +561,31 @@ func (r *RDB) KillAllScheduledTasks() (int64, error) {
return r.removeAndKillAll(base.ScheduledQueue)
}
func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> score of the task to kill
// ARGV[2] -> id of the task to kill
// ARGV[3] -> current timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
redis.call("ZADD", KEYS[2], ARGV[3], msg)
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
return 1
end
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> score of the task to kill
// ARGV[2] -> id of the task to kill
// ARGV[3] -> current timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
var removeAndKillCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
redis.call("ZADD", KEYS[2], ARGV[3], msg)
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
return 1
end
return 0
`)
end
return 0`)
func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := script.Run(r.client,
res, err := removeAndKillCmd.Run(r.client,
[]string{zset, base.DeadQueue},
score, id, now.Unix(), limit, maxDeadTasks).Result()
if err != nil {
@@ -519,25 +598,25 @@ func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
return n, nil
}
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in dead queue (e.g., 100)
var removeAndKillAllCmd = redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZADD", KEYS[2], ARGV[1], msg)
redis.call("ZREM", KEYS[1], msg)
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[2])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[3])
end
return table.getn(msgs)`)
func (r *RDB) removeAndKillAll(zset string) (int64, error) {
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in dead queue (e.g., 100)
script := redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
redis.call("ZADD", KEYS[2], ARGV[1], msg)
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[2])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[3])
end
return table.getn(msgs)
`)
now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := script.Run(r.client, []string{zset, base.DeadQueue},
res, err := removeAndKillAllCmd.Run(r.client, []string{zset, base.DeadQueue},
now.Unix(), limit, maxDeadTasks).Result()
if err != nil {
return 0, err
@@ -570,19 +649,19 @@ func (r *RDB) DeleteScheduledTask(id xid.ID, score int64) error {
return r.deleteTask(base.ScheduledQueue, id.String(), float64(score))
}
var deleteTaskCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
return 1
end
end
return 0`)
func (r *RDB) deleteTask(zset, id string, score float64) error {
res, err := deleteTaskCmd.Run(r.client, []string{zset}, score, id).Result()
if err != nil {
return err
}
@@ -610,3 +689,144 @@ func (r *RDB) DeleteAllRetryTasks() error {
func (r *RDB) DeleteAllScheduledTasks() error {
return r.client.Del(base.ScheduledQueue).Err()
}
// ErrQueueNotFound indicates specified queue does not exist.
type ErrQueueNotFound struct {
qname string
}
func (e *ErrQueueNotFound) Error() string {
return fmt.Sprintf("queue %q does not exist", e.qname)
}
// ErrQueueNotEmpty indicates specified queue is not empty.
type ErrQueueNotEmpty struct {
qname string
}
func (e *ErrQueueNotEmpty) Error() string {
return fmt.Sprintf("queue %q is not empty", e.qname)
}
// Skip checking whether queue is empty before removing.
var removeQueueForceCmd = redis.NewScript(`
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")`)
// Checks whether queue is empty before removing.
var removeQueueCmd = redis.NewScript(`
local l = redis.call("LLEN", KEYS[2])
if l > 0 then
return redis.error_reply("LIST NOT EMPTY")
end
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")`)
// RemoveQueue removes the specified queue.
//
// If force is set to true, it will remove the queue regardless
// of whether the queue is empty.
// If force is set to false, it will only remove the queue if
// it is empty.
func (r *RDB) RemoveQueue(qname string, force bool) error {
var script *redis.Script
if force {
script = removeQueueForceCmd
} else {
script = removeQueueCmd
}
err := script.Run(r.client,
[]string{base.AllQueues, base.QueueKey(qname)},
force).Err()
if err != nil {
switch err.Error() {
case "LIST NOT FOUND":
return &ErrQueueNotFound{qname}
case "LIST NOT EMPTY":
return &ErrQueueNotEmpty{qname}
default:
return err
}
}
return nil
}
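// Example (an illustrative sketch, not part of the diff): callers can
// distinguish the sentinel error types above with a type switch. Assumes an
// *RDB named r in scope.
//
//	err := r.RemoveQueue("low", false) // non-forced: fails if the queue is non-empty
//	switch err.(type) {
//	case nil:
//		// queue removed
//	case *ErrQueueNotFound:
//		// no such queue
//	case *ErrQueueNotEmpty:
//		// retry with force=true to drop the remaining tasks
//	}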
// Note: Script also removes stale keys.
var listProcessesCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local ps = redis.call("GET", key)
if ps then
table.insert(res, ps)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)
// ListProcesses returns the list of process statuses.
func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
res, err := listProcessesCmd.Run(r.client,
[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
var processes []*base.ProcessInfo
for _, s := range data {
var ps base.ProcessInfo
err := json.Unmarshal([]byte(s), &ps)
if err != nil {
continue // skip bad data
}
processes = append(processes, &ps)
}
return processes, nil
}
// Note: Script also removes stale keys.
var listWorkersCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local workers = redis.call("HVALS", key)
for _, w in ipairs(workers) do
table.insert(res, w)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)
// ListWorkers returns the list of worker stats.
func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
res, err := listWorkersCmd.Run(r.client, []string{base.AllWorkers}, time.Now().UTC().Unix()).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
var workers []*base.WorkerInfo
for _, s := range data {
var w base.WorkerInfo
err := json.Unmarshal([]byte(s), &w)
if err != nil {
continue // skip bad data
}
workers = append(workers, &w)
}
return workers, nil
}
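// Example (sketch): reading back the snapshots written by WriteProcessState.
// Assumes an *RDB named r; error handling abbreviated.
//
//	procs, _ := r.ListProcesses()
//	workers, _ := r.ListWorkers()
//	fmt.Printf("%d process(es), %d in-flight worker(s)\n", len(procs), len(workers))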


@@ -13,11 +13,12 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/spf13/cast"
)
var (
// ErrDequeueTimeout indicates that the blocking dequeue operation timed out.
ErrDequeueTimeout = errors.New("blocking dequeue operation timed out")
// ErrNoProcessableTask indicates that there are no tasks ready to be processed.
ErrNoProcessableTask = errors.New("no tasks are ready for processing")
// ErrTaskNotFound indicates that a task that matches the given identifier was not found.
ErrTaskNotFound = errors.New("could not find a task")
@@ -40,24 +41,41 @@ func (r *RDB) Close() error {
return r.client.Close()
}
// KEYS[1] -> asynq:queues:<qname>
// KEYS[2] -> asynq:queues
// ARGV[1] -> task message data
var enqueueCmd = redis.NewScript(`
redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1`)
// Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
qname := base.QueuePrefix + msg.Queue
return r.client.LPush(qname, string(bytes)).Err()
key := base.QueueKey(msg.Queue)
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, bytes).Err()
}
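// Example (sketch): enqueueing also registers the destination queue in the
// asynq:queues set, atomically. Uses the asynqtest helper h seen later in
// this diff; the task type and payload are illustrative.
//
//	msg := h.NewTaskMessage("email:welcome", map[string]interface{}{"user_id": 42})
//	msg.Queue = "critical"
//	_ = r.Enqueue(msg) // LPUSH to asynq:queues:critical, SADD that key to asynq:queues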
// Dequeue blocks until there is a task available to be processed,
// once a task is available, it adds the task to "in progress" queue
// and returns the task. If there are no tasks for the entire timeout
// duration, it returns ErrDequeueTimeout.
func (r *RDB) Dequeue(timeout time.Duration) (*base.TaskMessage, error) {
data, err := r.client.BRPopLPush(base.DefaultQueue, base.InProgressQueue, timeout).Result()
// Dequeue queries the given queues in order, pops a task message from the
// first non-empty queue, and returns it.
// If all queues are empty, ErrNoProcessableTask is returned.
func (r *RDB) Dequeue(qnames ...string) (*base.TaskMessage, error) {
var data string
var err error
if len(qnames) == 1 {
data, err = r.dequeueSingle(base.QueueKey(qnames[0]))
} else {
// TODO(hibiken): Take keys as an argument and don't compute them every time
var keys []string
for _, q := range qnames {
keys = append(keys, base.QueueKey(q))
}
data, err = r.dequeue(keys...)
}
if err == redis.Nil {
return nil, ErrDequeueTimeout
return nil, ErrNoProcessableTask
}
if err != nil {
return nil, err
@@ -70,65 +88,80 @@ func (r *RDB) Dequeue(timeout time.Duration) (*base.TaskMessage, error) {
return &msg, nil
}
func (r *RDB) dequeueSingle(queue string) (data string, err error) {
// timeout needed to avoid blocking forever
return r.client.BRPopLPush(queue, base.InProgressQueue, time.Second).Result()
}
// KEYS[1] -> asynq:in_progress
// ARGV -> List of queues to query in order
var dequeueCmd = redis.NewScript(`
local res
for _, qkey in ipairs(ARGV) do
res = redis.call("RPOPLPUSH", qkey, KEYS[1])
if res then
return res
end
end
return res`)
func (r *RDB) dequeue(queues ...string) (data string, err error) {
var args []interface{}
for _, qkey := range queues {
args = append(args, qkey)
}
res, err := dequeueCmd.Run(r.client, []string{base.InProgressQueue}, args...).Result()
if err != nil {
return "", err
}
return cast.ToStringE(res)
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
// Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
local n = redis.call("INCR", KEYS[2])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
end
return redis.status_reply("OK")
`)
// Done removes the task from in-progress queue to mark the task as done.
func (r *RDB) Done(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
// Note: LREM count ZERO means "remove all elements equal to val"
// Note: Script will try removing the message by exact match first,
// if the task is mutated and exact match is not found, it'll fallback
// to finding a match with ID.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
script := redis.NewScript(`
local x = redis.call("LREM", KEYS[1], 0, ARGV[1])
if tonumber(x) == 0 then
local target = cjson.decode(ARGV[1])
local data = redis.call("LRANGE", KEYS[1], 0, -1)
for _, s in ipairs(data) do
local msg = cjson.decode(s)
if target["ID"] == msg["ID"] then
redis.call("LREM", KEYS[1], 0, s)
break
end
end
end
local n = redis.call("INCR", KEYS[2])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
end
return redis.status_reply("OK")
`)
now := time.Now()
processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL)
return script.Run(r.client,
return doneCmd.Run(r.client,
[]string{base.InProgressQueue, processedKey},
string(bytes), expireAt.Unix()).Err()
bytes, expireAt.Unix()).Err()
}
// Requeue moves the task from in-progress queue to the default
// queue.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:<qname>
// ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("RPUSH", KEYS[2], ARGV[1])
return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
// Note: Use RPUSH to push to the head of the queue.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:default
// ARGV[1] -> base.TaskMessage value
script := redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("RPUSH", KEYS[2], ARGV[1])
return redis.status_reply("OK")
`)
return script.Run(r.client,
[]string{base.InProgressQueue, base.DefaultQueue},
return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.QueueKey(msg.Queue)},
string(bytes)).Err()
}
@@ -143,6 +176,27 @@ func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
&redis.Z{Member: string(bytes), Score: score}).Err()
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp
var retryCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4])
end
return redis.status_reply("OK")`)
// Retry moves the task from in-progress to retry queue, incrementing retry count
// and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
@@ -157,46 +211,11 @@ func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) e
if err != nil {
return err
}
// Note: Script will try removing the message by exact match first,
// if the task is mutated and exact match is not found, it'll fallback
// to finding a match with ID.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp
script := redis.NewScript(`
local x = redis.call("LREM", KEYS[1], 0, ARGV[1])
if tonumber(x) == 0 then
local target = cjson.decode(ARGV[1])
local data = redis.call("LRANGE", KEYS[1], 0, -1)
for _, s in ipairs(data) do
local msg = cjson.decode(s)
if target["ID"] == msg["ID"] then
redis.call("LREM", KEYS[1], 0, s)
break
end
end
end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4])
end
return redis.status_reply("OK")
`)
now := time.Now()
processedKey := base.ProcessedKey(now)
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
return script.Run(r.client,
return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.RetryQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), processAt.Unix(), expireAt.Unix()).Err()
}
@@ -206,6 +225,31 @@ const (
deadExpirationInDays = 90
)
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:dead
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp
var killCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6])
end
return redis.status_reply("OK")`)
// Kill sends the task to "dead" queue from in-progress queue, assigning
// the error message to the task.
// It also trims the dead queue by timestamp and by size.
@@ -225,61 +269,27 @@ func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
processedKey := base.ProcessedKey(now)
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
// Note: Script will try removing the message by exact match first,
// if the task is mutated and exact match is not found, it'll fallback
// to finding a match with ID.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:dead
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp
script := redis.NewScript(`
local x = redis.call("LREM", KEYS[1], 0, ARGV[1])
if tonumber(x) == 0 then
local target = cjson.decode(ARGV[1])
local data = redis.call("LRANGE", KEYS[1], 0, -1)
for _, s in ipairs(data) do
local msg = cjson.decode(s)
if target["ID"] == msg["ID"] then
redis.call("LREM", KEYS[1], 0, s)
break
end
end
end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6])
end
return redis.status_reply("OK")
`)
return script.Run(r.client,
return killCmd.Run(r.client,
[]string{base.InProgressQueue, base.DeadQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err()
}
// RestoreUnfinished moves all tasks from in-progress list to the queue
// KEYS[1] -> asynq:in_progress
// ARGV[1] -> queue prefix
var requeueAllCmd = redis.NewScript(`
local msgs = redis.call("LRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("RPUSH", qkey, msg)
redis.call("LREM", KEYS[1], 0, msg)
end
return table.getn(msgs)`)
// RequeueAll moves all tasks from in-progress list to the queue
// and reports the number of tasks restored.
func (r *RDB) RestoreUnfinished() (int64, error) {
script := redis.NewScript(`
local len = redis.call("LLEN", KEYS[1])
for i = len, 1, -1 do
redis.call("RPOPLPUSH", KEYS[1], KEYS[2])
end
return len
`)
res, err := script.Run(r.client, []string{base.InProgressQueue, base.DefaultQueue}).Result()
func (r *RDB) RequeueAll() (int64, error) {
res, err := requeueAllCmd.Run(r.client, []string{base.InProgressQueue}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -292,28 +302,142 @@ func (r *RDB) RestoreUnfinished() (int64, error) {
// CheckAndEnqueue checks for all scheduled tasks and enqueues any tasks that
// have to be processed.
func (r *RDB) CheckAndEnqueue() error {
//
// qnames specifies to which queues to send tasks.
func (r *RDB) CheckAndEnqueue(qnames ...string) error {
delayed := []string{base.ScheduledQueue, base.RetryQueue}
for _, zset := range delayed {
if err := r.forward(zset); err != nil {
var err error
if len(qnames) == 1 {
err = r.forwardSingle(zset, base.QueueKey(qnames[0]))
} else {
err = r.forward(zset)
}
if err != nil {
return err
}
}
return nil
}
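// Example (sketch): the scheduler calls this with the configured queue names;
// with exactly one name the cheaper single-destination script is used.
//
//	_ = r.CheckAndEnqueue("default")         // forwardSingle path
//	_ = r.CheckAndEnqueue("critical", "low") // generic forward path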
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// ARGV[1] -> current unix time
// ARGV[2] -> queue prefix
var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[2] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forward moves all tasks with a score less than the current unix time
// from the given zset to the default queue.
func (r *RDB) forward(from string) error {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
redis.call("LPUSH", KEYS[2], msg)
end
return msgs
`)
// from the src zset.
func (r *RDB) forward(src string) error {
now := float64(time.Now().Unix())
return script.Run(r.client,
[]string{from, base.DefaultQueue}, now).Err()
return forwardCmd.Run(r.client,
[]string{src}, now, base.QueuePrefix).Err()
}
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// KEYS[2] -> destination queue
var forwardSingleCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("LPUSH", KEYS[2], msg)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forwardSingle moves all tasks with a score less than the current unix time
// from the src zset to dst list.
func (r *RDB) forwardSingle(src, dst string) error {
now := float64(time.Now().Unix())
return forwardSingleCmd.Run(r.client,
[]string{src, dst}, now).Err()
}
// KEYS[1] -> asynq:ps:<host:pid>
// KEYS[2] -> asynq:ps
// KEYS[3] -> asynq:workers:<host:pid>
// KEYS[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
// ARGV[3] -> process info
// ARGV[4:] -> alternating key-value pairs of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeProcessInfoCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1])
redis.call("DEL", KEYS[3])
for i = 4, table.getn(ARGV)-1, 2 do
redis.call("HSET", KEYS[3], ARGV[i], ARGV[i+1])
end
redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`)
// WriteProcessState writes process state data to redis with an expiration set to the given ttl.
func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error {
info := ps.Get()
bytes, err := json.Marshal(info)
if err != nil {
return err
}
var args []interface{} // args to the lua script
exp := time.Now().Add(ttl).UTC()
workers := ps.GetWorkers()
args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
for _, w := range workers {
bytes, err := json.Marshal(w)
if err != nil {
continue // skip bad data
}
args = append(args, w.ID.String(), bytes)
}
pkey := base.ProcessInfoKey(info.Host, info.PID)
wkey := base.WorkersKey(info.Host, info.PID)
return writeProcessInfoCmd.Run(r.client,
[]string{pkey, base.AllProcesses, wkey, base.AllWorkers},
args...).Err()
}
// KEYS[1] -> asynq:ps
// KEYS[2] -> asynq:ps:<host:pid>
// KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers:<host:pid>
var clearProcessInfoCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`)
// ClearProcessState deletes process state data from redis.
func (r *RDB) ClearProcessState(ps *base.ProcessState) error {
info := ps.Get()
host, pid := info.Host, info.PID
pkey := base.ProcessInfoKey(host, pid)
wkey := base.WorkersKey(host, pid)
return clearProcessInfoCmd.Run(r.client,
[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err()
}
// CancelationPubSub returns a pubsub for cancelation messages.
func (r *RDB) CancelationPubSub() (*redis.PubSub, error) {
pubsub := r.client.Subscribe(base.CancelChannel)
_, err := pubsub.Receive()
if err != nil {
return nil, err
}
return pubsub, nil
}
// PublishCancelation publishes a cancelation message to all subscribers.
// The message is the ID for the task to be canceled.
func (r *RDB) PublishCancelation(id string) error {
return r.client.Publish(base.CancelChannel, id).Err()
}
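// Example (sketch): a subscriber loop pairing CancelationPubSub with
// PublishCancelation. Each published payload is the ID of the task to cancel.
//
//	pubsub, err := r.CancelationPubSub()
//	if err != nil {
//		// handle error
//	}
//	for msg := range pubsub.Channel() {
//		fmt.Println("cancel requested for task id =", msg.Payload)
//	}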

logger.go

@@ -0,0 +1,35 @@
package asynq
import (
"io"
"log"
"os"
)
// global logger used in asynq package.
var logger = newLogger(os.Stderr)
func newLogger(out io.Writer) *asynqLogger {
return &asynqLogger{
log.New(out, "", log.Ldate|log.Ltime|log.Lmicroseconds|log.LUTC),
}
}
type asynqLogger struct {
*log.Logger
}
func (l *asynqLogger) info(format string, args ...interface{}) {
format = "INFO: " + format
l.Printf(format, args...)
}
func (l *asynqLogger) warn(format string, args ...interface{}) {
format = "WARN: " + format
l.Printf(format, args...)
}
func (l *asynqLogger) error(format string, args ...interface{}) {
format = "ERROR: " + format
l.Printf(format, args...)
}

logger_test.go

@@ -0,0 +1,117 @@
package asynq
import (
"bytes"
"fmt"
"regexp"
"testing"
)
// regexp for timestamps
const (
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
)
type tester struct {
desc string
message string
wantPattern string // regexp that log output must match
}
func TestLoggerInfo(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.info(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerWarn(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.warn(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerError(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := newLogger(&buf)
logger.error(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}


@@ -11,9 +11,10 @@ import (
"github.com/spf13/cast"
)
// Payload is an arbitrary data needed for task execution.
// The values have to be JSON serializable.
type Payload map[string]interface{}
// Payload holds arbitrary data needed for task execution.
type Payload struct {
data map[string]interface{}
}
type errKeyNotFound struct {
key string
@@ -25,14 +26,14 @@ func (e *errKeyNotFound) Error() string {
// Has reports whether key exists.
func (p Payload) Has(key string) bool {
_, ok := p[key]
_, ok := p.data[key]
return ok
}
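// Example (sketch): with the map now unexported, a Payload is built
// in-package from a plain map (as the tests below do), and the typed
// getters convert on read.
//
//	p := Payload{map[string]interface{}{"user_id": 42}}
//	if p.Has("user_id") {
//		id, err := p.GetInt("user_id") // 42, nil
//		_, _ = id, err
//	}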
// GetString returns a string value if a string type is associated with
// the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return "", &errKeyNotFound{key}
}
@@ -42,7 +43,7 @@ func (p Payload) GetString(key string) (string, error) {
// GetInt returns an int value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetInt(key string) (int, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
@@ -52,7 +53,7 @@ func (p Payload) GetInt(key string) (int, error) {
// GetFloat64 returns a float64 value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetFloat64(key string) (float64, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
@@ -62,7 +63,7 @@ func (p Payload) GetFloat64(key string) (float64, error) {
// GetBool returns a boolean value if a boolean type is associated with
// the key, otherwise reports an error.
func (p Payload) GetBool(key string) (bool, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return false, &errKeyNotFound{key}
}
@@ -72,7 +73,7 @@ func (p Payload) GetBool(key string) (bool, error) {
// GetStringSlice returns a slice of strings if a string slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetStringSlice(key string) ([]string, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -82,7 +83,7 @@ func (p Payload) GetStringSlice(key string) ([]string, error) {
// GetIntSlice returns a slice of ints if an int slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetIntSlice(key string) ([]int, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -93,7 +94,7 @@ func (p Payload) GetIntSlice(key string) ([]int, error) {
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMap(key string) (map[string]interface{}, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -104,7 +105,7 @@ func (p Payload) GetStringMap(key string) (map[string]interface{}, error) {
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapString(key string) (map[string]string, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -115,7 +116,7 @@ func (p Payload) GetStringMapString(key string) (map[string]string, error) {
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapStringSlice(key string) (map[string][]string, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -126,7 +127,7 @@ func (p Payload) GetStringMapStringSlice(key string) (map[string][]string, error
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -137,7 +138,7 @@ func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapBool(key string) (map[string]bool, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
@@ -147,7 +148,7 @@ func (p Payload) GetStringMapBool(key string) (map[string]bool, error) {
// GetTime returns a time value if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetTime(key string) (time.Time, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return time.Time{}, &errKeyNotFound{key}
}
@@ -157,7 +158,7 @@ func (p Payload) GetTime(key string) (time.Time, error) {
// GetDuration returns a duration value if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetDuration(key string) (time.Duration, error) {
v, ok := p[key]
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}


@@ -10,6 +10,8 @@ import (
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
func TestPayloadGet(t *testing.T) {
@@ -19,7 +21,7 @@ func TestPayloadGet(t *testing.T) {
location := map[string]string{"address": "123 Main St.", "state": "NY", "zipcode": "10002"}
favs := map[string][]string{
"movies": []string{"forrest gump", "star wars"},
"tv_shows": []string{"game of throwns", "HIMYM", "breaking bad"},
"tv_shows": []string{"game of thrones", "HIMYM", "breaking bad"},
}
counter := map[string]int{
"a": 1,
@@ -34,7 +36,7 @@ func TestPayloadGet(t *testing.T) {
now := time.Now()
duration := 15 * time.Minute
payload := Payload{
data := map[string]interface{}{
"greeting": "Hello",
"user_id": 9876,
"pi": 3.1415,
@@ -49,6 +51,7 @@ func TestPayloadGet(t *testing.T) {
"timestamp": now,
"duration": duration,
}
payload := Payload{data}
gotStr, err := payload.GetString("greeting")
if gotStr != "Hello" || err != nil {
@@ -151,7 +154,7 @@ func TestPayloadGetWithMarshaling(t *testing.T) {
now := time.Now()
duration := 15 * time.Minute
in := Payload{
in := Payload{map[string]interface{}{
"subject": "Hello",
"recipient_id": 9876,
"pi": 3.14,
@@ -165,18 +168,19 @@ func TestPayloadGetWithMarshaling(t *testing.T) {
"features": features,
"timestamp": now,
"duration": duration,
}
// encode and then decode
data, err := json.Marshal(in)
}}
// encode and then decode task message
inMsg := h.NewTaskMessage("testing", in.data)
data, err := json.Marshal(inMsg)
if err != nil {
t.Fatal(err)
}
var out Payload
err = json.Unmarshal(data, &out)
var outMsg base.TaskMessage
err = json.Unmarshal(data, &outMsg)
if err != nil {
t.Fatal(err)
}
out := Payload{outMsg.Payload}
gotStr, err := out.GetString("subject")
if gotStr != "Hello" || err != nil {
@@ -257,11 +261,94 @@ func TestPayloadGetWithMarshaling(t *testing.T) {
}
}
func TestPayloadHas(t *testing.T) {
payload := Payload{
"user_id": 123,
func TestPayloadKeyNotFound(t *testing.T) {
payload := Payload{nil}
key := "something"
gotStr, err := payload.GetString(key)
if err == nil || gotStr != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
key, gotStr, err)
}
gotInt, err := payload.GetInt(key)
if err == nil || gotInt != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
key, gotInt, err)
}
gotFloat, err := payload.GetFloat64(key)
if err == nil || gotFloat != 0 {
t.Errorf("Payload.GetFloat64(%q = %v, %v; want 0, error",
key, gotFloat, err)
}
gotBool, err := payload.GetBool(key)
if err == nil || gotBool != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
key, gotBool, err)
}
gotStrSlice, err := payload.GetStringSlice(key)
if err == nil || gotStrSlice != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
key, gotStrSlice, err)
}
gotIntSlice, err := payload.GetIntSlice(key)
if err == nil || gotIntSlice != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
key, gotIntSlice, err)
}
gotStrMap, err := payload.GetStringMap(key)
if err == nil || gotStrMap != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
key, gotStrMap, err)
}
gotStrMapStr, err := payload.GetStringMapString(key)
if err == nil || gotStrMapStr != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
key, gotStrMapStr, err)
}
gotStrMapStrSlice, err := payload.GetStringMapStringSlice(key)
if err == nil || gotStrMapStrSlice != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
key, gotStrMapStrSlice, err)
}
gotStrMapInt, err := payload.GetStringMapInt(key)
if err == nil || gotStrMapInt != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want nil, error",
key, gotStrMapInt, err)
}
gotStrMapBool, err := payload.GetStringMapBool(key)
if err == nil || gotStrMapBool != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want nil, error",
key, gotStrMapBool, err)
}
gotTime, err := payload.GetTime(key)
if err == nil || !gotTime.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, error",
key, gotTime, err, time.Time{})
}
gotDuration, err := payload.GetDuration(key)
if err == nil || gotDuration != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want 0, error",
key, gotDuration, err)
}
}
func TestPayloadHas(t *testing.T) {
payload := Payload{map[string]interface{}{
"user_id": 123,
}}
if !payload.Has("user_id") {
t.Errorf("Payload.Has(%q) = false, want true", "user_id")
}


@@ -5,26 +5,39 @@
package asynq
import (
"context"
"fmt"
"log"
"math/rand"
"sort"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"golang.org/x/time/rate"
)
type processor struct {
rdb *rdb.RDB
ps *base.ProcessState
handler Handler
queueConfig map[string]int
// orderedQueues is set only in strict-priority mode.
orderedQueues []string
retryDelayFunc retryDelayFunc
// timeout for blocking dequeue operation.
// dequeue needs to time out to avoid blocking forever
// in case of a program shutdown or addition of a new queue.
dequeueTimeout time.Duration
errHandler ErrorHandler
// channel via which to send sync requests to syncer.
syncRequestCh chan<- *syncRequest
// rate limiter to prevent spamming logs with a bunch of errors.
errLogLimiter *rate.Limiter
// sema is a counting semaphore to ensure the number of active workers
// does not exceed the limit.
@@ -40,20 +53,37 @@ type processor struct {
// quit channel communicates to the in-flight worker goroutines to stop.
quit chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks.
cancelations *base.Cancelations
}
type retryDelayFunc func(n int, err error, task *Task) time.Duration
func newProcessor(r *rdb.RDB, n int, fn retryDelayFunc) *processor {
// newProcessor constructs a new processor.
func newProcessor(r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc,
syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor {
info := ps.Get()
qcfg := normalizeQueueCfg(info.Queues)
orderedQueues := []string(nil)
if info.StrictPriority {
orderedQueues = sortByPriority(qcfg)
}
return &processor{
rdb: r,
ps: ps,
queueConfig: qcfg,
orderedQueues: orderedQueues,
retryDelayFunc: fn,
dequeueTimeout: 2 * time.Second,
sema: make(chan struct{}, n),
syncRequestCh: syncCh,
cancelations: c,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, info.Concurrency),
done: make(chan struct{}),
abort: make(chan struct{}),
quit: make(chan struct{}),
handler: HandlerFunc(func(t *Task) error { return fmt.Errorf("handler not set") }),
errHandler: errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
}
}
@@ -61,7 +91,7 @@ func newProcessor(r *rdb.RDB, n int, fn retryDelayFunc) *processor {
// It's safe to call this method multiple times.
func (p *processor) stop() {
p.once.Do(func() {
log.Println("[INFO] Processor shutting down...")
logger.info("Processor shutting down...")
// Unblock if processor is waiting for sema token.
close(p.abort)
// Signal the processor goroutine to stop processing tasks
@@ -77,24 +107,32 @@ func (p *processor) terminate() {
// IDEA: Allow user to customize this timeout value.
const timeout = 8 * time.Second
time.AfterFunc(timeout, func() { close(p.quit) })
log.Println("[INFO] Waiting for all workers to finish...")
logger.info("Waiting for all workers to finish...")
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
// block until all workers have released the token
for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{}
}
log.Println("[INFO] All workers have finished.")
logger.info("All workers have finished")
p.restore() // move any unfinished tasks back to the queue.
}
func (p *processor) start() {
func (p *processor) start(wg *sync.WaitGroup) {
// NOTE: The call to "restore" needs to complete before starting
// the processor goroutine.
p.restore()
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-p.done:
log.Println("[INFO] Processor done.")
logger.info("Processor done")
return
default:
p.exec()
@@ -106,13 +144,22 @@ func (p *processor) start() {
// exec pulls a task out of the queue and starts a worker goroutine to
// process the task.
func (p *processor) exec() {
msg, err := p.rdb.Dequeue(p.dequeueTimeout)
if err == rdb.ErrDequeueTimeout {
// timed out, this is a normal behavior.
qnames := p.queues()
msg, err := p.rdb.Dequeue(qnames...)
if err == rdb.ErrNoProcessableTask {
// queues are empty, this is a normal behavior.
if len(p.queueConfig) > 1 {
// sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: With multiple queues, we are not using blocking pop operation and
// polling queues instead. This adds significant load to redis.
time.Sleep(time.Second)
}
return
}
if err != nil {
log.Printf("[ERROR] unexpected error while pulling a task out of queue: %v\n", err)
if p.errLogLimiter.Allow() {
logger.error("Dequeue error: %v", err)
}
return
}
@@ -122,19 +169,26 @@ func (p *processor) exec() {
p.requeue(msg)
return
case p.sema <- struct{}{}: // acquire token
p.ps.AddWorkerStats(msg, time.Now())
go func() {
defer func() { <-p.sema /* release token */ }()
defer func() {
p.ps.DeleteWorkerStats(msg)
<-p.sema /* release token */
}()
resCh := make(chan error, 1)
task := &Task{Type: msg.Type, Payload: msg.Payload}
task := NewTask(msg.Type, msg.Payload)
ctx, cancel := createContext(msg)
p.cancelations.Add(msg.ID.String(), cancel)
go func() {
resCh <- perform(p.handler, task)
resCh <- perform(ctx, task, p.handler)
p.cancelations.Delete(msg.ID.String())
}()
select {
case <-p.quit:
// time is up, quit this worker goroutine.
log.Printf("[WARN] Terminating in-progress task %+v\n", msg)
logger.warn("Quitting worker to process task id=%s", msg.ID)
return
case resErr := <-resCh:
// Note: One of three things should happen.
@@ -142,6 +196,9 @@ func (p *processor) exec() {
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead
if resErr != nil {
if p.errHandler != nil {
p.errHandler.HandleError(task, resErr, msg.Retried, msg.Retry)
}
if msg.Retried >= msg.Retry {
p.kill(msg, resErr)
} else {
@@ -158,54 +215,190 @@ func (p *processor) exec() {
// restore moves all tasks from "in-progress" back to queue
// to restore all unfinished tasks.
func (p *processor) restore() {
n, err := p.rdb.RestoreUnfinished()
n, err := p.rdb.RequeueAll()
if err != nil {
log.Printf("[ERROR] Could not restore unfinished tasks: %v\n", err)
logger.error("Could not restore unfinished tasks: %v", err)
}
if n > 0 {
log.Printf("[INFO] Restored %d unfinished tasks back to queue.\n", n)
logger.info("Restored %d unfinished tasks back to queue", n)
}
}
func (p *processor) requeue(msg *base.TaskMessage) {
err := p.rdb.Requeue(msg)
if err != nil {
log.Printf("[ERROR] Could not move task from InProgress back to queue: %v\n", err)
logger.error("Could not push task id=%s back to queue: %v", msg.ID, err)
}
}
func (p *processor) markAsDone(msg *base.TaskMessage) {
err := p.rdb.Done(msg)
if err != nil {
log.Printf("[ERROR] Could not remove task from InProgress queue: %v\n", err)
errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue)
logger.warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Done(msg)
},
errMsg: errMsg,
}
}
}
func (p *processor) retry(msg *base.TaskMessage, e error) {
d := p.retryDelayFunc(msg.Retried, e, &Task{Type: msg.Type, Payload: msg.Payload})
d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(d)
err := p.rdb.Retry(msg, retryAt, e.Error())
if err != nil {
log.Printf("[ERROR] Could not send task %+v to Retry queue: %v\n", msg, err)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
logger.warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Retry(msg, retryAt, e.Error())
},
errMsg: errMsg,
}
}
}
func (p *processor) kill(msg *base.TaskMessage, e error) {
log.Printf("[WARN] Retry exhausted for task(Type: %q, ID: %v)\n", msg.Type, msg.ID)
logger.warn("Retry exhausted for task id=%s", msg.ID)
err := p.rdb.Kill(msg, e.Error())
if err != nil {
log.Printf("[ERROR] Could not send task %+v to Dead queue: %v\n", msg, err)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
logger.warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Kill(msg, e.Error())
},
errMsg: errMsg,
}
}
}
// queues returns a list of queues to query.
// Order of the queue names is based on the priority of each queue.
// Queue names are sorted by their priority level if strict-priority is true.
// If strict-priority is false, the order of queue names is roughly based on
// the priority level, but randomized to avoid starving low-priority queues.
func (p *processor) queues() []string {
// skip the overhead of generating a list of queue names
// if we are processing one queue.
if len(p.queueConfig) == 1 {
for qname := range p.queueConfig {
return []string{qname}
}
}
if p.orderedQueues != nil {
return p.orderedQueues
}
var names []string
for qname, priority := range p.queueConfig {
for i := 0; i < int(priority); i++ {
names = append(names, qname)
}
}
r := rand.New(rand.NewSource(time.Now().UnixNano()))
r.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
return uniq(names, len(p.queueConfig))
}
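// Example (sketch): with queueConfig {"critical": 3, "default": 2, "low": 1}
// and strict-priority off, the weighted slice contains "critical" three
// times, "default" twice, and "low" once; after shuffling, uniq trims it
// back to three distinct names, so "critical" is only more likely, not
// guaranteed, to be queried first.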
// perform calls the handler with the given task.
// If the call returns without panic, it simply returns the value,
// otherwise, it recovers from panic and returns an error.
func perform(h Handler, task *Task) (err error) {
func perform(ctx context.Context, task *Task, h Handler) (err error) {
defer func() {
if x := recover(); x != nil {
err = fmt.Errorf("panic: %v", x)
}
}()
return h.ProcessTask(task)
return h.ProcessTask(ctx, task)
}
// uniq dedupes elements and returns a slice of unique names of length l.
// Order of the output slice is based on the input list.
func uniq(names []string, l int) []string {
var res []string
seen := make(map[string]struct{})
for _, s := range names {
if _, ok := seen[s]; !ok {
seen[s] = struct{}{}
res = append(res, s)
}
if len(res) == l {
break
}
}
return res
}
// sortByPriority returns a list of queue names sorted by
// their priority level in descending order.
func sortByPriority(qcfg map[string]int) []string {
var queues []*queue
for qname, n := range qcfg {
queues = append(queues, &queue{qname, n})
}
sort.Sort(sort.Reverse(byPriority(queues)))
var res []string
for _, q := range queues {
res = append(res, q.name)
}
return res
}
type queue struct {
name string
priority int
}
type byPriority []*queue
func (x byPriority) Len() int { return len(x) }
func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority }
func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
// normalizeQueueCfg divides priority numbers by their
// greatest common divisor.
func normalizeQueueCfg(queueCfg map[string]int) map[string]int {
var xs []int
for _, x := range queueCfg {
xs = append(xs, x)
}
d := gcd(xs...)
res := make(map[string]int)
for q, x := range queueCfg {
res[q] = x / d
}
return res
}
func gcd(xs ...int) int {
fn := func(x, y int) int {
for y > 0 {
x, y = y, x%y
}
return x
}
res := xs[0]
for i := 0; i < len(xs); i++ {
res = fn(xs[i], res)
if res == 1 {
return 1
}
}
return res
}
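// Example (sketch): normalizeQueueCfg(map[string]int{"high": 6, "default": 4,
// "low": 2}) computes gcd(6, 4, 2) = 2 and returns {"high": 3, "default": 2,
// "low": 1}, keeping the weighted list built by queues() as short as possible.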
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (context.Context, context.CancelFunc) {
timeout, err := time.ParseDuration(msg.Timeout)
if err != nil {
logger.error("cannot parse timeout duration for %+v", msg)
return context.WithCancel(context.Background())
}
if timeout == 0 {
return context.WithCancel(context.Background())
}
return context.WithTimeout(context.Background(), timeout)
}
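// Example (sketch): handlers receive the context created above and should
// honor its deadline. A long-running handler might check ctx between units
// of work; chunks and process are illustrative names.
//
//	h := HandlerFunc(func(ctx context.Context, t *Task) error {
//		for _, chunk := range chunks {
//			select {
//			case <-ctx.Done():
//				return ctx.Err() // timed out or canceled
//			default:
//				process(chunk)
//			}
//		}
//		return nil
//	})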


@@ -5,12 +5,15 @@
package asynq
import (
"context"
"fmt"
"sort"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
@@ -25,10 +28,10 @@ func TestProcessorSuccess(t *testing.T) {
m3 := h.NewTaskMessage("reindex", nil)
m4 := h.NewTaskMessage("sync", nil)
t1 := &Task{Type: m1.Type, Payload: m1.Payload}
t2 := &Task{Type: m2.Type, Payload: m2.Payload}
t3 := &Task{Type: m3.Type, Payload: m3.Payload}
t4 := &Task{Type: m4.Type, Payload: m4.Payload}
t1 := NewTask(m1.Type, m1.Payload)
t2 := NewTask(m2.Type, m2.Payload)
t3 := NewTask(m3.Type, m3.Payload)
t4 := NewTask(m4.Type, m4.Payload)
tests := []struct {
enqueued []*base.TaskMessage // initial default queue state
@@ -51,23 +54,25 @@ func TestProcessorSuccess(t *testing.T) {
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedDefaultQueue(t, r, tc.enqueued) // initialize default queue.
h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue.
// instantiate a new processor
var mu sync.Mutex
var processed []*Task
handler := func(task *Task) error {
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
p := newProcessor(rdbClient, 10, defaultDelayFunc)
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
p.handler = HandlerFunc(handler)
p.dequeueTimeout = time.Second // short time out for test purpose
p.start()
var wg sync.WaitGroup
p.start(&wg)
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -78,7 +83,7 @@ func TestProcessorSuccess(t *testing.T) {
time.Sleep(tc.wait)
p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt); diff != "" {
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
@@ -115,43 +120,57 @@ func TestProcessorRetry(t *testing.T) {
now := time.Now()
tests := []struct {
enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run
delay time.Duration // retry delay duration
wait time.Duration // wait duration between starting and stopping processor for this test case
wantRetry []h.ZSetEntry // tasks in retry queue at the end
wantDead []*base.TaskMessage // tasks in dead queue at the end
enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run
delay time.Duration // retry delay duration
handler Handler // task handler
wait time.Duration // wait duration between starting and stopping processor for this test case
wantRetry []h.ZSetEntry // tasks in retry queue at the end
wantDead []*base.TaskMessage // tasks in dead queue at the end
wantErrCount int // number of times error handler should be called
}{
{
enqueued: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{m3, m4},
delay: time.Minute,
wait: time.Second,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg)
}),
wait: time.Second,
wantRetry: []h.ZSetEntry{
{Msg: &r2, Score: now.Add(time.Minute).Unix()},
{Msg: &r3, Score: now.Add(time.Minute).Unix()},
{Msg: &r4, Score: now.Add(time.Minute).Unix()},
{Msg: &r2, Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r3, Score: float64(now.Add(time.Minute).Unix())},
{Msg: &r4, Score: float64(now.Add(time.Minute).Unix())},
},
wantDead: []*base.TaskMessage{&r1},
wantDead: []*base.TaskMessage{&r1},
wantErrCount: 4,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedDefaultQueue(t, r, tc.enqueued) // initialize default queue.
h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue.
// instantiate a new processor
delayFunc := func(n int, e error, t *Task) time.Duration {
return tc.delay
}
handler := func(task *Task) error {
return fmt.Errorf(errMsg)
var (
mu sync.Mutex // guards n
n int // number of times error handler is called
)
errHandler := func(t *Task, err error, retried, maxRetry int) {
mu.Lock()
defer mu.Unlock()
n++
}
p := newProcessor(rdbClient, 10, delayFunc)
p.handler = HandlerFunc(handler)
p.dequeueTimeout = time.Second // short time out for test purpose
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler))
p.handler = tc.handler
p.start()
var wg sync.WaitGroup
p.start(&wg)
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -162,8 +181,9 @@ func TestProcessorRetry(t *testing.T) {
time.Sleep(tc.wait)
p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to second difference in zset score
gotRetry := h.GetRetryEntries(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt); diff != "" {
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff)
}
@@ -175,6 +195,126 @@ func TestProcessorRetry(t *testing.T) {
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
if n != tc.wantErrCount {
t.Errorf("error handler was called %d times, want %d", n, tc.wantErrCount)
}
}
}
func TestProcessorQueues(t *testing.T) {
sortOpt := cmp.Transformer("SortStrings", func(in []string) []string {
out := append([]string(nil), in...) // Copy input to avoid mutating it
sort.Strings(out)
return out
})
tests := []struct {
queueCfg map[string]int
want []string
}{
{
queueCfg: map[string]int{
"high": 6,
"default": 3,
"low": 1,
},
want: []string{"high", "default", "low"},
},
{
queueCfg: map[string]int{
"default": 1,
},
want: []string{"default"},
},
}
for _, tc := range tests {
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false)
p := newProcessor(nil, ps, defaultDelayFunc, nil, cancelations, nil)
got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
tc.queueCfg, got, tc.want, diff)
}
}
}
func TestProcessorWithStrictPriority(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("send_email", nil)
m2 := h.NewTaskMessage("send_email", nil)
m3 := h.NewTaskMessage("send_email", nil)
m4 := h.NewTaskMessage("gen_thumbnail", nil)
m5 := h.NewTaskMessage("gen_thumbnail", nil)
m6 := h.NewTaskMessage("sync", nil)
m7 := h.NewTaskMessage("sync", nil)
t1 := NewTask(m1.Type, m1.Payload)
t2 := NewTask(m2.Type, m2.Payload)
t3 := NewTask(m3.Type, m3.Payload)
t4 := NewTask(m4.Type, m4.Payload)
t5 := NewTask(m5.Type, m5.Payload)
t6 := NewTask(m6.Type, m6.Payload)
t7 := NewTask(m7.Type, m7.Payload)
tests := []struct {
enqueued map[string][]*base.TaskMessage // initial queues state
wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end
}{
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m4, m5},
"critical": {m1, m2, m3},
"low": {m6, m7},
},
wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4, t5, t6, t7},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
for qname, msgs := range tc.enqueued {
h.SeedEnqueuedQueue(t, r, msgs, qname)
}
// instantiate a new processor
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
queueCfg := map[string]int{
"critical": 3,
base.DefaultQueueName: 2,
"low": 1,
}
// Note: Set concurrency to 1 to make sure tasks are processed one at a time.
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
p := newProcessor(rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
p.handler = HandlerFunc(handler)
var wg sync.WaitGroup
p.start(&wg)
time.Sleep(tc.wait)
p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
}
}
@@ -187,32 +327,32 @@ func TestPerform(t *testing.T) {
}{
{
desc: "handler returns nil",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
return nil
},
task: &Task{Type: "gen_thumbnail", Payload: map[string]interface{}{"src": "some/img/path"}},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
wantErr: false,
},
{
desc: "handler returns error",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
return fmt.Errorf("something went wrong")
},
task: &Task{Type: "gen_thumbnail", Payload: map[string]interface{}{"src": "some/img/path"}},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
wantErr: true,
},
{
desc: "handler panics",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
panic("something went terribly wrong")
},
task: &Task{Type: "gen_thumbnail", Payload: map[string]interface{}{"src": "some/img/path"}},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
wantErr: true,
},
}
for _, tc := range tests {
got := perform(tc.handler, tc.task)
got := perform(context.Background(), tc.task, tc.handler)
if !tc.wantErr && got != nil {
t.Errorf("%s: perform() = %v, want nil", tc.desc, got)
continue

View File

@@ -5,7 +5,7 @@
package asynq
import (
"log"
"sync"
"time"
"github.com/hibiken/asynq/internal/rdb"
@@ -19,29 +19,39 @@ type scheduler struct {
// poll interval on average
avgInterval time.Duration
// list of queues to move the tasks into.
qnames []string
}
func newScheduler(r *rdb.RDB, avgInterval time.Duration) *scheduler {
func newScheduler(r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler {
var qnames []string
for q := range qcfg {
qnames = append(qnames, q)
}
return &scheduler{
rdb: r,
done: make(chan struct{}),
avgInterval: avgInterval,
qnames: qnames,
}
}
func (s *scheduler) terminate() {
log.Println("[INFO] Scheduler shutting down...")
logger.info("Scheduler shutting down...")
// Signal the scheduler goroutine to stop polling.
s.done <- struct{}{}
}
// start starts the "scheduler" goroutine.
func (s *scheduler) start() {
func (s *scheduler) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-s.done:
log.Println("[INFO] Scheduler done.")
logger.info("Scheduler done")
return
case <-time.After(s.avgInterval):
s.exec()
@@ -51,7 +61,7 @@ func (s *scheduler) start() {
}
func (s *scheduler) exec() {
if err := s.rdb.CheckAndEnqueue(); err != nil {
log.Printf("[ERROR] could not forward scheduled tasks: %v\n", err)
if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil {
logger.error("Could not enqueue scheduled tasks: %v", err)
}
}

View File

@@ -5,6 +5,7 @@
package asynq
import (
"sync"
"testing"
"time"
@@ -18,7 +19,7 @@ func TestScheduler(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(rdbClient, pollInterval)
s := newScheduler(rdbClient, pollInterval, defaultQueueConfig)
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)
@@ -36,11 +37,11 @@ func TestScheduler(t *testing.T) {
}{
{
initScheduled: []h.ZSetEntry{
{Msg: t1, Score: now.Add(time.Hour).Unix()},
{Msg: t2, Score: now.Add(-2 * time.Second).Unix()},
{Msg: t1, Score: float64(now.Add(time.Hour).Unix())},
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())},
},
initRetry: []h.ZSetEntry{
{Msg: t3, Score: time.Now().Add(-500 * time.Millisecond).Unix()},
{Msg: t3, Score: float64(time.Now().Add(-500 * time.Millisecond).Unix())},
},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
@@ -50,9 +51,9 @@ func TestScheduler(t *testing.T) {
},
{
initScheduled: []h.ZSetEntry{
{Msg: t1, Score: now.Unix()},
{Msg: t2, Score: now.Add(-2 * time.Second).Unix()},
{Msg: t3, Score: now.Add(-500 * time.Millisecond).Unix()},
{Msg: t1, Score: float64(now.Unix())},
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())},
{Msg: t3, Score: float64(now.Add(-500 * time.Millisecond).Unix())},
},
initRetry: []h.ZSetEntry{},
initQueue: []*base.TaskMessage{t4},
@@ -67,9 +68,10 @@ func TestScheduler(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedScheduledQueue(t, r, tc.initScheduled) // initialize scheduled queue
h.SeedRetryQueue(t, r, tc.initRetry) // initialize retry queue
h.SeedDefaultQueue(t, r, tc.initQueue) // initialize default queue
h.SeedEnqueuedQueue(t, r, tc.initQueue) // initialize default queue
s.start()
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.terminate()

139
servemux.go Normal file
View File

@@ -0,0 +1,139 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"sort"
"strings"
"sync"
)
// ServeMux is a multiplexer for asynchronous tasks.
// It matches the type of each task against a list of registered patterns
// and calls the handler for the pattern that most closely matches the
// task's type name.
//
// Longer patterns take precedence over shorter ones, so that if there are
// handlers registered for both "images" and "images:thumbnails",
// the latter handler will be called for tasks with a type name beginning with
// "images:thumbnails" and the former will receive tasks with type name beginning
// with "images".
type ServeMux struct {
mu sync.RWMutex
m map[string]muxEntry
es []muxEntry // slice of entries sorted from longest to shortest.
}
type muxEntry struct {
h Handler
pattern string
}
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux {
return new(ServeMux)
}
// ProcessTask dispatches the task to the handler whose
// pattern most closely matches the task type.
func (mux *ServeMux) ProcessTask(ctx context.Context, task *Task) error {
h, _ := mux.Handler(task)
return h.ProcessTask(ctx, task)
}
// Handler returns the handler to use for the given task.
// It always returns a non-nil handler.
//
// Handler also returns the registered pattern that matches the task.
//
// If there is no registered handler that applies to the task,
// Handler returns a 'not found' handler which returns an error.
func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
mux.mu.RLock()
defer mux.mu.RUnlock()
h, pattern = mux.match(t.Type)
if h == nil {
h, pattern = NotFoundHandler(), ""
}
return h, pattern
}
// Find a handler on a handler map given a typename string.
// Most-specific (longest) pattern wins.
func (mux *ServeMux) match(typename string) (h Handler, pattern string) {
// Check for exact match first.
v, ok := mux.m[typename]
if ok {
return v.h, v.pattern
}
// Check for longest valid match.
// mux.es contains all patterns from longest to shortest.
for _, e := range mux.es {
if strings.HasPrefix(typename, e.pattern) {
return e.h, e.pattern
}
}
return nil, ""
}
// Handle registers the handler for the given pattern.
// If a handler already exists for pattern, Handle panics.
func (mux *ServeMux) Handle(pattern string, handler Handler) {
mux.mu.Lock()
defer mux.mu.Unlock()
if pattern == "" {
panic("asynq: invalid pattern")
}
if handler == nil {
panic("asynq: nil handler")
}
if _, exist := mux.m[pattern]; exist {
panic("asynq: multiple registrations for " + pattern)
}
if mux.m == nil {
mux.m = make(map[string]muxEntry)
}
e := muxEntry{h: handler, pattern: pattern}
mux.m[pattern] = e
mux.es = appendSorted(mux.es, e)
}
func appendSorted(es []muxEntry, e muxEntry) []muxEntry {
n := len(es)
i := sort.Search(n, func(i int) bool {
return len(es[i].pattern) < len(e.pattern)
})
if i == n {
return append(es, e)
}
// we now know that i points at where we want to insert.
es = append(es, muxEntry{}) // try to grow the slice in place, any entry works.
copy(es[i+1:], es[i:]) // shift shorter entries down.
es[i] = e
return es
}
// HandleFunc registers the handler function for the given pattern.
func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *Task) error) {
if handler == nil {
panic("asynq: nil handler")
}
mux.Handle(pattern, HandlerFunc(handler))
}
// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
return fmt.Errorf("handler not found for task %q", task.Type)
}
// NotFoundHandler returns a simple task handler that returns a ``not found`` error.
func NotFoundHandler() Handler { return HandlerFunc(NotFound) }
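As a usage sketch of the matching rules described above (longest registered pattern wins; the pattern and type names are illustrative):

```go
package main

import (
	"context"

	"github.com/hibiken/asynq"
)

func main() {
	mux := asynq.NewServeMux()
	mux.HandleFunc("images", func(ctx context.Context, t *asynq.Task) error {
		return nil // catches any task whose type begins with "images"
	})
	mux.HandleFunc("images:thumbnails", func(ctx context.Context, t *asynq.Task) error {
		return nil // longer pattern: wins for "images:thumbnails..." types
	})

	// Dispatch resolves to the longest matching prefix.
	_, pattern := mux.Handler(asynq.NewTask("images:thumbnails:small", nil))
	println(pattern) // images:thumbnails

	_, pattern = mux.Handler(asynq.NewTask("images:resize", nil))
	println(pattern) // images
}
```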

116
servemux_test.go Normal file
View File

@@ -0,0 +1,116 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
)
var called string
// makeFakeHandler returns a handler that updates the global called variable
// to the given identity.
func makeFakeHandler(identity string) Handler {
return HandlerFunc(func(ctx context.Context, t *Task) error {
called = identity
return nil
})
}
// A list of pattern-handler pairs registered with the mux.
var serveMuxRegister = []struct {
pattern string
h Handler
}{
{"email:", makeFakeHandler("default email handler")},
{"email:signup", makeFakeHandler("signup email handler")},
{"csv:export", makeFakeHandler("csv export handler")},
}
var serveMuxTests = []struct {
typename string // task's type name
want string // identifier of the handler that should be called
}{
{"email:signup", "signup email handler"},
{"csv:export", "csv export handler"},
{"email:daily", "default email handler"},
}
func TestServeMux(t *testing.T) {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
for _, tc := range serveMuxTests {
called = "" // reset to zero value
task := NewTask(tc.typename, nil)
if err := mux.ProcessTask(context.Background(), task); err != nil {
t.Fatal(err)
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
}
}
}
func TestServeMuxRegisterNilHandler(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.HandleFunc to panic")
}
}()
mux := NewServeMux()
mux.HandleFunc("email:signup", nil)
}
func TestServeMuxRegisterEmptyPattern(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.HandleFunc to panic")
}
}()
mux := NewServeMux()
mux.Handle("", makeFakeHandler("email"))
}
func TestServeMuxRegisterDuplicatePattern(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.HandleFunc to panic")
}
}()
mux := NewServeMux()
mux.Handle("email", makeFakeHandler("email"))
mux.Handle("email", makeFakeHandler("email:default"))
}
var notFoundTests = []struct {
typename string // task's type name
}{
{"image:minimize"},
{"csv:"}, // registered patterns match the task's type prefix, not the other way around.
}
func TestServeMuxNotFound(t *testing.T) {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
for _, tc := range notFoundTests {
task := NewTask(tc.typename, nil)
err := mux.ProcessTask(context.Background(), task)
if err == nil {
t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type)
}
}
}

62
subscriber.go Normal file
View File

@@ -0,0 +1,62 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
type subscriber struct {
rdb *rdb.RDB
// channel to communicate back to the long running "subscriber" goroutine.
done chan struct{}
// cancelations hold cancel functions for all in-progress tasks.
cancelations *base.Cancelations
}
func newSubscriber(rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber {
return &subscriber{
rdb: rdb,
done: make(chan struct{}),
cancelations: cancelations,
}
}
func (s *subscriber) terminate() {
logger.info("Subscriber shutting down...")
// Signal the subscriber goroutine to stop.
s.done <- struct{}{}
}
func (s *subscriber) start(wg *sync.WaitGroup) {
pubsub, err := s.rdb.CancelationPubSub()
if err != nil {
logger.error("cannot subscribe to cancelation channel: %v", err)
return
}
// Obtain the channel only after the error check; pubsub is nil when err is non-nil.
cancelCh := pubsub.Channel()
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-s.done:
pubsub.Close()
logger.info("Subscriber done")
return
case msg := <-cancelCh:
cancel, ok := s.cancelations.Get(msg.Payload)
if ok {
cancel()
}
}
}
}()
}

64
subscriber_test.go Normal file
View File

@@ -0,0 +1,64 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestSubscriber(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
tests := []struct {
registeredID string // ID for which cancel func is registered
publishID string // ID to be published
wantCalled bool // whether cancel func should be called
}{
{"abc123", "abc123", true},
{"abc456", "abc123", false},
}
for _, tc := range tests {
var mu sync.Mutex
called := false
fakeCancelFunc := func() {
mu.Lock()
defer mu.Unlock()
called = true
}
cancelations := base.NewCancelations()
cancelations.Add(tc.registeredID, fakeCancelFunc)
subscriber := newSubscriber(rdbClient, cancelations)
var wg sync.WaitGroup
subscriber.start(&wg)
if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
subscriber.terminate()
t.Fatalf("could not publish cancelation message: %v", err)
}
// allow for redis to publish message
time.Sleep(time.Second)
mu.Lock()
if called != tc.wantCalled {
if tc.wantCalled {
t.Errorf("fakeCancelFunc was not called, want the function to be called")
} else {
t.Errorf("fakeCancelFunc was called, want the function to not be called")
}
}
mu.Unlock()
subscriber.terminate()
}
}

72
syncer.go Normal file
View File

@@ -0,0 +1,72 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
)
// syncer is responsible for queuing up failed requests to redis and retrying
// those requests to sync state between the background process and redis.
type syncer struct {
requestsCh <-chan *syncRequest
// channel to communicate back to the long running "syncer" goroutine.
done chan struct{}
// interval between sync operations.
interval time.Duration
}
type syncRequest struct {
fn func() error // sync operation
errMsg string // error message
}
func newSyncer(requestsCh <-chan *syncRequest, interval time.Duration) *syncer {
return &syncer{
requestsCh: requestsCh,
done: make(chan struct{}),
interval: interval,
}
}
func (s *syncer) terminate() {
logger.info("Syncer shutting down...")
// Signal the syncer goroutine to stop.
s.done <- struct{}{}
}
func (s *syncer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
var requests []*syncRequest
for {
select {
case <-s.done:
// Try sync one last time before shutting down.
for _, req := range requests {
if err := req.fn(); err != nil {
logger.error(req.errMsg)
}
}
logger.info("Syncer done")
return
case req := <-s.requestsCh:
requests = append(requests, req)
case <-time.After(s.interval):
var temp []*syncRequest
for _, req := range requests {
if err := req.fn(); err != nil {
temp = append(temp, req)
}
}
requests = temp
}
}
}()
}

92
syncer_test.go Normal file
View File

@@ -0,0 +1,92 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"fmt"
"sync"
"testing"
"time"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestSyncer(t *testing.T) {
inProgress := []*base.TaskMessage{
h.NewTaskMessage("send_email", nil),
h.NewTaskMessage("reindex", nil),
h.NewTaskMessage("gen_thumbnail", nil),
}
r := setup(t)
rdbClient := rdb.NewRDB(r)
h.SeedInProgressQueue(t, r, inProgress)
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
for _, msg := range inProgress {
m := msg
syncRequestCh <- &syncRequest{
fn: func() error {
return rdbClient.Done(m)
},
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
gotInProgress := h.GetInProgressMessages(t, r)
if l := len(gotInProgress); l != 0 {
t.Errorf("%q has length %d; want 0", base.InProgressQueue, l)
}
}
func TestSyncerRetry(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
var (
mu sync.Mutex
counter int
)
// Increment the counter for each call.
// Initial call will fail and second call will succeed.
requestFunc := func() error {
mu.Lock()
defer mu.Unlock()
if counter == 0 {
counter++
return fmt.Errorf("zero")
}
counter++
return nil
}
syncRequestCh <- &syncRequest{
fn: requestFunc,
errMsg: "error",
}
// allow syncer to retry
time.Sleep(3 * interval)
mu.Lock()
if counter != 2 {
t.Errorf("counter = %d, want 2", counter)
}
mu.Unlock()
}

165
tools/asynqmon/README.md Normal file
View File

@@ -0,0 +1,165 @@
# Asynqmon
Asynqmon is a command line tool to monitor the tasks managed by the `asynq` package.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Process Status](#process-status)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Config File](#config-file)
## Installation
In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynqmon
This will create the asynqmon executable under your `$GOPATH/bin` directory.
## Quick Start
The tool has a few commands to inspect the state of tasks and queues.
Run `asynqmon help` to see all the available commands.
Asynqmon needs to connect to a redis server to inspect the state of queues and tasks. Use the connection flags to specify how to reach the redis server used by your application.
By default, Asynqmon will try to connect to a redis server running at `localhost:6379`.
### Stats
The stats command gives an overview of the current state of tasks and queues. You can run it in conjunction with the `watch` command to repeatedly run `stats`.
Example:
watch -n 3 asynqmon stats
This will run the `asynqmon stats` command every 3 seconds.
![Gif](/docs/assets/asynqmon_stats.gif)
### History
The history command shows the number of processed and failed tasks over the last x days.
By default, it shows stats for the last 10 days. Use `--days` to specify the number of days.
Example:
asynqmon history --days=30
![Gif](/docs/assets/asynqmon_history.gif)
### Process Status
The ps (process status) command shows the list of running worker processes.
Example:
asynqmon ps
![Gif](/docs/assets/asynqmon_ps.gif)
### List
The list command shows all tasks in the specified state in a table format.
Example:
asynqmon ls retry
asynqmon ls scheduled
asynqmon ls dead
asynqmon ls enqueued:default
asynqmon ls inprogress
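The listing is paginated. Use the `--size` flag to set the page size (default 30) and the `--page` flag to select a zero-indexed page.
Example:
asynqmon ls dead --size=10 --page=2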
### Enqueue
There are two commands to enqueue tasks.
Command `enq` takes a task ID and moves the task to the **Enqueued** state. You can obtain the task ID by running the `ls` command.
Example:
asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks in the specified state to the **Enqueued** state.
Example:
asynqmon enqall retry
Running the above command will move all **Retry** tasks to the **Enqueued** state.
### Delete
There are two commands for task deletion.
Command `del` takes a task ID and deletes the task. You can obtain the task ID by running the `ls` command.
Example:
asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynqmon delall retry
Running the above command will delete all **Retry** tasks.
### Kill
There are two commands to kill tasks (i.e., move them to the dead state).
Command `kill` takes a task ID and kills the task. You can obtain the task ID by running the `ls` command.
Example:
asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynqmon killall retry
Running the above command will move all **Retry** tasks to the **Dead** state.
### Cancel
Command `cancel` takes a task ID and sends a cancelation signal to the goroutine processing the specified task.
You can obtain the task ID by running the `ls` command.
The task must be in the "in-progress" state.
The handler implementation needs to be context aware in order to actually stop processing.
Example:
asynqmon cancel bnogo8gt6toe23vhef0g
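For the cancelation signal to take effect, the handler must observe its context. A minimal sketch of a context-aware handler (illustrative only; the timed loop stands in for real work):

```go
import (
	"context"
	"time"

	"github.com/hibiken/asynq"
)

func handleTask(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 10; i++ {
		select {
		case <-ctx.Done():
			// Canceled via `asynqmon cancel`, a timeout, or shutdown.
			return ctx.Err()
		case <-time.After(time.Second):
			// One unit of work; replace with real processing.
		}
	}
	return nil
}
```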
## Config File
You can use a config file to set default values for the flags.
This is useful, for example, when you have to connect to a remote redis server.
By default, `asynqmon` will try to read the config file located at
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via the `--config` flag.
Config file example:
```yaml
uri: 127.0.0.1:6379
db: 2
password: mypassword
```
This will set the default values for `--uri`, `--db`, and `--password` flags.

View File

@@ -0,0 +1,53 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// cancelCmd represents the cancel command
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in the in-progress state.
Identifier for a task should be obtained by running the "asynqmon ls" command.
Handler implementation needs to be context aware for the cancelation signal to
actually cancel the processing.
Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}
func init() {
rootCmd.AddCommand(cancelCmd)
}
func cancel(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
err := r.PublishCancelation(args[0])
if err != nil {
fmt.Printf("could not send cancelation signal: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully sent cancelation siganl for task %s\n", args[0])
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// delCmd represents the del command
@@ -20,7 +21,7 @@ var delCmd = &cobra.Command{
Long: `Del (asynqmon del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead queue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running the "asynqmon ls" command.
Example: asynqmon del d:1575732274:bnogo8gt6toe23vhef0g`,
@@ -49,8 +50,9 @@ func del(cmd *cobra.Command, args []string) {
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":

View File

@@ -11,19 +11,20 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var delallValidArgs = []string{"scheduled", "retry", "dead"}
// delallCmd represents the delall command
var delallCmd = &cobra.Command{
Use: "delall [queue name]",
Short: "Deletes all tasks from the specified queue",
Long: `Delall (asynqmon delall) will delete all tasks from the specified queue.
Use: "delall [state]",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
Example: asynqmon delall dead -> Deletes all tasks from the dead queue`,
Example: asynqmon delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: delall,
@@ -45,8 +46,9 @@ func init() {
func delall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var err error
@@ -58,12 +60,12 @@ func delall(cmd *cobra.Command, args []string) {
case "dead":
err = r.DeleteAllDeadTasks()
default:
fmt.Printf("error: `asynqmon delall [queue name]` only accepts %v as the argument.\n", delallValidArgs)
fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Deleted all tasks from %q queue\n", args[0])
fmt.Printf("Deleted all tasks in %q state\n", args[0])
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// enqCmd represents the enq command
@@ -20,7 +21,7 @@ var enqCmd = &cobra.Command{
Long: `Enq (asynqmon enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead queue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running the "asynqmon ls" command.
The task enqueued by this command will be processed as soon as the task
@@ -52,8 +53,9 @@ func enq(cmd *cobra.Command, args []string) {
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":

View File

@@ -11,22 +11,23 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var enqallValidArgs = []string{"scheduled", "retry", "dead"}
// enqallCmd represents the enqall command
var enqallCmd = &cobra.Command{
Use: "enqall [queue name]",
Short: "Enqueues all tasks from the specified queue",
Long: `Enqall (asynqmon enqall) will enqueue all tasks from the specified queue.
Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as they
get dequeued by a processor.
Example: asynqmon enqall dead -> Enqueues all tasks from the dead queue`,
Example: asynqmon enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: enqall,
@@ -48,8 +49,9 @@ func init() {
func enqall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var n int64
@@ -62,12 +64,12 @@ func enqall(cmd *cobra.Command, args []string) {
case "dead":
n, err = r.EnqueueAllDeadTasks()
default:
fmt.Printf("error: `asynqmon enqall [queue name]` only accepts %v as the argument.\n", enqallValidArgs)
fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Enqueued %d tasks from %q queue\n", n, args[0])
fmt.Printf("Enqueued %d tasks in %q state\n", n, args[0])
}

View File

@@ -7,60 +7,45 @@ package cmd
import (
"fmt"
"os"
"strconv"
"strings"
"text/tabwriter"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var days int
// historyCmd represents the history command
var historyCmd = &cobra.Command{
Use: "history [num of days]",
Use: "history",
Short: "Shows historical aggregate data",
Long: `History (asynqmon history) will show the number of processed tasks
as well as the error rate for the last n days.
Long: `History (asynqmon history) will show the number of processed and failed tasks
from the last x days.
Example: asynqmon history 7 -> Shows stats from the last 7 days`,
Args: cobra.ExactArgs(1),
By default, it will show the data from the last 10 days.
Example: asynqmon history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs,
Run: history,
}
func init() {
rootCmd.AddCommand(historyCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// historyCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// historyCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
historyCmd.Flags().IntVarP(&days, "days", "x", 10, "show data from last x days")
}
func history(cmd *cobra.Command, args []string) {
n, err := strconv.Atoi(args[0])
if err != nil {
fmt.Printf(`Error: Invalid argument. Argument has to be an integer.
Usage: asynqmon history [num of days]
`)
os.Exit(1)
}
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
stats, err := r.HistoricalStats(n)
stats, err := r.HistoricalStats(days)
if err != nil {
fmt.Println(err)
os.Exit(1)

View File

@@ -11,16 +11,17 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// killCmd represents the kill command
var killCmd = &cobra.Command{
Use: "kill [task id]",
Short: "Sends a task to dead queue given an identifier",
Long: `Kill (asynqmon kill) will send a task to dead queue given an identifier.
Short: "Kills a task given an identifier",
Long: `Kill (asynqmon kill) will put a task in the dead state given an identifier.
The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry queue.
The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running the "asynqmon ls" command.
Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`,
@@ -49,8 +50,9 @@ func kill(cmd *cobra.Command, args []string) {
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":

View File

@@ -11,19 +11,20 @@ import (
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var killallValidArgs = []string{"scheduled", "retry"}
// killallCmd represents the killall command
var killallCmd = &cobra.Command{
Use: "killall [queue name]",
Short: "Sends all tasks to dead queue from the specified queue",
Long: `Killall (asynqmon killall) will moves all tasks from the specified queue to dead queue.
Use: "killall [state]",
Short: "Kills all tasks in the specified state",
Long: `Killall (asynqmon killall) will update all tasks in the specified state to the dead state.
The argument should be either "scheduled" or "retry".
Example: asynqmon killall retry -> Moves all tasks from retry queue to dead queue`,
Example: asynqmon killall retry -> Updates all retry tasks to dead state`,
ValidArgs: killallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: killall,
@@ -45,8 +46,9 @@ func init() {
func killall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var n int64
@@ -57,12 +59,12 @@ func killall(cmd *cobra.Command, args []string) {
case "retry":
n, err = r.KillAllRetryTasks()
default:
fmt.Printf("error: `asynqmon killall [queue name]` only accepts %v as the argument.\n", killallValidArgs)
fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Sent %d tasks to \"dead\" queue from %q queue\n", n, args[0])
fmt.Printf("Successfully updated %d tasks to \"dead\" state\n", n)
}

View File

@@ -10,56 +10,72 @@ import (
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead"}
// lsCmd represents the ls command
var lsCmd = &cobra.Command{
Use: "ls [queue name]",
Short: "Lists queue contents",
Long: `Ls (asynqmon ls) will list all tasks from the specified queue in a table format.
Use: "ls [state]",
Short: "Lists tasks in the specified state",
Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format.
The command takes one argument which specifies the queue to inspect. The value
of the argument should be one of "enqueued", "inprogress", "scheduled",
The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead".
Example: asynqmon ls dead`,
ValidArgs: lsValidArgs,
Args: cobra.ExactValidArgs(1),
Run: ls,
Example:
asynqmon ls dead -> Lists all tasks in dead state
Enqueued tasks require a queue name after ":"
Example:
asynqmon ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue
`,
Args: cobra.ExactValidArgs(1),
Run: ls,
}
// Flags
var pageSize int
var pageNum int
func init() {
rootCmd.AddCommand(lsCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// lsCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// lsCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
lsCmd.Flags().IntVar(&pageSize, "size", 30, "page size")
lsCmd.Flags().IntVar(&pageNum, "page", 0, "page number - zero indexed (default 0)")
}
func ls(cmd *cobra.Command, args []string) {
if pageSize < 0 {
fmt.Println("page size cannot be negative.")
os.Exit(1)
}
if pageNum < 0 {
fmt.Println("page number cannot be negative.")
os.Exit(1)
}
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
switch args[0] {
parts := strings.Split(args[0], ":")
switch parts[0] {
case "enqueued":
listEnqueued(r)
if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
os.Exit(1)
}
listEnqueued(r, parts[1])
case "inprogress":
listInProgress(r)
case "scheduled":
@@ -69,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
case "dead":
listDead(r)
default:
fmt.Printf("error: `asynqmon ls [queue name]` only accepts %v as the argument.\n", lsValidArgs)
fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1)
}
}
@@ -105,27 +121,28 @@ func parseQueryID(queryID string) (id xid.ID, score int64, qtype string, err err
return id, score, qtype, nil
}
func listEnqueued(r *rdb.RDB) {
tasks, err := r.ListEnqueued()
func listEnqueued(r *rdb.RDB, qname string) {
tasks, err := r.ListEnqueued(qname, rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No enqueued tasks")
fmt.Printf("No enqueued tasks in %q queue\n", qname)
return
}
cols := []string{"ID", "Type", "Payload"}
cols := []string{"ID", "Type", "Payload", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listInProgress(r *rdb.RDB) {
tasks, err := r.ListInProgress()
tasks, err := r.ListInProgress(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -141,10 +158,11 @@ func listInProgress(r *rdb.RDB) {
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listScheduled(r *rdb.RDB) {
tasks, err := r.ListScheduled()
tasks, err := r.ListScheduled(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -153,18 +171,19 @@ func listScheduled(r *rdb.RDB) {
fmt.Println("No scheduled tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Process In"}
cols := []string{"ID", "Type", "Payload", "Process In", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
processIn := fmt.Sprintf("%.0f seconds", t.ProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "s"), t.Type, t.Payload, processIn)
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "s"), t.Type, t.Payload, processIn, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listRetry(r *rdb.RDB) {
tasks, err := r.ListRetry()
tasks, err := r.ListRetry(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -173,18 +192,24 @@ func listRetry(r *rdb.RDB) {
fmt.Println("No retry tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Retry In", "Last Error", "Retried", "Max Retry"}
cols := []string{"ID", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
retryIn := fmt.Sprintf("%.0f seconds", t.ProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "r"), t.Type, t.Payload, retryIn, t.ErrorMsg, t.Retried, t.Retry)
var nextRetry string
if d := t.ProcessAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "r"), t.Type, t.Payload, nextRetry, t.ErrorMsg, t.Retried, t.Retry, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listDead(r *rdb.RDB) {
tasks, err := r.ListDead()
tasks, err := r.ListDead(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -193,26 +218,12 @@ func listDead(r *rdb.RDB) {
fmt.Println("No dead tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Last Failed", "Last Error"}
cols := []string{"ID", "Type", "Payload", "Last Failed", "Last Error", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "d"), t.Type, t.Payload, t.LastFailedAt, t.ErrorMsg)
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "d"), t.Type, t.Payload, t.LastFailedAt, t.ErrorMsg, t.Queue)
}
}
printTable(cols, printRows)
}
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}

118
tools/asynqmon/cmd/ps.go Normal file
View File

@@ -0,0 +1,118 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// psCmd represents the ps command
var psCmd = &cobra.Command{
Use: "ps",
Short: "Shows all background worker processes",
Long: `Ps (asynqmon ps) will show all background worker processes
backed by the specified redis instance.
The command shows the following for each process:
* Host and PID of the process
* Number of active workers out of worker pool
* Queue configuration
* State of the worker process ("running" | "stopped")
* Time the process was started
A "running" process is processing tasks in queues.
A "stopped" process is no longer processing new tasks.`,
Args: cobra.NoArgs,
Run: ps,
}
func init() {
rootCmd.AddCommand(psCmd)
}
func ps(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
processes, err := r.ListProcesses()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(processes) == 0 {
fmt.Println("No processes")
return
}
// sort by hostname and pid
sort.Slice(processes, func(i, j int) bool {
x, y := processes[i], processes[j]
if x.Host != y.Host {
return x.Host < y.Host
}
return x.PID < y.PID
})
// print processes
cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
printRows := func(w io.Writer, tmpl string) {
for _, ps := range processes {
fmt.Fprintf(w, tmpl,
ps.Host, ps.PID, ps.Status,
fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency),
formatQueues(ps.Queues), timeAgo(ps.Started))
}
}
printTable(cols, printRows)
}
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}
func formatQueues(qmap map[string]int) string {
// sort queues by priority and name
type queue struct {
name string
priority int
}
var queues []*queue
for qname, p := range qmap {
queues = append(queues, &queue{qname, p})
}
sort.Slice(queues, func(i, j int) bool {
x, y := queues[i], queues[j]
if x.priority != y.priority {
return x.priority > y.priority
}
return x.name < y.name
})
var b strings.Builder
l := len(queues)
for _, q := range queues {
fmt.Fprintf(&b, "%s:%d", q.name, q.priority)
l--
if l > 0 {
b.WriteString(" ")
}
}
return b.String()
}
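For illustration, the sort above orders queues by priority (descending), then by name (ascending), so a hypothetical queue map renders as a single space-separated string:

```go
qmap := map[string]int{"default": 2, "critical": 3, "low": 1}
fmt.Println(formatQueues(qmap)) // prints: critical:3 default:2 low:1
```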

54
tools/asynqmon/cmd/rmq.go Normal file
View File

@@ -0,0 +1,54 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// rmqCmd represents the rmq command
var rmqCmd = &cobra.Command{
Use: "rmq [queue name]",
Short: "Removes the specified queue",
Long: `Rmq (asynqmon rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty.
Use --force option to override this behavior.
Example: asynqmon rmq low -> Removes "low" queue`,
Args: cobra.ExactValidArgs(1),
Run: rmq,
}
var rmqForce bool
func init() {
rootCmd.AddCommand(rmqCmd)
rmqCmd.Flags().BoolVarP(&rmqForce, "force", "f", false, "remove the queue regardless of its size")
}
func rmq(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.RemoveQueue(args[0], rmqForce)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0])
os.Exit(1)
}
fmt.Printf("error: %v", err)
os.Exit(1)
}
fmt.Printf("Successfully removed queue %q\n", args[0])
}

View File

@@ -6,8 +6,12 @@ package cmd
import (
"fmt"
"github.com/spf13/cobra"
"io"
"os"
"strings"
"text/tabwriter"
"github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir"
"github.com/spf13/viper"
@@ -18,22 +22,13 @@ var cfgFile string
// Flags
var uri string
var db int
var password string
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
Use: "asynqmon",
Short: "A monitoring tool for asynq queues",
Long: `Asynqmon is a CLI tool to inspect and monitor queues managed by asynq package.
Asynqmon has a few commands to query and mutate the current state of the queues.
Monitoring commands such as "stats" and "ls" can be used in conjunction with the
"watch" command to continuously run the command at a certain interval.
Example: watch -n 5 asynqmon stats`,
// Uncomment the following line if your bare application
// has an action associated with it:
// Run: func(cmd *cobra.Command, args []string) { },
Long: `Asynqmon is a monitoring CLI to inspect tasks and queues managed by asynq.`,
}
// Execute adds all child commands to the root command and sets flags appropriately.
@@ -48,13 +43,16 @@ func Execute() {
func init() {
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is $HOME/.asynqmon.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "Redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "Redis database number (default is 0)")
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynqmon.yaml)")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
viper.BindPFlag("uri", rootCmd.PersistentFlags().Lookup("uri"))
viper.BindPFlag("db", rootCmd.PersistentFlags().Lookup("db"))
viper.BindPFlag("password", rootCmd.PersistentFlags().Lookup("password"))
}
// initConfig reads in config file and ENV variables if set.
// TODO(hibiken): Remove this if not necessary.
func initConfig() {
if cfgFile != "" {
// Use config file from the flag.
@@ -79,3 +77,36 @@ func initConfig() {
fmt.Println("Using config file:", viper.ConfigFileUsed())
}
}
// printTable is a helper function to print data in table format.
//
// cols is a list of headers and printRow specifies how to print rows.
//
// Example:
// type User struct {
// Name string
// Addr string
// Age int
// }
// data := []*User{{"user1", "addr1", 24}, {"user2", "addr2", 42}, ...}
// cols := []string{"Name", "Addr", "Age"}
// printRows := func(w io.Writer, tmpl string) {
// for _, u := range data {
// fmt.Fprintf(w, tmpl, u.Name, u.Addr, u.Age)
// }
// }
// printTable(cols, printRows)
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
}

View File

@@ -7,25 +7,33 @@ package cmd
import (
"fmt"
"os"
"sort"
"strconv"
"strings"
"text/tabwriter"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// statsCmd represents the stats command
var statsCmd = &cobra.Command{
Use: "stats",
Short: "Shows current state of the queues",
Long: `Stats (aysnqmon stats) will show the number of tasks in each queue at that instant.
It also displays basic information about the running redis instance.
Short: "Shows current state of the tasks and queues",
Long: `Stats (aysnqmon stats) will show the overview of tasks and queues at that instant.
To monitor the queues continuously, it's recommended that you run this
Specifically, the command shows the following:
* Number of tasks in each state
* Number of tasks in each queue
* Aggregate data for the current day
* Basic information about the running redis instance
To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command.
Example: watch -n 5 asynqmon stats`,
Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`,
Args: cobra.NoArgs,
Run: stats,
}
@@ -46,8 +54,9 @@ func init() {
func stats(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: uri,
DB: db,
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
@@ -61,8 +70,12 @@ func stats(cmd *cobra.Command, args []string) {
fmt.Println(err)
os.Exit(1)
}
fmt.Println("STATES")
printStates(stats)
fmt.Println()
fmt.Println("QUEUES")
printQueues(stats)
printQueues(stats.Queues)
fmt.Println()
fmt.Printf("STATS FOR %s UTC\n", stats.Timestamp.UTC().Format("2006-01-02"))
@@ -74,7 +87,7 @@ func stats(cmd *cobra.Command, args []string) {
fmt.Println()
}
func printQueues(s *rdb.Stats) {
func printStates(s *rdb.Stats) {
format := strings.Repeat("%v\t", 5) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, "InProgress", "Enqueued", "Scheduled", "Retry", "Dead")
@@ -83,6 +96,24 @@ func printQueues(s *rdb.Stats) {
tw.Flush()
}
func printQueues(queues map[string]int) {
var qnames, seps, counts []string
for q := range queues {
qnames = append(qnames, strings.Title(q))
}
sort.Strings(qnames) // sort for stable order
for _, q := range qnames {
seps = append(seps, strings.Repeat("-", len(q)))
counts = append(counts, strconv.Itoa(queues[strings.ToLower(q)]))
}
format := strings.Repeat("%v\t", len(qnames)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, toInterfaceSlice(qnames)...)
fmt.Fprintf(tw, format, toInterfaceSlice(seps)...)
fmt.Fprintf(tw, format, toInterfaceSlice(counts)...)
tw.Flush()
}
func printStats(s *rdb.Stats) {
format := strings.Repeat("%v\t", 3) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
@@ -112,3 +143,11 @@ func printInfo(info map[string]string) {
)
tw.Flush()
}
func toInterfaceSlice(strs []string) []interface{} {
var res []interface{}
for _, s := range strs {
res = append(res, s)
}
return res
}

View File

@@ -0,0 +1,75 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// workersCmd represents the workers command
var workersCmd = &cobra.Command{
Use: "workers",
Short: "Shows all running workers information",
Long: `Workers (asynqmon workers) will show all running workers information.
The command shows the follwoing for each worker:
* Process in which the worker is running
* ID of the task worker is processing
* Type of the task worker is processing
* Payload of the task worker is processing
* Queue that the task was pulled from.
* Time the worker started processing the task`,
Args: cobra.NoArgs,
Run: workers,
}
func init() {
rootCmd.AddCommand(workersCmd)
}
func workers(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
workers, err := r.ListWorkers()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(workers) == 0 {
fmt.Println("No workers")
return
}
// sort by started timestamp or ID.
sort.Slice(workers, func(i, j int) bool {
x, y := workers[i], workers[j]
if x.Started != y.Started {
return x.Started.Before(y.Started)
}
return x.ID.String() < y.ID.String()
})
cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}
printRows := func(w io.Writer, tmpl string) {
for _, wk := range workers {
fmt.Fprintf(w, tmpl,
fmt.Sprintf("%s:%d", wk.Host, wk.PID), wk.ID, wk.Type, wk.Payload, wk.Queue, timeAgo(wk.Started))
}
}
printTable(cols, printRows)
}

14
tools/go.mod Normal file
View File

@@ -0,0 +1,14 @@
module github.com/hibiken/asynq/tools
go 1.13
require (
github.com/go-redis/redis/v7 v7.2.0
github.com/hibiken/asynq v0.4.0
github.com/mitchellh/go-homedir v1.1.0
github.com/rs/xid v1.2.1
github.com/spf13/cobra v0.0.5
github.com/spf13/viper v1.6.2
)
replace github.com/hibiken/asynq => ./..

195
tools/go.sum Normal file
View File

@@ -0,0 +1,195 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis v6.15.7+incompatible h1:3skhDh95XQMpnqeqNftPkQD9jL9e5e36z/1SUm6dy1U=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.1.0 h1:I4C4a8UGbFejiVjtYVTRVOiMIJ5pm5Yru6ibvDX/OS0=
github.com/go-redis/redis/v7 v7.1.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hibiken/asynq v0.4.0 h1:NvAfYX0DRe04WgGMKRg5oX7bs6ktv2fu9YwB6O356FI=
github.com/hibiken/asynq v0.4.0/go.mod h1:dtrVkxCsGPVhVNHMDXAH7lFq64kbj43+G6lt4FQZfW4=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=