mirror of https://github.com/hibiken/asynq.git synced 2025-10-21 09:36:12 +08:00

Compare commits


84 Commits

Author SHA1 Message Date
Ken Hibino
f91c05b92c v0.7.0 2020-03-22 12:04:37 -07:00
Ken Hibino
9b4438347e Fix comment 2020-03-21 11:44:26 -07:00
Ken Hibino
c33dd447ac Allow client to enqueue a task with unique option
Changes:

- Added Unique option for clients
- Require Go v1.13 or above (to use new error-wrapping functions)
- Fixed adding queue key to all-queues set (asynq:queues) when scheduling.
2020-03-21 11:40:40 -07:00
Ken Hibino
6df2c3ae2b v0.6.2 2020-03-15 21:02:28 -07:00
Ken Hibino
37554fd23c Update readme example code 2020-03-15 14:56:00 -07:00
Ken Hibino
77f5a38453 Refactor payload_test to reduce cyclomatic complexities 2020-03-14 12:30:42 -07:00
Ken Hibino
8d2b9d6be7 Add comments to exported types and functions from internal/log package 2020-03-13 21:04:45 -07:00
Bo-Yi Wu
1b7d557c66 fix typo 2020-03-13 20:02:26 -07:00
Bo-Yi Wu
30b68728d4 chore(lint): fix from gofmt -s 2020-03-13 20:01:39 -07:00
Ken Hibino
310d38620d Minor tweak to readme example code 2020-03-13 17:27:20 -07:00
Ken Hibino
1a53bbf21b Update changelog 2020-03-13 17:27:20 -07:00
Ken Hibino
9c79a7d507 Simplify code with gofmt -s 2020-03-13 14:24:24 -07:00
Ken Hibino
516f95edff Add Use method to better support middlewares with ServeMux 2020-03-13 14:13:17 -07:00
Ken Hibino
cf7a677312 v0.6.1 2020-03-12 08:42:34 -07:00
Ken Hibino
0bc6eba021 Allow custom logger to be used in Background 2020-03-12 08:40:37 -07:00
Ken Hibino
d664d68fa4 Extract out log package 2020-03-09 07:17:52 -07:00
Ken Hibino
a425f54d23 [ci skip] Remove todo comment 2020-03-09 06:09:07 -07:00
Ken Hibino
3c722386b0 Add Deadline option when enqueuing tasks
The Deadline option sets the deadline for the given task's context.
2020-03-08 17:12:42 -07:00
Ken Hibino
25992c2781 [ci skip] Minor readme update 2020-03-03 21:39:16 -08:00
Ken Hibino
b9e3cad7a7 [ci skip] Update readme
- Added flow chart for task queue
- Reordered sections
2020-03-02 07:06:11 -08:00
Ken Hibino
b6486716b4 v0.6.0 2020-03-01 15:54:59 -08:00
Ken Hibino
742ed6546f Add ServeMux type
Allows users to use the ServeMux type as a Handler.
The ServeMux API is designed to be similar to the net/http.ServeMux API.
2020-03-01 15:53:18 -08:00
Ken Hibino
897ab4e28b Add ErrorHandler type to changelog 2020-02-29 22:09:13 -08:00
Ken Hibino
a4e4c0b1d5 Call error handler when task was not processed successfully 2020-02-29 22:09:13 -08:00
Ken Hibino
95b7dcaad4 Clean up processor test 2020-02-29 22:09:13 -08:00
Ken Hibino
8d3248e850 Add ErrorHandler type and add it to Config 2020-02-29 22:09:13 -08:00
Ken Hibino
e69264dc04 Run travis build with go v1.14.x 2020-02-27 08:28:35 -08:00
Ken Hibino
93ff4e4149 v0.5.0 2020-02-23 20:44:44 -08:00
Ken Hibino
3ee97f327e [ci skip] Update readme 2020-02-23 20:44:44 -08:00
Ken Hibino
49c117f4d1 Change Client APIs
Use `Enqueue`, `EnqueueAt`, and `EnqueueIn` to enqueue and schedule
tasks.
`Schedule` method was removed.
2020-02-23 20:40:40 -08:00
Ken Hibino
5b53a2aee9 Update redis package to v7.2.0 2020-02-22 21:21:55 -08:00
Ken Hibino
26e399bc9c Add workers command to asynqmon 2020-02-22 21:21:55 -08:00
Ken Hibino
e21fe3bd79 Add ListWorkers to RDB 2020-02-22 21:21:55 -08:00
Ken Hibino
37c6c73d9b Add test for cancelation pubsub 2020-02-22 15:16:24 -08:00
Ken Hibino
5775a5818d Update WriteProcessState to write both ProcessInfo and WorkerInfo to
redis
2020-02-22 08:56:51 -08:00
Ken Hibino
e3c9aae8c7 Include worker stats to ProcessState 2020-02-22 08:56:51 -08:00
Ken Hibino
9e02a91808 Add workers key in base package 2020-02-22 08:56:51 -08:00
Ken Hibino
830020eb39 Add ProcessState type to base package 2020-02-22 08:56:51 -08:00
Ken Hibino
f9a6c6156f [ci skip] Add codecov badge in readme 2020-02-19 21:37:54 -08:00
Ken Hibino
8540172306 Update travis.yml to upload coverage report to codecov 2020-02-19 21:27:40 -08:00
Ken Hibino
b44908fefd Move tools to its own go module 2020-02-17 21:55:55 -08:00
Ken Hibino
e78d5e4171 Enable the race detector in CI builds 2020-02-16 21:25:10 -08:00
Ken Hibino
0cc486a975 Fix data race in syncer test 2020-02-16 21:25:10 -08:00
Ken Hibino
906f231e6c Fix data race in subscriber test 2020-02-16 18:22:59 -08:00
Ken Hibino
39f237899b Clean up heartbeater test 2020-02-16 18:22:59 -08:00
Ken Hibino
3d9a222bb3 Use sync.WaitGroup for shutdown 2020-02-16 18:22:59 -08:00
Ken Hibino
2bcaea52ce Make heartbeater goroutine a monitor goroutine for process info. 2020-02-16 18:22:59 -08:00
Ken Hibino
0856ef32e0 v0.4.0 2020-02-13 06:25:27 -08:00
Ken Hibino
70efb7b455 [ci skip] Update changelog 2020-02-13 06:23:51 -08:00
Ken Hibino
d33ca98648 Use int as priority value type.
Do not use unsigned int for merely non-negative quantities as it
complicates simple arithmetic.
2020-02-13 06:23:51 -08:00
Ken Hibino
bf31fcc3ec [ci skip] Update readme and changelog 2020-02-13 06:23:51 -08:00
Ken Hibino
cd55a0e316 Add cancel command to asynqmon cli 2020-02-13 06:23:51 -08:00
Ken Hibino
6685827147 Add redis pubsub subscriber for cancelation 2020-02-13 06:23:51 -08:00
Ken Hibino
133bb6c2c6 Update docs for context and timeout 2020-02-13 06:23:51 -08:00
Ken Hibino
39459b4412 Allow user to specify timeout per task 2020-02-13 06:23:51 -08:00
Ken Hibino
6e14062325 Send cancellation signal to all in-progress handlers on shutdown 2020-02-13 06:23:51 -08:00
Ken Hibino
1b41c721b4 Change Handler interface to take context.Context 2020-02-13 06:23:51 -08:00
Ken Hibino
79a04e52a3 [ci skip] Add link to intro video in readme 2020-02-09 10:59:32 -08:00
Ken Hibino
d9327cf24b Avoid creating script struct on every rdb method invocation 2020-02-09 09:34:41 -08:00
Ken Hibino
7e33c9ebe0 [ci skip] Add contribution doc 2020-02-08 09:48:04 -08:00
Ken Hibino
a6b79eb7a0 [ci skip] Update readme 2020-02-07 06:45:36 -08:00
Ken Hibino
7af69c8d3c [ci skip] Fix comments 2020-02-06 20:46:59 -08:00
Ken Hibino
6dd4c688f4 [ci skip] Update readme 2020-02-05 22:12:26 -08:00
Ken Hibino
024cfa9fe6 Update output text for task's next retry value 2020-02-05 22:12:26 -08:00
Ken Hibino
461a6f2587 v0.3.0 2020-02-04 06:29:57 -08:00
Ken Hibino
ba9b42f428 Clean up cli description text 2020-02-04 06:16:29 -08:00
Ken Hibino
3744072e9b Sort processes by host and pid in ps output 2020-02-03 06:59:02 -08:00
Ken Hibino
eadfd5f8b4 [ci skip] Update changelog 2020-02-02 20:59:00 -08:00
Ken Hibino
96f06ac89b Add ps command to asynqmon 2020-02-02 20:56:33 -08:00
Ken Hibino
d03fa34eaf Add heartbeater 2020-02-01 09:35:49 -08:00
Ken Hibino
489e695433 [ci skip] Change font used in demo.gif 2020-01-30 21:47:08 -08:00
Ken Hibino
9ae4be8184 Run and compare benchmarks after successful ci-build 2020-01-30 21:38:16 -08:00
Ken Hibino
36af486303 [ci skip] Add prettier gif for demo 2020-01-29 21:36:17 -08:00
Ken Hibino
a46eb97e6f [ci skip] Shorten readme 2020-01-26 20:06:52 -08:00
Ken Hibino
c934ef115b v0.2.2 2020-01-26 16:07:44 -08:00
Ken Hibino
6fbaa2ed6c (fix): RestoreUnfinished to select correct queue 2020-01-26 16:05:46 -08:00
Ken Hibino
166497748b (fix): Requeue to select correct queue 2020-01-26 16:05:46 -08:00
Ken Hibino
31123fd42a Paginate tasks with asynqmon ls command
Changes:
* Added --page and --size flags to ls command
* By default, the command shows the first 30 tasks from the specified queue
2020-01-26 13:12:01 -08:00
Ken Hibino
3ed155b45b [ci skip] Update readme 2020-01-25 08:08:13 -08:00
Ken Hibino
58d2ed94e7 [ci skip] Fix typo 2020-01-23 06:05:18 -08:00
Ken Hibino
e253211a60 v0.2.1 2020-01-22 06:45:16 -08:00
Ken Hibino
44c657bec6 Rate limit error logs 2020-01-22 06:36:18 -08:00
Ken Hibino
db8e9d05c3 Add custom logger 2020-01-22 06:02:53 -08:00
Ken Hibino
b02e4e6b09 [ci skip] Update readme 2020-01-21 17:48:55 -08:00
51 changed files with 5625 additions and 1462 deletions

.travis.yml

@@ -2,8 +2,11 @@ language: go
go_import_path: github.com/hibiken/asynq
git:
depth: 1
env:
- GO111MODULE=on # go modules are the default
go: [1.12.x, 1.13.x]
go: [1.13.x, 1.14.x]
script:
- go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
services:
- redis-server
after_success:
- bash ./.travis/benchcmp.sh
- bash <(curl -s https://codecov.io/bash)

.travis/benchcmp.sh Executable file

@@ -0,0 +1,15 @@
if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
REMOTE_URL="$(git config --get remote.origin.url)";
cd ${TRAVIS_BUILD_DIR}/.. && \
git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
cd "${TRAVIS_REPO_SLUG}-bench" && \
# Benchmark master
git checkout master && \
go test -run=XXX -bench=. ./... > master.txt && \
# Benchmark feature branch
git checkout ${TRAVIS_COMMIT} && \
go test -run=XXX -bench=. ./... > feature.txt && \
go get -u golang.org/x/tools/cmd/benchcmp && \
# compare two benchmarks
benchcmp master.txt feature.txt;
fi

CHANGELOG.md

@@ -7,6 +7,83 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.7.0] - 2020-03-22
### Changed
- Support Go v1.13+; dropped support for Go v1.12
### Added
- `Unique` option was added to allow a client to enqueue a task only if it is unique within a certain time period.
## [0.6.2] - 2020-03-15
### Added
- `Use` method was added to `ServeMux` to apply middlewares to all handlers.
## [0.6.1] - 2020-03-12
### Added
- `Client` can optionally schedule a task with `asynq.Deadline(time)` to specify a deadline for the task's context. Default is no deadline.
- `Logger` option was added to config, which allows users to specify the logger used by the background instance.
## [0.6.0] - 2020-03-01
### Added
- Added `ServeMux` type to make it easy for users to implement Handler interface.
- `ErrorHandler` type was added, allowing users to specify an error-handling function (e.g., report errors to an error-reporting service such as Honeybadger or Bugsnag); a usage sketch follows this changelog.
## [0.5.0] - 2020-02-23
### Changed
- `Client` API has changed. Use `Enqueue`, `EnqueueAt` and `EnqueueIn` to enqueue and schedule tasks.
### Added
- `asynqmon workers` was added to list information about all running workers
## [0.4.0] - 2020-02-13
### Changed
- `Handler` interface has changed. The `ProcessTask` method now takes two arguments: `context.Context` and `*asynq.Task`
- `Queues` field in `Config` has changed from `map[string]uint` to `map[string]int`
### Added
- `Client` can optionally schedule a task with `asynq.Timeout(duration)` to specify a timeout duration for the task. Default is no timeout.
- `asynqmon cancel [task id]` will send a cancelation signal to the goroutine processing the specified task.
## [0.3.0] - 2020-02-04
### Added
- `asynqmon ps` was added to list all background worker processes
## [0.2.2] - 2020-01-26
### Fixed
- Fixed restoring unfinished tasks back to correct queues.
### Changed
- `asynqmon ls` command is now paginated (shows the first 30 tasks by default)
- `asynqmon ls enqueued:[queue name]` requires queue name to be specified
## [0.2.1] - 2020-01-22
### Fixed
- More structured log messages
- Prevented spamming logs with errors when the Redis connection is lost
- Fixed and updated README doc
## [0.2.0] - 2020-01-19
### Added

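Taken together, the v0.6.x entries above introduce `ErrorHandler` and `Logger` configuration. Here is a minimal sketch of wiring them up, assuming a local Redis server; `reportError` is a hypothetical placeholder adapted from the example in the `Config` docs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

// reportError is a hypothetical ErrorHandler: it is invoked whenever a task
// handler returns a non-nil error (added in v0.6.0).
func reportError(task *asynq.Task, err error, retried, maxRetry int) {
	if retried >= maxRetry {
		err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
	}
	log.Printf("task failed: %v", err) // swap in Honeybadger, Bugsnag, etc.
}

func main() {
	bg := asynq.NewBackground(asynq.RedisClientOpt{Addr: "localhost:6379"}, &asynq.Config{
		Concurrency:  10,
		ErrorHandler: asynq.ErrorHandlerFunc(reportError),
		// Logger (added in v0.6.1) accepts any implementation of asynq's
		// Logger interface; omitted here, so the default logger is used.
	})
	bg.Run(asynq.NewServeMux())
}
```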
CONTRIBUTING.md Normal file

@@ -0,0 +1,51 @@
# Contributing
Thanks for your interest in contributing to Asynq!
We are open to, and grateful for, any contributions made by the community.
## Reporting Bugs
Have a look at our [issue tracker](https://github.com/hibiken/asynq/issues). If you can't find an issue (open or closed)
describing your problem (or a very similar one) there, please open a new issue with
the following details:
- Which versions of Go and Redis are you using?
- What are you trying to accomplish?
- What is the full error you are seeing?
- How can we reproduce this?
- Please quote as much of your code as needed to reproduce (ideally, link to a public repository or Gist)
## Getting Help
We run a [Gitter
channel](https://gitter.im/go-asynq/community) where you can ask questions and
get help. Feel free to ask there before opening a GitHub issue.
## Submitting Feature Requests
If you can't find an issue (open or closed) describing your idea on our [issue
tracker](https://github.com/hibiken/asynq/issues), open an issue. Adding answers to the following
questions in your description is a plus:
- What do you want to do, and how do you expect Asynq to support you with that?
- How might this be added to Asynq?
- What are possible alternatives?
- Are there any disadvantages?
Thank you! We'll try to respond as quickly as possible.
## Contributing Code
1. Fork this repo
2. Download your fork `git clone https://github.com/your-username/asynq && cd asynq`
3. Create your branch `git checkout -b your-branch-name`
4. Make and commit your changes
5. Push the branch `git push origin your-branch-name`
6. Create a new pull request
Please try to keep your pull request focused in scope and avoid including unrelated commits.
After you have submitted your pull request, we'll try to get back to you as soon as possible. We may suggest some changes or improvements.
Thank you for contributing!

README.md

@@ -1,175 +1,221 @@
# Asynq
[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)&nbsp;[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)&nbsp;[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)&nbsp;[![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)&nbsp;[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
[![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)
Simple and efficient asynchronous task processing library in Go.

Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
It is backed by Redis and is designed to have a low barrier to entry. It should integrate easily into your web stack.

**Important Note**: The current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before the v1.0.0 release.

## Table of Contents

- [Overview](#overview)
- [Requirements](#requirements)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Monitoring CLI](#monitoring-cli)
- [Acknowledgements](#acknowledgements)
- [License](#license)

## Overview

![Task Queue Diagram](/docs/assets/task-queue.png)

![Gif](/docs/assets/asynqmon_stats.gif)

Asynq provides a simple interface to asynchronous task processing.
It also ships with a tool to monitor the queues and take manual actions if needed.

Asynq provides:

- Clear separation of task producer and consumer
- Ability to schedule task processing in the future
- Automatic retry of failed tasks with exponential backoff
- Automatic failover using Redis sentinels
- Ability to configure max retry count per task
- Ability to configure max number of worker goroutines to process tasks
- Support for priority queues
- Unix signal handling to gracefully shut down background processing
- CLI tool to query and mutate queue state for monitoring and administrative purposes

## Quickstart

First, make sure you are running a Redis server locally.

```sh
$ redis-server
```

Next, write a package that encapsulates task creation and task handling.

```go
package tasks

import (
	"context"
	"fmt"

	"github.com/hibiken/asynq"
)
// A list of background task types.
const (
EmailDelivery = "email:deliver"
ImageProcessing = "image:process"
)
// Write function NewXXXTask to create a task.
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
return asynq.NewTask(EmailDelivery, payload)
}
func NewImageProcessingTask(src, dst string) *asynq.Task {
payload := map[string]interface{}{"src": src, "dst": dst}
return asynq.NewTask(ImageProcessing, payload)
}
// Write function HandleXXXTask to handle the given task.
// NOTE: It satisfies the asynq.HandlerFunc signature.
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
userID, err := t.Payload.GetInt("user_id")
if err != nil {
return err
}
tmplID, err := t.Payload.GetString("template_id")
if err != nil {
return err
}
fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
// Email delivery logic ...
return nil
}
func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
src, err := t.Payload.GetString("src")
if err != nil {
return err
}
dst, err := t.Payload.GetString("dst")
if err != nil {
return err
}
fmt.Printf("Process image: src = %s, dst = %s\n", src, dst)
// Image processing logic ...
return nil
}
```
In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to enqueue tasks to the task queue.
A task will be processed by a background worker as soon as the task gets enqueued.
Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
```go
package main
import (
	"log"
	"time"

	"github.com/hibiken/asynq"
	"your/app/package/tasks"
)
const redisAddr = "127.0.0.1:6379"
func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr}
c := asynq.NewClient(r)
// Example 1: Enqueue task to be processed immediately.
t := tasks.NewEmailDeliveryTask(42, "some:template:id")
err := c.Enqueue(t)
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
// Example 2: Schedule task to be processed in the future.
t = tasks.NewEmailDeliveryTask(42, "other:template:id")
err = c.EnqueueIn(24*time.Hour, t)
if err != nil {
log.Fatal("could not schedule task: %v", err)
}
// Example 3: Pass options to tune task processing behavior.
// Options include MaxRetry, Queue, Timeout, Deadline, etc.
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute))
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
}
```
Next, create a binary to process these tasks in the background.
To start the background workers, use [`Background`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Background) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
```go
package main
import (
"github.com/hibiken/asynq"
"your/app/package/tasks"
)
const redisAddr = "127.0.0.1:6379"
func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr}
bg := asynq.NewBackground(r, &asynq.Config{
// Specify how many concurrent workers to use
Concurrency: 10,
// Optionally specify multiple queues with different priority.
Queues: map[string]int{
"critical": 6,
"default": 3,
"low": 1,
},
// See the godoc for other configuration options
})
// mux maps a type to a handler
mux := asynq.NewServeMux()
mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
mux.HandleFunc(tasks.ImageProcessing, tasks.HandleImageProcessingTask)
// ...register other handlers...
bg.Run(mux)
}
```
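The `Use` method added in v0.6.2 attaches middleware to every handler registered on the mux. A minimal sketch, assuming `asynq.MiddlewareFunc` is defined as `func(asynq.Handler) asynq.Handler` (the signature is not shown in this diff):

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// loggingMiddleware wraps a handler with start/finish log lines.
func loggingMiddleware(next asynq.Handler) asynq.Handler {
	return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
		log.Printf("start processing %q", t.Type)
		err := next.ProcessTask(ctx, t)
		log.Printf("finished processing %q: err = %v", t.Type, err)
		return err
	})
}

func main() {
	mux := asynq.NewServeMux()
	mux.Use(loggingMiddleware) // applies to all handlers registered on mux
	// mux.HandleFunc(...) as above ...
}
```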
For a more detailed walk-through of the library, see our [Getting Started Guide](https://github.com/hibiken/asynq/wiki/Getting-Started).
To learn more about `asynq` features and APIs, see our [Wiki](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
## Command Line Tool
Asynq ships with a command line tool to inspect the state of queues and tasks.
Here's an example of running the `stats` command.
![Gif](/docs/assets/demo.gif)
For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
## Installation
To install `asynq` library, run the following command:
```sh
go get -u github.com/hibiken/asynq
```
To install the CLI tool, run the following command:
```sh
go get -u github.com/hibiken/asynq/tools/asynqmon
```
## Requirements
| Dependency | Version |
| -------------------------- | ------- |
| [Redis](https://redis.io/) | v2.8+ |
| [Go](https://golang.org/)  | v1.13+  |

## Getting Started
1. Import `asynq` in your file.
```go
import "github.com/hibiken/asynq"
```
2. Asynq uses Redis as a message broker.
Use one of `RedisConnOpt` types to specify how to connect to Redis.
```go
var redis = &asynq.RedisClientOpt{
Addr: "localhost:6379",
// Omit if no password is required
Password: "mypassword",
// Use a dedicated db number for asynq.
// By default, Redis offers 16 databases (0..15)
DB: 0,
}
```
3. Create a `Client` instance to create and schedule tasks.
```go
func main() {
client := asynq.NewClient(redis)
// Create a task with typename and payload.
t1 := asynq.NewTask(
"send_welcome_email",
map[string]interface{}{"user_id": 42})
t2 := asynq.NewTask(
"send_reminder_email",
map[string]interface{}{"user_id": 42})
// Process the task immediately.
err := client.Schedule(t1, time.Now())
// Process the task 24 hours later.
err = client.Schedule(t2, time.Now().Add(24 * time.Hour))
// Specify the max number of retry (default: 25)
err = client.Schedule(t1, time.Now(), asynq.MaxRetry(1))
}
```
4. Create a `Background` instance to process tasks.
```go
func main() {
bg := asynq.NewBackground(redis, &asynq.Config{
Concurrency: 10,
})
// Blocks until signal TERM or INT is received.
// For graceful shutdown, send signal TSTP to stop processing more tasks
// before sending TERM or INT signal to terminate the process.
bg.Run(handler)
}
```
Note that `Client` and `Background` are intended to be used in separate executable binaries.
The argument to `(*asynq.Background).Run` is an interface `asynq.Handler` which has one method `ProcessTask`.
```go
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask returns a non-nil error or panics, the task
// will be retried.
type Handler interface {
ProcessTask(*Task) error
}
```
The simplest way to implement a handler is to define a function with the same signature and use `asynq.HandlerFunc` adapter type when passing it to `Run`.
```go
func handler(t *asynq.Task) error {
switch t.Type {
case "send_welcome_email":
id, err := t.Payload.GetInt("user_id")
if err != nil {
return err
}
fmt.Printf("Send Welcome Email to %d\n", id)
// ... handle other types ...
default:
return fmt.Errorf("unexpected task type: %s", t.Type)
}
return nil
}
func main() {
bg := asynq.NewBackground(redis, &asynq.Config{
Concurrency: 10,
})
// Use asynq.HandlerFunc adapter for a handler function
bg.Run(asynq.HandlerFunc(handler))
}
```
## Monitoring CLI
Asynq ships with a CLI tool to inspect the state of queues and tasks.
To install the CLI, run the following command:
```sh
go get github.com/hibiken/asynq/tools/asynqmon
```
For details on how to use the tool, see the [README](/tools/asynqmon/README.md) for the asynqmon CLI.
## Contributing

We are open to, and grateful for, any contributions (GitHub issues/pull requests, feedback on the Gitter channel, etc.) made by the community.
Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
## Acknowledgements
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
## License

asynq.go

@@ -11,7 +11,7 @@ import (
"github.com/go-redis/redis/v7"
)
// Task represents a task to be performed.
// Task represents a unit of work to be performed.
type Task struct {
// Type indicates the type of task to be performed.
Type string
@@ -20,10 +20,9 @@ type Task struct {
Payload Payload
}
// NewTask returns a new Task. The typename and payload argument set Type
// and Payload field respectively.
// NewTask returns a new Task given a type name and payload data.
//
// The payload must be serializable to JSON.
// The payload values must be serializable.
func NewTask(typename string, payload map[string]interface{}) *Task {
return &Task{
Type: typename,
@@ -31,13 +30,11 @@ func NewTask(typename string, payload map[string]interface{}) *Task {
}
}
// RedisConnOpt is a discriminated union of redis-client-option types.
// RedisConnOpt is a discriminated union of types that represent Redis connection configuration options.
//
// RedisConnOpt represents a sum of the following types:
//
// RedisClientOpt | *RedisClientOpt | RedisFailoverClientOpt | *RedisFailoverClientOpt
//
// Passing unexpected type to a RedisConnOpt variable can cause panic.
type RedisConnOpt interface{}
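// NOTE: Illustrative usage sketch, not part of this diff. Either option type
// (value or pointer) can be passed wherever a RedisConnOpt is accepted:
//
//	client := NewClient(RedisClientOpt{Addr: "localhost:6379"})
//
//	bg := NewBackground(RedisFailoverClientOpt{
//		MasterName: "mymaster",
//		// SentinelAddrs is assumed from the upstream struct; the field is
//		// not shown in this hunk.
//		SentinelAddrs: []string{"localhost:5000", "localhost:5001"},
//	}, &Config{Concurrency: 10})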
// RedisClientOpt is used to create a redis client that connects
@@ -53,7 +50,7 @@ type RedisClientOpt struct {
// Redis server password.
Password string
// Redis DB to select after connecting to the server.
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
@@ -61,13 +58,13 @@ type RedisClientOpt struct {
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
// TLS Config used to connect to the server.
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
// RedisFailoverClientOpt is used to create a redis client that talks
// to redis sentinels for service discovery and has automatic failover
// to redis sentinels for service discovery and has an automatic failover
// capability.
type RedisFailoverClientOpt struct {
// Redis master name that is monitored by sentinels.
@@ -84,7 +81,7 @@ type RedisFailoverClientOpt struct {
// Redis server password.
Password string
// Redis DB to select after connecting to the server.
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
@@ -92,11 +89,14 @@ type RedisFailoverClientOpt struct {
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
// TLS Config used to connect to the server.
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
// createRedisClient returns a redis client given a redis connection configuration.
//
// Passing an unexpected type as a RedisConnOpt argument will cause panic.
func createRedisClient(r RedisConnOpt) *redis.Client {
switch r := r.(type) {
case *RedisClientOpt:
@@ -138,6 +138,6 @@ func createRedisClient(r RedisConnOpt) *redis.Client {
TLSConfig: r.TLSConfig,
})
default:
panic(fmt.Sprintf("unexpected type %T for RedisConnOpt", r))
panic(fmt.Sprintf("asynq: unexpected type %T for RedisConnOpt", r))
}
}

asynq_test.go

@@ -5,12 +5,14 @@
package asynq
import (
"os"
"sort"
"testing"
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/log"
)
// This file defines test helper functions used by
@@ -22,6 +24,8 @@ const (
redisDB = 14
)
var testLogger = log.NewLogger(os.Stderr)
func setup(tb testing.TB) *redis.Client {
tb.Helper()
r := redis.NewClient(&redis.Options{

background.go

@@ -5,8 +5,8 @@
package asynq
import (
"context"
"fmt"
"log"
"math"
"math/rand"
"os"
@@ -16,6 +16,7 @@ import (
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
)
@@ -34,17 +35,26 @@ type Background struct {
mu sync.Mutex
running bool
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
logger Logger
rdb *rdb.RDB
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
}
// Config specifies the background-task processing behavior.
type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to zero or negative value, NewBackground will overwrite the value to one.
// If set to a zero or negative value, NewBackground will overwrite the value to one.
Concurrency int
// Function to calculate retry delay for a failed task.
@@ -56,15 +66,15 @@ type Config struct {
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
// List of queues to process with given priority level. Keys are the names of the
// queues and values are associated priority level.
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
//
// If set to nil or not specified, the background will process only the "default" queue.
//
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
// Queues: map[string]uint{
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
@@ -72,7 +82,9 @@ type Config struct {
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
Queues map[string]uint
//
// If a queue has a zero or negative priority value, the queue will be ignored.
Queues map[string]int
// StrictPriority indicates whether the queue priority should be treated strictly.
//
@@ -80,6 +92,59 @@ type Config struct {
// The tasks in lower priority queues are processed only when those queues with
// higher priorities are empty.
StrictPriority bool
// ErrorHandler handles errors returned by the task handler.
//
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the background instance.
//
// If unset, default logger is used.
Logger Logger
}
// An ErrorHandler handles errors returned by the task handler.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as an ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is an ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
}
// Logger implements logging with various log levels.
type Logger interface {
// Debug logs a message at Debug level.
Debug(format string, args ...interface{})
// Info logs a message at Info level.
Info(format string, args ...interface{})
// Warn logs a message at Warning level.
Warn(format string, args ...interface{})
// Error logs a message at Error level.
Error(format string, args ...interface{})
// Fatal logs a message at Fatal level
// and the process will exit with status set to 1.
Fatal(format string, args ...interface{})
}
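// NOTE: Illustrative sketch, not part of this diff. Any type with the five
// methods above can be plugged into Config.Logger, e.g. a minimal logger
// built on fmt and os (both already imported by this file):
//
//	type simpleLogger struct{}
//
//	func (simpleLogger) Debug(format string, args ...interface{}) { fmt.Printf("DEBUG: "+format+"\n", args...) }
//	func (simpleLogger) Info(format string, args ...interface{})  { fmt.Printf("INFO: "+format+"\n", args...) }
//	func (simpleLogger) Warn(format string, args ...interface{})  { fmt.Printf("WARN: "+format+"\n", args...) }
//	func (simpleLogger) Error(format string, args ...interface{}) { fmt.Printf("ERROR: "+format+"\n", args...) }
//	func (simpleLogger) Fatal(format string, args ...interface{}) { fmt.Printf("FATAL: "+format+"\n", args...); os.Exit(1) }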
// Formula taken from https://github.com/mperham/sidekiq.
@@ -89,7 +154,7 @@ func defaultDelayFunc(n int, e error, t *Task) time.Duration {
return time.Duration(s) * time.Second
}
var defaultQueueConfig = map[string]uint{
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
@@ -104,26 +169,48 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
if delayFunc == nil {
delayFunc = defaultDelayFunc
}
queues := cfg.Queues
if queues == nil || len(queues) == 0 {
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if p > 0 {
queues[qname] = p
}
}
if len(queues) == 0 {
queues = defaultQueueConfig
}
qcfg := normalizeQueueCfg(queues)
logger := cfg.Logger
if logger == nil {
logger = log.NewLogger(os.Stderr)
}
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
pid := os.Getpid()
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, 5*time.Second)
rdb := rdb.NewRDB(createRedisClient(r))
scheduler := newScheduler(rdb, 5*time.Second, qcfg)
processor := newProcessor(rdb, n, qcfg, cfg.StrictPriority, delayFunc, syncRequestCh)
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest)
cancels := base.NewCancelations()
syncer := newSyncer(logger, syncCh, 5*time.Second)
heartbeater := newHeartbeater(logger, rdb, ps, 5*time.Second)
scheduler := newScheduler(logger, rdb, 5*time.Second, queues)
processor := newProcessor(logger, rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(logger, rdb, cancels)
return &Background{
logger: logger,
rdb: rdb,
ps: ps,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
}
}
// A Handler processes a task.
// A Handler processes tasks.
//
// ProcessTask should return nil if the processing of a task
// is successful.
@@ -131,18 +218,18 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay.
type Handler interface {
ProcessTask(*Task) error
ProcessTask(context.Context, *Task) error
}
// The HandlerFunc type is an adapter to allow the use of
// ordinary functions as a Handler. If f is a function
// with the appropriate signature, HandlerFunc(f) is a
// Handler that calls f.
type HandlerFunc func(*Task) error
type HandlerFunc func(context.Context, *Task) error
// ProcessTask calls fn(task)
func (fn HandlerFunc) ProcessTask(task *Task) error {
return fn(task)
// ProcessTask calls fn(ctx, task)
func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
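// NOTE: Illustrative sketch, not part of this diff. A handler should honor
// the context so that the Timeout/Deadline options, the asynqmon cancel
// command, and shutdown can all interrupt long-running work:
//
//	func slowHandler(ctx context.Context, t *Task) error {
//		select {
//		case <-time.After(30 * time.Second): // stand-in for real work
//			return nil
//		case <-ctx.Done():
//			return ctx.Err()
//		}
//	}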
// Run starts the background-task processing and blocks until
@@ -150,9 +237,21 @@ func (fn HandlerFunc) ProcessTask(task *Task) error {
// a signal, it gracefully shuts down all pending workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
type prefixLogger interface {
SetPrefix(prefix string)
}
// If logger supports setting prefix, then set prefix for log output.
if l, ok := bg.logger.(prefixLogger); ok {
l.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
}
bg.logger.Info("Starting processing")
bg.start(handler)
defer bg.stop()
bg.logger.Info("Send signal TSTP to stop processing new tasks")
bg.logger.Info("Send signal TERM or INT to terminate the process")
// Wait for a signal to terminate.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
@@ -160,12 +259,13 @@ func (bg *Background) Run(handler Handler) {
sig := <-sigs
if sig == syscall.SIGTSTP {
bg.processor.stop()
bg.ps.SetStatus(base.StatusStopped)
continue
}
break
}
fmt.Println()
log.Println("[INFO] Starting graceful shutdown...")
bg.logger.Info("Starting graceful shutdown")
}
// starts the background-task processing.
@@ -179,9 +279,11 @@ func (bg *Background) start(handler Handler) {
bg.running = true
bg.processor.handler = handler
bg.syncer.start()
bg.scheduler.start()
bg.processor.start()
bg.heartbeater.start(&bg.wg)
bg.subscriber.start(&bg.wg)
bg.syncer.start(&bg.wg)
bg.scheduler.start(&bg.wg)
bg.processor.start(&bg.wg)
}
// stops the background-task processing.
@@ -192,45 +294,20 @@ func (bg *Background) stop() {
return
}
// Note: The order of termination is important.
// Sender goroutines should be terminated before the receiver goroutines.
//
// processor -> syncer (via syncCh)
bg.scheduler.terminate()
bg.processor.terminate()
// Note: processor and all worker goroutines need to be exited
// before shutting down syncer to avoid goroutine leak.
bg.syncer.terminate()
bg.subscriber.terminate()
bg.heartbeater.terminate()
bg.wg.Wait()
bg.rdb.Close()
bg.processor.handler = nil
bg.running = false
}
// normalizeQueueCfg divides priority numbers by their
// greatest common divisor.
func normalizeQueueCfg(queueCfg map[string]uint) map[string]uint {
var xs []uint
for _, x := range queueCfg {
xs = append(xs, x)
}
d := gcd(xs...)
res := make(map[string]uint)
for q, x := range queueCfg {
res[q] = x / d
}
return res
}
func gcd(xs ...uint) uint {
fn := func(x, y uint) uint {
for y > 0 {
x, y = y, x%y
}
return x
}
res := xs[0]
for i := 0; i < len(xs); i++ {
res = fn(xs[i], res)
if res == 1 {
return 1
}
}
return res
bg.logger.Info("Bye!")
}

background_test.go

@@ -5,6 +5,7 @@
package asynq
import (
"context"
"testing"
"time"
@@ -27,31 +28,37 @@ func TestBackground(t *testing.T) {
})
// no-op handler
h := func(task *Task) error {
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
client.Schedule(NewTask("send_email", map[string]interface{}{"recipient_id": 123}), time.Now())
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
client.Schedule(NewTask("send_email", map[string]interface{}{"recipient_id": 456}), time.Now().Add(time.Hour))
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []uint
want uint
input []int
want int
}{
{[]uint{6, 2, 12}, 2},
{[]uint{3, 3, 3}, 3},
{[]uint{6, 3, 1}, 1},
{[]uint{1}, 1},
{[]uint{1, 0, 2}, 1},
{[]uint{8, 0, 4}, 4},
{[]uint{9, 12, 18, 30}, 3},
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
@@ -64,46 +71,46 @@ func TestGCD(t *testing.T) {
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]uint
want map[string]uint
input map[string]int
want map[string]int
}{
{
input: map[string]uint{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]uint{
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]uint{
input: map[string]int{
"default": 10,
},
want: map[string]uint{
want: map[string]int{
"default": 1,
},
},
{
input: map[string]uint{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]uint{
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]uint{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]uint{
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,

benchmark_test.go

@@ -5,6 +5,7 @@
package asynq
import (
"context"
"fmt"
"math/rand"
"sync"
@@ -12,8 +13,7 @@ import (
"time"
)
// Simple E2E Benchmark testing with no scheduled tasks and
// no retries.
// Simple E2E Benchmark testing with no scheduled tasks and retries.
func BenchmarkEndToEndSimple(b *testing.B) {
const count = 100000
for n := 0; n < b.N; n++ {
@@ -33,12 +33,14 @@ func BenchmarkEndToEndSimple(b *testing.B) {
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
client.Schedule(t, time.Now())
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(count)
handler := func(t *Task) error {
handler := func(ctx context.Context, t *Task) error {
wg.Done()
return nil
}
@@ -74,16 +76,20 @@ func BenchmarkEndToEnd(b *testing.B) {
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
client.Schedule(t, time.Now())
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
client.Schedule(t, time.Now().Add(time.Second))
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(count * 2)
handler := func(t *Task) error {
handler := func(ctx context.Context, t *Task) error {
// randomly fail 1% of tasks
if rand.Intn(100) == 1 {
return fmt.Errorf(":(")
@@ -101,3 +107,64 @@ func BenchmarkEndToEnd(b *testing.B) {
b.StartTimer() // end teardown
}
}
// Simple E2E Benchmark testing with no scheduled tasks and retries with multiple queues.
func BenchmarkEndToEndMultipleQueues(b *testing.B) {
// number of tasks to create for each queue
const (
highCount = 20000
defaultCount = 20000
lowCount = 20000
)
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := &RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis)
bg := NewBackground(redis, &Config{
Concurrency: 10,
Queues: map[string]int{
"high": 6,
"default": 3,
"low": 1,
},
})
// Create a bunch of tasks
for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
var wg sync.WaitGroup
wg.Add(highCount + defaultCount + lowCount)
handler := func(ctx context.Context, t *Task) error {
wg.Done()
return nil
}
b.StartTimer() // end setup
bg.start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
bg.stop()
b.StartTimer() // end teardown
}
}

client.go

@@ -5,6 +5,9 @@
package asynq
import (
"errors"
"fmt"
"sort"
"strings"
"time"
@@ -36,6 +39,9 @@ type Option interface{}
type (
retryOption int
queueOption string
timeoutOption time.Duration
deadlineOption time.Time
uniqueOption time.Duration
)
// MaxRetry returns an option to specify the max number of times
@@ -56,15 +62,50 @@ func Queue(name string) Option {
return queueOption(strings.ToLower(name))
}
// Timeout returns an option to specify how long a task may run.
//
// Zero duration means no limit.
func Timeout(d time.Duration) Option {
return timeoutOption(d)
}
// Deadline returns an option to specify the deadline for the given task.
func Deadline(t time.Time) Option {
return deadlineOption(t)
}
// Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired, another task with the same uniqueness may be enqueued.
// ErrDuplicateTask error is returned when enqueueing a duplicate task.
//
// Uniqueness of a task is based on the following properties:
// - Task Type
// - Task Payload
// - Queue Name
func Unique(ttl time.Duration) Option {
return uniqueOption(ttl)
}
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
var ErrDuplicateTask = errors.New("task already exists")
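// NOTE: Illustrative sketch, not part of this diff. Duplicate detection uses
// the error-wrapping support that motivated the Go 1.13 requirement:
//
//	err := client.Enqueue(task, Unique(time.Hour))
//	if errors.Is(err, ErrDuplicateTask) {
//		// A task with the same type, payload, and queue is already enqueued.
//	}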
type option struct {
retry int
queue string
timeout time.Duration
deadline time.Time
uniqueTTL time.Duration
}
func composeOptions(opts ...Option) option {
res := option{
retry: defaultMaxRetry,
queue: base.DefaultQueueName,
timeout: 0,
deadline: time.Time{},
}
for _, opt := range opts {
switch opt := opt.(type) {
@@ -72,6 +113,12 @@ func composeOptions(opts ...Option) option {
res.retry = int(opt)
case queueOption:
res.queue = string(opt)
case timeoutOption:
res.timeout = time.Duration(opt)
case deadlineOption:
res.deadline = time.Time(opt)
case uniqueOption:
res.uniqueTTL = time.Duration(opt)
default:
// ignore unexpected option
}
@@ -79,19 +126,51 @@ func composeOptions(opts ...Option) option {
return res
}
// uniqueKey computes the redis key used for the given task.
// It returns an empty string if ttl is zero.
func uniqueKey(t *Task, ttl time.Duration, qname string) string {
if ttl == 0 {
return ""
}
return fmt.Sprintf("%s:%s:%s", t.Type, serializePayload(t.Payload.data), qname)
}
func serializePayload(payload map[string]interface{}) string {
if payload == nil {
return "nil"
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
}
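// For illustration (mirroring the TestUniqueKey cases in client_test.go
// below): a task created with
// NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true})
// enqueued on the "default" queue with a nonzero TTL yields the key
//
//	"email:send:a=123,b=hello,c=true:default"
//
// Sorting the entries by key makes serialization deterministic regardless of
// map iteration order.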
const (
// Max retry count by default
defaultMaxRetry = 25
)
// Schedule registers a task to be processed at the specified time.
// EnqueueAt schedules task to be enqueued at the specified time.
//
// Schedule returns nil if the task is registered successfully,
// otherwise returns a non-nil error.
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// opts specifies the behavior of task processing. If there are conflicting
// Option values the last one overrides others.
func (c *Client) Schedule(task *Task, processAt time.Time, opts ...Option) error {
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
opt := composeOptions(opts...)
msg := &base.TaskMessage{
ID: xid.New(),
@@ -99,13 +178,53 @@ func (c *Client) Schedule(task *Task, processAt time.Time, opts ...Option) error
Payload: task.Payload.data,
Queue: opt.queue,
Retry: opt.retry,
Timeout: opt.timeout.String(),
Deadline: opt.deadline.Format(time.RFC3339),
UniqueKey: uniqueKey(task, opt.uniqueTTL, opt.queue),
}
return c.enqueue(msg, processAt)
var err error
if time.Now().After(t) {
err = c.enqueue(msg, opt.uniqueTTL)
} else {
err = c.schedule(msg, t, opt.uniqueTTL)
}
if err == rdb.ErrDuplicateTask {
return fmt.Errorf("%w", ErrDuplicateTask)
}
return err
}
func (c *Client) enqueue(msg *base.TaskMessage, processAt time.Time) error {
if time.Now().After(processAt) {
return c.rdb.Enqueue(msg)
}
return c.rdb.Schedule(msg, processAt)
// Enqueue enqueues task to be processed immediately.
//
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now(), task, opts...)
}
// EnqueueIn schedules task to be enqueued after the specified delay.
//
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now().Add(d), task, opts...)
}
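// NOTE: Illustrative sketch, not part of this diff. The three variants
// introduced in v0.5.0 relate as follows:
//
//	c.Enqueue(task)                              // process immediately
//	c.EnqueueAt(time.Now().Add(time.Hour), task) // process at a specific time
//	c.EnqueueIn(24*time.Hour, task)              // process after a delay
//
// With conflicting options the last one wins, e.g.
// c.Enqueue(task, MaxRetry(2), MaxRetry(10)) stores Retry: 10.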
func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
return c.rdb.EnqueueUnique(msg, uniqueTTL)
}
return c.rdb.Enqueue(msg)
}
func (c *Client) schedule(msg *base.TaskMessage, t time.Time, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
ttl := t.Add(uniqueTTL).Sub(time.Now())
return c.rdb.ScheduleUnique(msg, t, ttl)
}
return c.rdb.Schedule(msg, t)
}

client_test.go

@@ -5,23 +5,33 @@
package asynq
import (
"errors"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
func TestClient(t *testing.T) {
func TestClientEnqueueAt(t *testing.T) {
r := setup(t)
client := NewClient(&RedisClientOpt{
Addr: "localhost:6379",
DB: 14,
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
now = time.Now()
oneHourLater = now.Add(time.Hour)
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct {
desc string
task *Task
@@ -33,15 +43,17 @@ func TestClient(t *testing.T) {
{
desc: "Process task immediately",
task: task,
processAt: time.Now(),
processAt: now,
opts: []Option{},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
@@ -50,7 +62,7 @@ func TestClient(t *testing.T) {
{
desc: "Schedule task to be processed in the future",
task: task,
processAt: time.Now().Add(2 * time.Hour),
processAt: oneHourLater,
opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
@@ -60,113 +72,19 @@ func TestClient(t *testing.T) {
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
Score: float64(time.Now().Add(2 * time.Hour).Unix()),
Score: float64(oneHourLater.Unix()),
},
},
},
{
desc: "Process task immediately with a custom retry count",
task: task,
processAt: time.Now(),
opts: []Option{
MaxRetry(3),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 3,
Queue: "default",
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Negative retry count",
task: task,
processAt: time.Now(),
opts: []Option{
MaxRetry(-2),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero
Queue: "default",
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Conflicting options",
task: task,
processAt: time.Now(),
opts: []Option{
MaxRetry(2),
MaxRetry(10),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: 10, // Last option takes precedence
Queue: "default",
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "With queue option",
task: task,
processAt: time.Now(),
opts: []Option{
Queue("custom"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"custom": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "custom",
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
{
desc: "Queue option should be case-insensitive",
task: task,
processAt: time.Now(),
opts: []Option{
Queue("HIGH"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"high": []*base.TaskMessage{
&base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "high",
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.Schedule(tc.task, tc.processAt, tc.opts...)
err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
@@ -185,3 +103,470 @@ func TestClient(t *testing.T) {
}
}
}
func TestClientEnqueue(t *testing.T) {
r := setup(t)
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct {
desc string
task *Task
opts []Option
wantEnqueued map[string][]*base.TaskMessage
}{
{
desc: "Process task immediately with a custom retry count",
task: task,
opts: []Option{
MaxRetry(3),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 3,
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
},
{
desc: "Negative retry count",
task: task,
opts: []Option{
MaxRetry(-2),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 0, // Retry count should be set to zero
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
},
{
desc: "Conflicting options",
task: task,
opts: []Option{
MaxRetry(2),
MaxRetry(10),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: 10, // Last option takes precedence
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
},
{
desc: "With queue option",
task: task,
opts: []Option{
Queue("custom"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"custom": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "custom",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
},
{
desc: "Queue option should be case-insensitive",
task: task,
opts: []Option{
Queue("HIGH"),
},
wantEnqueued: map[string][]*base.TaskMessage{
"high": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "high",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
},
{
desc: "With timeout option",
task: task,
opts: []Option{
Timeout(20 * time.Second),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: (20 * time.Second).String(),
Deadline: noDeadline,
},
},
},
},
{
desc: "With deadline option",
task: task,
opts: []Option{
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: noTimeout,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Format(time.RFC3339),
},
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
for qname, want := range tc.wantEnqueued {
got := h.GetEnqueuedMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
}
}
}
}
func TestClientEnqueueIn(t *testing.T) {
r := setup(t)
client := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct {
desc string
task *Task
delay time.Duration
opts []Option
wantEnqueued map[string][]*base.TaskMessage
wantScheduled []h.ZSetEntry
}{
{
desc: "schedule a task to be enqueued in one hour",
task: task,
delay: time.Hour,
opts: []Option{},
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil
wantScheduled: []h.ZSetEntry{
{
Msg: &base.TaskMessage{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
Score: float64(time.Now().Add(time.Hour).Unix()),
},
},
},
{
desc: "Zero delay",
task: task,
delay: 0,
opts: []Option{},
wantEnqueued: map[string][]*base.TaskMessage{
"default": {
{
Type: task.Type,
Payload: task.Payload.data,
Retry: defaultMaxRetry,
Queue: "default",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
},
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
err := client.EnqueueIn(tc.delay, tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
}
}
gotScheduled := h.GetScheduledEntries(t, r)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff)
}
}
}
func TestUniqueKey(t *testing.T) {
tests := []struct {
desc string
task *Task
ttl time.Duration
qname string
want string
}{
{
"with zero TTL",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
0,
"default",
"",
},
{
"with primitive types",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with unsorted keys",
NewTask("email:send", map[string]interface{}{"b": "hello", "c": true, "a": 123}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with composite types",
NewTask("email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}}),
10 * time.Minute,
"default",
"email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]:default",
},
{
"with complex types",
NewTask("email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour}),
10 * time.Minute,
"default",
"email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC:default",
},
{
"with nil payload",
NewTask("reindex", nil),
10 * time.Minute,
"default",
"reindex:nil:default",
},
}
for _, tc := range tests {
got := uniqueKey(tc.task, tc.ttl, tc.qname)
if got != tc.want {
t.Errorf("%s: uniqueKey(%v, %v, %q) = %q, want %q", tc.desc, tc.task, tc.ttl, tc.qname, got, tc.want)
}
}
}
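
The expected strings above pin down the key format: task type, then the payload's key=value pairs sorted by key, then the queue name, joined by colons. The following is a minimal sketch of a function producing that format, for illustration only; it is not necessarily the package's exact implementation.

package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

// uniqueKeySketch mimics the format asserted in TestUniqueKey:
// "<type>:<k1=v1,k2=v2,...>:<qname>". Payload keys are sorted so that
// logically equal payloads yield identical keys; a zero TTL means the
// task is not unique, so no key is produced.
func uniqueKeySketch(taskType string, payload map[string]interface{}, ttl time.Duration, qname string) string {
	if ttl == 0 {
		return ""
	}
	if payload == nil {
		return fmt.Sprintf("%s:nil:%s", taskType, qname)
	}
	keys := make([]string, 0, len(payload))
	for k := range payload {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		// fmt's %v prints nested maps with sorted keys, which matches
		// the composite-type expectations in the test above.
		pairs = append(pairs, fmt.Sprintf("%s=%v", k, payload[k]))
	}
	return fmt.Sprintf("%s:%s:%s", taskType, strings.Join(pairs, ","), qname)
}

func main() {
	key := uniqueKeySketch("email:send",
		map[string]interface{}{"b": "hello", "c": true, "a": 123},
		10*time.Minute, "default")
	fmt.Println(key) // email:send:a=123,b=hello,c=true:default
}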
func TestEnqueueUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
ttl time.Duration
}{
{
NewTask("email", map[string]interface{}{"user_id": 123}),
time.Hour,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
err := c.Enqueue(tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
continue
}
// Enqueue the task again. It should fail.
err = c.Enqueue(tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
func TestEnqueueInUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
d time.Duration
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Hour,
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
err := c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
err = c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
func TestEnqueueAtUnique(t *testing.T) {
r := setup(t)
c := NewClient(RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
})
tests := []struct {
task *Task
at time.Time
ttl time.Duration
}{
{
NewTask("reindex", nil),
time.Now().Add(time.Hour),
10 * time.Minute,
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed.
err := c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err != nil {
t.Fatal(err)
}
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val()
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
continue
}
// Enqueue the task again. It should fail.
err = c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl))
if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue
}
if !errors.Is(err, ErrDuplicateTask) {
t.Errorf("Enqueueing %+v returned an error that is not ErrDuplicateTask", tc.task)
continue
}
}
}
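
Taken together, these three tests describe the caller-facing contract for the Unique option: a second enqueue of an identical task within the uniqueness window fails with an error that unwraps to ErrDuplicateTask. A hedged usage sketch follows; the task type, payload, and Redis address are illustrative.

package main

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	c := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	task := asynq.NewTask("email", map[string]interface{}{"user_id": 123})
	// Enqueue with a one-hour uniqueness window; an identical task enqueued
	// within that window is reported via ErrDuplicateTask.
	err := c.Enqueue(task, asynq.Unique(time.Hour))
	switch {
	case errors.Is(err, asynq.ErrDuplicateTask):
		log.Println("task already enqueued; skipping")
	case err != nil:
		log.Fatalf("could not enqueue task: %v", err)
	}
}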

doc.go

@@ -3,14 +3,14 @@
// that can be found in the LICENSE file.
/*
Package asynq provides a framework for background task processing.
Package asynq provides a framework for asynchronous task processing.
Asynq uses Redis as a message broker. To connect to redis server,
specify the options using one of RedisConnOpt types.
redis = &asynq.RedisClientOpt{
Addr: "localhost:6379",
Password: "secretpassword",
Addr: "127.0.0.1:6379",
Password: "xxxxx",
DB: 3,
}
@@ -24,8 +24,11 @@ Task is created with two parameters: its type and payload.
"send_email",
map[string]interface{}{"user_id": 42})
// Schedule the task t to be processed a minute from now.
err := client.Schedule(t, time.Now().Add(time.Minute))
// Enqueue the task to be processed immediately.
err := client.Enqueue(t)
// Schedule the task to be processed in one minute.
err = client.EnqueueIn(time.Minute, t)
The Background is used to run the background task processing with a given
handler.
@@ -45,7 +48,7 @@ Example of a type that implements the Handler interface.
// ...
}
func (h *TaskHandler) ProcessTask(task *asynq.Task) error {
func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error {
switch task.Type {
case "send_email":
id, err := task.Payload.GetInt("user_id")

[Binary image assets added in this commit range: one file whose name is not shown above (1.5 MiB), docs/assets/asynqmon_ps.gif (582 KiB), docs/assets/demo.gif (983 KiB), docs/assets/task-queue.png (54 KiB); previews omitted.]
go.mod

@@ -3,19 +3,12 @@ module github.com/hibiken/asynq
go 1.13
require (
github.com/go-redis/redis/v7 v7.0.0-beta.4
github.com/go-redis/redis/v7 v7.2.0
github.com/google/go-cmp v0.4.0
github.com/mitchellh/go-homedir v1.1.0
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/rs/xid v1.2.1
github.com/spf13/afero v1.2.2 // indirect
github.com/spf13/cast v1.3.1
github.com/spf13/cobra v0.0.5
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.6.0
go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect
golang.org/x/text v0.3.2 // indirect
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect
)

go.sum

@@ -1,213 +1,74 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis/v7 v7.0.0-beta.4 h1:p6z7Pde69EGRWvlC++y8aFcaWegyrKHzOBGo0zUACTQ=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2 h1:VUFqw5KcqRf7i70GOzW7N+Q7+gxVBkSSqiXB12+JQ4M=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0 h1:qSjVKzM2dmqQLutPN4Y0SEzDpAf7T6HHIT3E2Xr75Gg=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092 h1:4QSRKanuywn15aTZvI/mIDEgPQpswuFndXpOj3rKEco=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2 h1:CCH4IOTTfewWjGOlSp+zGcjutRKlBEZQ6wTn8ozI/nI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a h1:1n5lsVfiQW3yfsRGu98756EH1YthsFqr/5mxHduZW2A=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

heartbeat.go (new file)

@@ -0,0 +1,73 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
// heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up.
type heartbeater struct {
logger Logger
rdb *rdb.RDB
ps *base.ProcessState
// channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{}
// interval between heartbeats.
interval time.Duration
}
func newHeartbeater(l Logger, rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater {
return &heartbeater{
logger: l,
rdb: rdb,
ps: ps,
done: make(chan struct{}),
interval: interval,
}
}
func (h *heartbeater) terminate() {
h.logger.Info("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop.
h.done <- struct{}{}
}
func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning)
wg.Add(1)
go func() {
defer wg.Done()
h.beat()
for {
select {
case <-h.done:
h.rdb.ClearProcessState(h.ps)
h.logger.Info("Heartbeater done")
return
case <-time.After(h.interval):
h.beat()
}
}
}()
}
func (h *heartbeater) beat() {
// Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2)
if err != nil {
h.logger.Error("could not write heartbeat data: %v", err)
}
}

heartbeat_test.go (new file)

@@ -0,0 +1,103 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestHeartbeater(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
tests := []struct {
interval time.Duration
host string
pid int
queues map[string]int
concurrency int
}{
{time.Second, "localhost", 45678, map[string]int{"default": 1}, 10},
}
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
for _, tc := range tests {
h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
hb := newHeartbeater(testLogger, rdbClient, state, tc.interval)
var wg sync.WaitGroup
hb.start(&wg)
want := &base.ProcessInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: time.Now(),
Status: "running",
}
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
ps, err := rdbClient.ListProcesses()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
hb.terminate()
continue
}
// status change
state.SetStatus(base.StatusStopped)
// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)
want.Status = "stopped"
ps, err = rdbClient.ListProcesses()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
continue
}
if len(ps) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
hb.terminate()
continue
}
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
hb.terminate()
continue
}
hb.terminate()
}
}


@@ -41,6 +41,34 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
return out
})
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host
}
return out[i].PID < out[j].PID
})
return out
})
// SortWorkerInfoOpt is a cmp.Option to sort base.WorkerInfo for comparing slice of worker info.
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String()
})
return out
})
// SortStringSliceOpt is a cmp.Option to sort string slice.
var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string {
out := append([]string(nil), in...)
sort.Strings(out)
return out
})
// IgnoreIDOpt is a cmp.Option to ignore the ID field in task messages when comparing.
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")


@@ -6,7 +6,10 @@
package base
import (
"context"
"fmt"
"strings"
"sync"
"time"
"github.com/rs/xid"
@@ -17,6 +20,10 @@ const DefaultQueueName = "default"
// Redis keys
const (
AllProcesses = "asynq:ps" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid>
AllWorkers = "asynq:workers" // ZSET
workersPrefix   = "asynq:workers:"   // HASH - asynq:workers:<host>:<pid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
@@ -26,25 +33,34 @@ const (
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
CancelChannel = "asynq:cancel" // PubSub channel
)
// QueueKey returns a redis key string for the given queue name.
// QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string {
return QueuePrefix + strings.ToLower(qname)
}
// ProcessedKey returns a redis key string for processed count
// for the given day.
// ProcessedKey returns a redis key for processed count for the given day.
func ProcessedKey(t time.Time) string {
return processedPrefix + t.UTC().Format("2006-01-02")
}
// FailureKey returns a redis key string for failure count
// for the given day.
// FailureKey returns a redis key for failure count for the given day.
func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02")
}
// ProcessInfoKey returns a redis key for process info.
func ProcessInfoKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid)
}
// WorkersKey returns a redis key for the workers given hostname and pid.
func WorkersKey(hostname string, pid int) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid)
}
// TaskMessage is the internal representation of a task with additional metadata fields.
// Serialized data of this type gets written to redis.
type TaskMessage struct {
@@ -68,4 +84,232 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure.
ErrorMsg string
// Timeout specifies how long a task may run.
// The string value should be compatible with time.ParseDuration.
//
// Zero means no limit.
Timeout string
// Deadline specifies the deadline for the task.
// The task is not processed if it has exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
// UniqueKey holds the redis key used for uniqueness lock for this task.
//
// Empty string indicates that no uniqueness lock was used.
UniqueKey string
}
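
Since Timeout and Deadline travel through Redis as strings, their encode/decode round trip looks like the sketch below; the values are illustrative, and the zero encodings ("0s" and the RFC3339 form of time.Time's zero value) are the same noTimeout/noDeadline constants used in the client tests above.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Encode: Timeout uses time.Duration's String form, Deadline uses RFC3339.
	timeout := (20 * time.Second).String() // "20s"
	deadline := time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC).Format(time.RFC3339)

	// Decode: a zero duration means no limit, a zero time means no deadline.
	d, err := time.ParseDuration(timeout)
	if err != nil {
		panic(err)
	}
	t, err := time.Parse(time.RFC3339, deadline)
	if err != nil {
		panic(err)
	}
	fmt.Println(d, t.Format(time.RFC3339), d == 0, t.IsZero())
	// Output: 20s 2020-06-24T00:00:00Z false false
}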
// ProcessState holds process level information.
//
// ProcessStates are safe for concurrent use by multiple goroutines.
type ProcessState struct {
mu sync.Mutex // guards all data fields
concurrency int
queues map[string]int
strictPriority bool
pid int
host string
status PStatus
started time.Time
workers map[string]*workerStats
}
// PStatus represents status of a process.
type PStatus int
const (
// StatusIdle indicates process is in idle state.
StatusIdle PStatus = iota
// StatusRunning indicates process is up and processing tasks.
StatusRunning
// StatusStopped indicates process is up but not processing new tasks.
StatusStopped
)
var statuses = []string{
"idle",
"running",
"stopped",
}
func (s PStatus) String() string {
if StatusIdle <= s && s <= StatusStopped {
return statuses[s]
}
return "unknown status"
}
type workerStats struct {
msg *TaskMessage
started time.Time
}
// NewProcessState returns a new instance of ProcessState.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState {
return &ProcessState{
host: host,
pid: pid,
concurrency: concurrency,
queues: cloneQueueConfig(queues),
strictPriority: strict,
status: StatusIdle,
workers: make(map[string]*workerStats),
}
}
// SetStatus updates the status of the process.
func (ps *ProcessState) SetStatus(status PStatus) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.status = status
}
// SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.started = t
}
// AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock()
defer ps.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started}
}
// DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock()
defer ps.mu.Unlock()
delete(ps.workers, msg.ID.String())
}
// Get returns the current state of the process as a ProcessInfo.
func (ps *ProcessState) Get() *ProcessInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
return &ProcessInfo{
Host: ps.host,
PID: ps.pid,
Concurrency: ps.concurrency,
Queues: cloneQueueConfig(ps.queues),
StrictPriority: ps.strictPriority,
Status: ps.status.String(),
Started: ps.started,
ActiveWorkerCount: len(ps.workers),
}
}
// GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo {
ps.mu.Lock()
defer ps.mu.Unlock()
var res []*WorkerInfo
for _, w := range ps.workers {
res = append(res, &WorkerInfo{
Host: ps.host,
PID: ps.pid,
ID: w.msg.ID,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: clonePayload(w.msg.Payload),
Started: w.started,
})
}
return res
}
func cloneQueueConfig(qcfg map[string]int) map[string]int {
res := make(map[string]int)
for qname, n := range qcfg {
res[qname] = n
}
return res
}
func clonePayload(payload map[string]interface{}) map[string]interface{} {
res := make(map[string]interface{})
for k, v := range payload {
res[k] = v
}
return res
}
// ProcessInfo holds information about a running background worker process.
type ProcessInfo struct {
Host string
PID int
Concurrency int
Queues map[string]int
StrictPriority bool
Status string
Started time.Time
ActiveWorkerCount int
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
Host string
PID int
ID xid.ID
Type string
Queue string
Payload map[string]interface{}
Started time.Time
}
// Cancelations is a collection that holds cancel functions for all in-progress tasks.
//
// Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct {
mu sync.Mutex
cancelFuncs map[string]context.CancelFunc
}
// NewCancelations returns a Cancelations instance.
func NewCancelations() *Cancelations {
return &Cancelations{
cancelFuncs: make(map[string]context.CancelFunc),
}
}
// Add adds a new cancel func to the collection.
func (c *Cancelations) Add(id string, fn context.CancelFunc) {
c.mu.Lock()
defer c.mu.Unlock()
c.cancelFuncs[id] = fn
}
// Delete deletes a cancel func from the collection given an id.
func (c *Cancelations) Delete(id string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.cancelFuncs, id)
}
// Get returns a cancel func given an id.
func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
c.mu.Lock()
defer c.mu.Unlock()
fn, ok = c.cancelFuncs[id]
return fn, ok
}
// GetAll returns all cancel funcs.
func (c *Cancelations) GetAll() []context.CancelFunc {
c.mu.Lock()
defer c.mu.Unlock()
var res []context.CancelFunc
for _, fn := range c.cancelFuncs {
res = append(res, fn)
}
return res
}
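
A hedged sketch of the intended call pattern inside the package: the processor registers a cancel func for each in-progress task, and the goroutine subscribed to the cancelation pub/sub channel looks the func up by task ID and invokes it. Everything here other than the Cancelations API itself is illustrative.

cancelations := NewCancelations()

// Processor side: make the task's context cancelable and register it.
ctx, cancel := context.WithCancel(context.Background())
taskID := "9bsv0s99dslvs9kendpg" // illustrative xid string
cancelations.Add(taskID, cancel)
defer cancelations.Delete(taskID) // unregister once the task finishes

// Subscriber side: on receiving a cancelation request for taskID, look up
// and invoke the cancel func, which aborts the handler through ctx.
if fn, ok := cancelations.Get(taskID); ok {
	fn()
}
_ = ctx // the handler would receive this ctx and honor ctx.Done()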


@@ -5,8 +5,14 @@
package base
import (
"context"
"math/rand"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/rs/xid"
)
func TestQueueKey(t *testing.T) {
@@ -60,3 +66,151 @@ func TestFailureKey(t *testing.T) {
}
}
}
func TestProcessInfoKey(t *testing.T) {
tests := []struct {
hostname string
pid int
want string
}{
{"localhost", 9876, "asynq:ps:localhost:9876"},
{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"},
}
for _, tc := range tests {
got := ProcessInfoKey(tc.hostname, tc.pid)
if got != tc.want {
t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
}
}
}
func TestWorkersKey(t *testing.T) {
tests := []struct {
hostname string
pid int
want string
}{
{"localhost", 9876, "asynq:workers:localhost:9876"},
{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"},
}
for _, tc := range tests {
got := WorkersKey(tc.hostname, tc.pid)
if got != tc.want {
t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want)
}
}
}
// Test for process state being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestProcessStateConcurrentAccess(t *testing.T) {
ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
var wg sync.WaitGroup
started := time.Now()
msgs := []*TaskMessage{
{ID: xid.New(), Type: "type1", Payload: map[string]interface{}{"user_id": 42}},
{ID: xid.New(), Type: "type2"},
{ID: xid.New(), Type: "type3"},
}
// Simulate heartbeater calling SetStatus and SetStarted.
wg.Add(1)
go func() {
defer wg.Done()
ps.SetStarted(started)
ps.SetStatus(StatusRunning)
}()
// Simulate processor starting worker goroutines.
for _, msg := range msgs {
wg.Add(1)
ps.AddWorkerStats(msg, time.Now())
go func(msg *TaskMessage) {
defer wg.Done()
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
ps.DeleteWorkerStats(msg)
}(msg)
}
// Simulate heartbeater calling Get and GetWorkers.
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 5; i++ {
ps.Get()
ps.GetWorkers()
time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
}
}()
wg.Wait()
want := &ProcessInfo{
Host: "127.0.0.1",
PID: 1234,
Concurrency: 10,
Queues: map[string]int{"default": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
got := ps.Get()
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
got, want, diff)
}
}
// Test for cancelations being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestCancelationsConcurrentAccess(t *testing.T) {
c := NewCancelations()
_, cancel1 := context.WithCancel(context.Background())
_, cancel2 := context.WithCancel(context.Background())
_, cancel3 := context.WithCancel(context.Background())
var key1, key2, key3 = "key1", "key2", "key3"
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key1, cancel1)
}()
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key2, cancel2)
time.Sleep(200 * time.Millisecond)
c.Delete(key2)
}()
wg.Add(1)
go func() {
defer wg.Done()
c.Add(key3, cancel3)
}()
wg.Wait()
_, ok := c.Get(key1)
if !ok {
t.Errorf("(*Cancelations).Get(%q) = _, false, want <function>, true", key1)
}
_, ok = c.Get(key2)
if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
}
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
}

internal/log/log.go (new file)

@@ -0,0 +1,57 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package log exports logging related types and functions.
package log
import (
"io"
stdlog "log"
"os"
)
// NewLogger creates and returns a new instance of Logger.
func NewLogger(out io.Writer) *Logger {
return &Logger{
stdlog.New(out, "", stdlog.Ldate|stdlog.Ltime|stdlog.Lmicroseconds|stdlog.LUTC),
}
}
// Logger is a wrapper object around log.Logger from the standard library.
// It supports logging at various log levels.
type Logger struct {
*stdlog.Logger
}
// Debug logs a message at Debug level.
func (l *Logger) Debug(format string, args ...interface{}) {
format = "DEBUG: " + format
l.Printf(format, args...)
}
// Info logs a message at Info level.
func (l *Logger) Info(format string, args ...interface{}) {
format = "INFO: " + format
l.Printf(format, args...)
}
// Warn logs a message at Warning level.
func (l *Logger) Warn(format string, args ...interface{}) {
format = "WARN: " + format
l.Printf(format, args...)
}
// Error logs a message at Error level.
func (l *Logger) Error(format string, args ...interface{}) {
format = "ERROR: " + format
l.Printf(format, args...)
}
// Fatal logs a message at Fatal level
// and process will exit with status set to 1.
func (l *Logger) Fatal(format string, args ...interface{}) {
format = "FATAL: " + format
l.Printf(format, args...)
os.Exit(1)
}
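
A minimal usage sketch; since the package lives under internal/, it is importable only from within the asynq module, and the messages here are illustrative.

package main

import (
	"os"

	"github.com/hibiken/asynq/internal/log"
)

func main() {
	logger := log.NewLogger(os.Stderr)
	logger.Info("processing queue %q with concurrency %d", "default", 10)
	logger.Warn("retry %d of %d for task", 2, 25)
}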

internal/log/log_test.go (new file)

@@ -0,0 +1,153 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package log
import (
"bytes"
"fmt"
"regexp"
"testing"
)
// regexp for timestamps
const (
rgxdate = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
rgxtime = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
rgxmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
)
type tester struct {
desc string
message string
wantPattern string // regexp that log output must match
}
func TestLoggerDebug(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s DEBUG: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(&buf)
logger.Debug(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerInfo(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s INFO: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(&buf)
logger.Info(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerWarn(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s WARN: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(&buf)
logger.Warn(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}
func TestLoggerError(t *testing.T) {
tests := []tester{
{
desc: "without trailing newline, logger adds newline",
message: "hello, world!",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
{
desc: "with trailing newline, logger preserves newline",
message: "hello, world!\n",
wantPattern: fmt.Sprintf("^%s %s%s ERROR: hello, world!\n$", rgxdate, rgxtime, rgxmicroseconds),
},
}
for _, tc := range tests {
var buf bytes.Buffer
logger := NewLogger(&buf)
logger.Error(tc.message)
got := buf.String()
matched, err := regexp.MatchString(tc.wantPattern, got)
if err != nil {
t.Fatal("pattern did not compile:", err)
}
if !matched {
t.Errorf("logger.info(%q) outputted %q, should match pattern %q",
tc.message, got, tc.wantPattern)
}
}
}


@@ -86,49 +86,48 @@ type DeadTask struct {
Queue string
}
// CurrentStats returns a current state of the queues.
func (r *RDB) CurrentStats() (*Stats, error) {
// KEYS[1] -> asynq:queues
// KEYS[2] -> asynq:in_progress
// KEYS[3] -> asynq:scheduled
// KEYS[4] -> asynq:retry
// KEYS[5] -> asynq:dead
// KEYS[6] -> asynq:processed:<yyyy-mm-dd>
// KEYS[7] -> asynq:failure:<yyyy-mm-dd>
script := redis.NewScript(`
local res = {}
local queues = redis.call("SMEMBERS", KEYS[1])
for _, qkey in ipairs(queues) do
// KEYS[1] -> asynq:queues
// KEYS[2] -> asynq:in_progress
// KEYS[3] -> asynq:scheduled
// KEYS[4] -> asynq:retry
// KEYS[5] -> asynq:dead
// KEYS[6] -> asynq:processed:<yyyy-mm-dd>
// KEYS[7] -> asynq:failure:<yyyy-mm-dd>
var currentStatsCmd = redis.NewScript(`
local res = {}
local queues = redis.call("SMEMBERS", KEYS[1])
for _, qkey in ipairs(queues) do
table.insert(res, qkey)
table.insert(res, redis.call("LLEN", qkey))
end
table.insert(res, KEYS[2])
table.insert(res, redis.call("LLEN", KEYS[2]))
table.insert(res, KEYS[3])
table.insert(res, redis.call("ZCARD", KEYS[3]))
table.insert(res, KEYS[4])
table.insert(res, redis.call("ZCARD", KEYS[4]))
table.insert(res, KEYS[5])
table.insert(res, redis.call("ZCARD", KEYS[5]))
local pcount = 0
local p = redis.call("GET", KEYS[6])
if p then
end
table.insert(res, KEYS[2])
table.insert(res, redis.call("LLEN", KEYS[2]))
table.insert(res, KEYS[3])
table.insert(res, redis.call("ZCARD", KEYS[3]))
table.insert(res, KEYS[4])
table.insert(res, redis.call("ZCARD", KEYS[4]))
table.insert(res, KEYS[5])
table.insert(res, redis.call("ZCARD", KEYS[5]))
local pcount = 0
local p = redis.call("GET", KEYS[6])
if p then
pcount = tonumber(p)
end
table.insert(res, "processed")
table.insert(res, pcount)
local fcount = 0
local f = redis.call("GET", KEYS[7])
if f then
end
table.insert(res, "processed")
table.insert(res, pcount)
local fcount = 0
local f = redis.call("GET", KEYS[7])
if f then
fcount = tonumber(f)
end
table.insert(res, "failed")
table.insert(res, fcount)
return res
`)
end
table.insert(res, "failed")
table.insert(res, fcount)
return res`)
// CurrentStats returns a current state of the queues.
func (r *RDB) CurrentStats() (*Stats, error) {
now := time.Now()
res, err := script.Run(r.client, []string{
res, err := currentStatsCmd.Run(r.client, []string{
base.AllQueues,
base.InProgressQueue,
base.ScheduledQueue,
@@ -173,6 +172,17 @@ func (r *RDB) CurrentStats() (*Stats, error) {
return stats, nil
}
var historicalStatsCmd = redis.NewScript(`
local res = {}
for _, key in ipairs(KEYS) do
local n = redis.call("GET", key)
if not n then
n = 0
end
table.insert(res, tonumber(n))
end
return res`)
// HistoricalStats returns a list of stats from the last n days.
func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
if n < 1 {
@@ -188,18 +198,7 @@ func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
keys = append(keys, base.ProcessedKey(ts))
keys = append(keys, base.FailureKey(ts))
}
script := redis.NewScript(`
local res = {}
for _, key in ipairs(KEYS) do
local n = redis.call("GET", key)
if not n then
n = 0
end
table.insert(res, tonumber(n))
end
return res
`)
res, err := script.Run(r.client, keys, len(keys)).Result()
res, err := historicalStatsCmd.Run(r.client, keys, len(keys)).Result()
if err != nil {
return nil, err
}
@@ -235,67 +234,46 @@ func (r *RDB) RedisInfo() (map[string]string, error) {
return info, nil
}
func reverse(x []string) {
for i := len(x)/2 - 1; i >= 0; i-- {
opp := len(x) - 1 - i
x[i], x[opp] = x[opp], x[i]
}
}
// Pagination specifies the page size and page number
// for the list operation.
type Pagination struct {
// Number of items in the page.
Size int
// Page number starting from zero.
Page int
}
func (p Pagination) start() int64 {
return int64(p.Size * p.Page)
}
func (p Pagination) stop() int64 {
return int64(p.Size*p.Page + p.Size - 1)
}
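
A quick check of the page arithmetic, as it might appear in a test within the package: with Size=20 and Page=2, start() is 40 and stop() is 59, so the third page covers items 40 through 59 inclusive.

pgn := Pagination{Size: 20, Page: 2}
fmt.Println(pgn.start(), pgn.stop()) // 40 59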
// ListEnqueued returns enqueued tasks that are ready to be processed.
//
// Queue names can be optionally passed to query only the specified queues.
// If none are passed, it will query all queues.
func (r *RDB) ListEnqueued(qnames ...string) ([]*EnqueuedTask, error) {
if len(qnames) == 0 {
return r.listAllEnqueued()
func (r *RDB) ListEnqueued(qname string, pgn Pagination) ([]*EnqueuedTask, error) {
qkey := base.QueueKey(qname)
if !r.client.SIsMember(base.AllQueues, qkey).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
return r.listEnqueued(qnames...)
}
func (r *RDB) listAllEnqueued() ([]*EnqueuedTask, error) {
script := redis.NewScript(`
local res = {}
local queues = redis.call("SMEMBERS", KEYS[1])
for _, qkey in ipairs(queues) do
local msgs = redis.call("LRANGE", qkey, 0, -1)
for _, msg in ipairs(msgs) do
table.insert(res, msg)
end
end
return res
`)
res, err := script.Run(r.client, []string{base.AllQueues}).Result()
// Note: Because tasks are LPUSHed onto the redis list, we calculate the
// range from the tail and reverse the result to paginate in FIFO order.
stop := -pgn.start() - 1
start := -pgn.stop() - 1
data, err := r.client.LRange(qkey, start, stop).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
return toEnqueuedTasks(data)
}
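
To unpack the negative indices in the note above: LPUSH prepends, so the oldest tasks sit at the tail of the list, and LRANGE accepts negative offsets counted from that tail. For Pagination{Size: 10, Page: 0} this yields start=-10 and stop=-1, i.e. the ten oldest tasks; the reverse call inside toEnqueuedTasks then restores oldest-first order within the page.

// Illustration for Size=10 (offsets are tail-relative):
//   Page 0: LRANGE key -10 -1   -> ten oldest tasks
//   Page 1: LRANGE key -20 -11  -> the next ten
// reverse() then puts each page in FIFO (oldest-first) order.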
func (r *RDB) listEnqueued(qnames ...string) ([]*EnqueuedTask, error) {
script := redis.NewScript(`
local res = {}
for _, qkey in ipairs(KEYS) do
local msgs = redis.call("LRANGE", qkey, 0, -1)
for _, msg in ipairs(msgs) do
table.insert(res, msg)
end
end
return res
`)
var keys []string
for _, q := range qnames {
keys = append(keys, base.QueueKey(q))
}
res, err := script.Run(r.client, keys).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
return toEnqueuedTasks(data)
}
func toEnqueuedTasks(data []string) ([]*EnqueuedTask, error) {
reverse(data)
var tasks []*EnqueuedTask
for _, s := range data {
var msg base.TaskMessage
@@ -314,11 +292,16 @@ func toEnqueuedTasks(data []string) ([]*EnqueuedTask, error) {
}
// ListInProgress returns all tasks that are currently being processed.
func (r *RDB) ListInProgress() ([]*InProgressTask, error) {
data, err := r.client.LRange(base.InProgressQueue, 0, -1).Result()
func (r *RDB) ListInProgress(pgn Pagination) ([]*InProgressTask, error) {
// Note: Because tasks are LPUSHed onto the redis list, we calculate the
// range from the tail and reverse the result to paginate in FIFO order.
stop := -pgn.start() - 1
start := -pgn.stop() - 1
data, err := r.client.LRange(base.InProgressQueue, start, stop).Result()
if err != nil {
return nil, err
}
reverse(data)
var tasks []*InProgressTask
for _, s := range data {
var msg base.TaskMessage
@@ -337,8 +320,8 @@ func (r *RDB) ListInProgress() ([]*InProgressTask, error) {
// ListScheduled returns all tasks that are scheduled to be processed
// in the future.
func (r *RDB) ListScheduled() ([]*ScheduledTask, error) {
data, err := r.client.ZRangeWithScores(base.ScheduledQueue, 0, -1).Result()
func (r *RDB) ListScheduled(pgn Pagination) ([]*ScheduledTask, error) {
data, err := r.client.ZRangeWithScores(base.ScheduledQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -368,8 +351,8 @@ func (r *RDB) ListScheduled() ([]*ScheduledTask, error) {
// ListRetry returns all tasks that have failed before and will be retried
// in the future.
func (r *RDB) ListRetry() ([]*RetryTask, error) {
data, err := r.client.ZRangeWithScores(base.RetryQueue, 0, -1).Result()
func (r *RDB) ListRetry(pgn Pagination) ([]*RetryTask, error) {
data, err := r.client.ZRangeWithScores(base.RetryQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -401,8 +384,8 @@ func (r *RDB) ListRetry() ([]*RetryTask, error) {
}
// ListDead returns all tasks that have exhausted their retry limit.
func (r *RDB) ListDead() ([]*DeadTask, error) {
data, err := r.client.ZRangeWithScores(base.DeadQueue, 0, -1).Result()
func (r *RDB) ListDead(pgn Pagination) ([]*DeadTask, error) {
data, err := r.client.ZRangeWithScores(base.DeadQueue, pgn.start(), pgn.stop()).Result()
if err != nil {
return nil, err
}
@@ -491,21 +474,21 @@ func (r *RDB) EnqueueAllDeadTasks() (int64, error) {
return r.removeAndEnqueueAll(base.DeadQueue)
}
func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
var removeAndEnqueueCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
local qkey = ARGV[3] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg)
return 1
end
end
return 0
`)
res, err := script.Run(r.client, []string{zset}, score, id, base.QueuePrefix).Result()
end
return 0`)
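A brief aside on this refactor (an observation, not part of the diff): go-redis's redis.NewScript hashes the source once, and Run first attempts EVALSHA, falling back to EVAL when Redis replies NOSCRIPT, so hoisting scripts into package-level vars avoids rebuilding them on every call. A minimal sketch of the pattern, with a hypothetical script:
var pingCmd = redis.NewScript(`return redis.call("PING")`)
func ping(c *redis.Client) error {
	// Run issues EVALSHA and transparently falls back to EVAL.
	return pingCmd.Run(c, nil).Err()
}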
func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
res, err := removeAndEnqueueCmd.Run(r.client, []string{zset}, score, id, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -516,18 +499,18 @@ func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
return n, nil
}
func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
script := redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
var removeAndEnqueueAllCmd = redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
end
return table.getn(msgs)
`)
res, err := script.Run(r.client, []string{zset}, base.QueuePrefix).Result()
redis.call("ZREM", KEYS[1], msg)
end
return table.getn(msgs)`)
func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
res, err := removeAndEnqueueAllCmd.Run(r.client, []string{zset}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -578,17 +561,16 @@ func (r *RDB) KillAllScheduledTasks() (int64, error) {
return r.removeAndKillAll(base.ScheduledQueue)
}
func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> score of the task to kill
// ARGV[2] -> id of the task to kill
// ARGV[3] -> current timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> score of the task to kill
// ARGV[2] -> id of the task to kill
// ARGV[3] -> current timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
var removeAndKillCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
@@ -597,12 +579,13 @@ func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
return 1
end
end
return 0
`)
end
return 0`)
func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := script.Run(r.client,
res, err := removeAndKillCmd.Run(r.client,
[]string{zset, base.DeadQueue},
score, id, now.Unix(), limit, maxDeadTasks).Result()
if err != nil {
@@ -615,25 +598,25 @@ func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
return n, nil
}
func (r *RDB) removeAndKillAll(zset string) (int64, error) {
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in dead queue (e.g., 100)
script := redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
// KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead
// ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in dead queue (e.g., 100)
var removeAndKillAllCmd = redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
redis.call("ZADD", KEYS[2], ARGV[1], msg)
redis.call("ZREM", KEYS[1], msg)
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[2])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[3])
end
return table.getn(msgs)
`)
end
return table.getn(msgs)`)
func (r *RDB) removeAndKillAll(zset string) (int64, error) {
now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := script.Run(r.client, []string{zset, base.DeadQueue},
res, err := removeAndKillAllCmd.Run(r.client, []string{zset, base.DeadQueue},
now.Unix(), limit, maxDeadTasks).Result()
if err != nil {
return 0, err
@@ -666,19 +649,19 @@ func (r *RDB) DeleteScheduledTask(id xid.ID, score int64) error {
return r.deleteTask(base.ScheduledQueue, id.String(), float64(score))
}
func (r *RDB) deleteTask(zset, id string, score float64) error {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
var deleteTaskCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then
redis.call("ZREM", KEYS[1], msg)
return 1
end
end
return 0
`)
res, err := script.Run(r.client, []string{zset}, score, id).Result()
end
return 0`)
func (r *RDB) deleteTask(zset, id string, score float64) error {
res, err := deleteTaskCmd.Run(r.client, []string{zset}, score, id).Result()
if err != nil {
return err
}
@@ -725,6 +708,27 @@ func (e *ErrQueueNotEmpty) Error() string {
return fmt.Sprintf("queue %q is not empty", e.qname)
}
// Skip checking whether queue is empty before removing.
var removeQueueForceCmd = redis.NewScript(`
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")`)
// Checks whether queue is empty before removing.
var removeQueueCmd = redis.NewScript(`
local l = redis.call("LLEN", KEYS[2])
if l > 0 then
return redis.error_reply("LIST NOT EMPTY")
end
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")`)
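The hunk below elides how these error replies surface to callers; a sketch under the assumption that the caller matches on the error text (ErrQueueNotEmpty is defined above; ErrQueueNotFound is assumed here as its sibling):
err := removeQueueCmd.Run(r.client, []string{base.AllQueues, base.QueueKey(qname)}).Err()
switch {
case err == nil:
	return nil
case err.Error() == "LIST NOT EMPTY":
	return &ErrQueueNotEmpty{qname}
case err.Error() == "LIST NOT FOUND":
	return &ErrQueueNotFound{qname} // hypothetical; not shown in this diff
default:
	return err
}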
// RemoveQueue removes the specified queue.
//
// If force is set to true, it will remove the queue regardless
@@ -734,27 +738,9 @@ func (e *ErrQueueNotEmpty) Error() string {
func (r *RDB) RemoveQueue(qname string, force bool) error {
var script *redis.Script
if force {
script = redis.NewScript(`
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")
`)
script = removeQueueForceCmd
} else {
script = redis.NewScript(`
local l = redis.call("LLEN", KEYS[2])
if l > 0 then
return redis.error_reply("LIST NOT EMPTY")
end
local n = redis.call("SREM", KEYS[1], KEYS[2])
if n == 0 then
return redis.error_reply("LIST NOT FOUND")
end
redis.call("DEL", KEYS[2])
return redis.status_reply("OK")
`)
script = removeQueueCmd
}
err := script.Run(r.client,
[]string{base.AllQueues, base.QueueKey(qname)},
@@ -771,3 +757,76 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
}
return nil
}
// Note: Script also removes stale keys.
var listProcessesCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local ps = redis.call("GET", key)
if ps then
table.insert(res, ps)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)
// ListProcesses returns the list of process statuses.
func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
res, err := listProcessesCmd.Run(r.client,
[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
var processes []*base.ProcessInfo
for _, s := range data {
var ps base.ProcessInfo
err := json.Unmarshal([]byte(s), &ps)
if err != nil {
continue // skip bad data
}
processes = append(processes, &ps)
}
return processes, nil
}
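The score in this ZSET is the key's expiration time, so the script reads only live entries and prunes the rest; a hypothetical timeline:
// A process writes its state with ttl=5s at t=100, so its key is added
// to asynq:ps with score 105. ListProcesses at t=103 returns it
// (105 >= 103); at t=106 the ZREMRANGEBYSCORE call prunes it instead.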
// Note: Script also removes stale keys.
var listWorkersCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local workers = redis.call("HVALS", key)
for _, w in ipairs(workers) do
table.insert(res, w)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)
// ListWorkers returns the list of worker stats.
func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
res, err := listWorkersCmd.Run(r.client, []string{base.AllWorkers}, time.Now().UTC().Unix()).Result()
if err != nil {
return nil, err
}
data, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
var workers []*base.WorkerInfo
for _, s := range data {
var w base.WorkerInfo
err := json.Unmarshal([]byte(s), &w)
if err != nil {
continue // skip bad data
}
workers = append(workers, &w)
}
return workers, nil
}

View File

@@ -5,6 +5,7 @@
package rdb
import (
"fmt"
"sort"
"testing"
"time"
@@ -231,24 +232,23 @@ func TestListEnqueued(t *testing.T) {
t1 := &EnqueuedTask{ID: m1.ID, Type: m1.Type, Payload: m1.Payload, Queue: m1.Queue}
t2 := &EnqueuedTask{ID: m2.ID, Type: m2.Type, Payload: m2.Payload, Queue: m2.Queue}
t3 := &EnqueuedTask{ID: m3.ID, Type: m3.Type, Payload: m3.Payload, Queue: m3.Queue}
t4 := &EnqueuedTask{ID: m4.ID, Type: m4.Type, Payload: m4.Payload, Queue: m4.Queue}
tests := []struct {
enqueued map[string][]*base.TaskMessage
qnames []string
qname string
want []*EnqueuedTask
}{
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m1, m2},
},
qnames: []string{},
qname: base.DefaultQueueName,
want: []*EnqueuedTask{t1, t2},
},
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {},
},
qnames: []string{},
qname: base.DefaultQueueName,
want: []*EnqueuedTask{},
},
{
@@ -257,8 +257,8 @@ func TestListEnqueued(t *testing.T) {
"critical": {m3},
"low": {m4},
},
qnames: []string{},
want: []*EnqueuedTask{t1, t2, t3, t4},
qname: base.DefaultQueueName,
want: []*EnqueuedTask{t1, t2},
},
{
enqueued: map[string][]*base.TaskMessage{
@@ -266,18 +266,9 @@ func TestListEnqueued(t *testing.T) {
"critical": {m3},
"low": {m4},
},
qnames: []string{"critical"},
qname: "critical",
want: []*EnqueuedTask{t3},
},
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m1, m2},
"critical": {m3},
"low": {m4},
},
qnames: []string{"critical", "low"},
want: []*EnqueuedTask{t3, t4},
},
}
for _, tc := range tests {
@@ -286,9 +277,10 @@ func TestListEnqueued(t *testing.T) {
h.SeedEnqueuedQueue(t, r.client, msgs, qname)
}
got, err := r.ListEnqueued(tc.qnames...)
got, err := r.ListEnqueued(tc.qname, Pagination{Size: 20, Page: 0})
op := fmt.Sprintf("r.ListEnqueued(%q, Pagination{Size: 20, Page: 0})", tc.qname)
if err != nil {
t.Errorf("r.ListEnqueued() = %v, %v, want %v, nil", got, err, tc.want)
t.Errorf("%s = %v, %v, want %v, nil", op, got, err, tc.want)
continue
}
sortOpt := cmp.Transformer("SortMsg", func(in []*EnqueuedTask) []*EnqueuedTask {
@@ -299,11 +291,76 @@ func TestListEnqueued(t *testing.T) {
return out
})
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("r.ListEnqueued() = %v, %v, want %v, nil; (-want, +got)\n%s", got, err, tc.want, diff)
t.Errorf("%s = %v, %v, want %v, nil; (-want, +got)\n%s", op, got, err, tc.want, diff)
continue
}
}
}
func TestListEnqueuedPagination(t *testing.T) {
r := setup(t)
var msgs []*base.TaskMessage
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
msgs = append(msgs, msg)
}
// create 100 tasks in default queue
h.SeedEnqueuedQueue(t, r.client, msgs)
msgs = []*base.TaskMessage(nil) // empty list
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("custom %d", i), nil)
msgs = append(msgs, msg)
}
// create 100 tasks in custom queue
h.SeedEnqueuedQueue(t, r.client, msgs, "custom")
tests := []struct {
desc string
qname string
page int
size int
wantSize int
wantFirst string
wantLast string
}{
{"first page", "default", 0, 20, 20, "task 0", "task 19"},
{"second page", "default", 1, 20, 20, "task 20", "task 39"},
{"different page size", "default", 2, 30, 30, "task 60", "task 89"},
{"last page", "default", 3, 30, 10, "task 90", "task 99"},
{"out of range", "default", 4, 30, 0, "", ""},
{"second page with custom queue", "custom", 1, 20, 20, "custom 20", "custom 39"},
}
for _, tc := range tests {
got, err := r.ListEnqueued(tc.qname, Pagination{Size: tc.size, Page: tc.page})
op := fmt.Sprintf("r.ListEnqueued(%q, Pagination{Size: %d, Page: %d})", tc.qname, tc.size, tc.page)
if err != nil {
t.Errorf("%s; %s returned error %v", tc.desc, op, err)
continue
}
if len(got) != tc.wantSize {
t.Errorf("%s; %s returned a list of size %d, want %d", tc.desc, op, len(got), tc.wantSize)
continue
}
if tc.wantSize == 0 {
continue
}
first := got[0]
if first.Type != tc.wantFirst {
t.Errorf("%s; %s returned a list with first message %q, want %q",
tc.desc, op, first.Type, tc.wantFirst)
}
last := got[len(got)-1]
if last.Type != tc.wantLast {
t.Errorf("%s; %s returned a list with the last message %q, want %q",
tc.desc, op, last.Type, tc.wantLast)
}
}
}
func TestListInProgress(t *testing.T) {
r := setup(t)
@@ -330,9 +387,10 @@ func TestListInProgress(t *testing.T) {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedInProgressQueue(t, r.client, tc.inProgress)
got, err := r.ListInProgress()
got, err := r.ListInProgress(Pagination{Size: 20, Page: 0})
op := "r.ListInProgress(Pagination{Size: 20, Page: 0})"
if err != nil {
t.Errorf("r.ListInProgress() = %v, %v, want %v, nil", got, err, tc.want)
t.Errorf("%s = %v, %v, want %v, nil", op, got, err, tc.want)
continue
}
sortOpt := cmp.Transformer("SortMsg", func(in []*InProgressTask) []*InProgressTask {
@@ -343,12 +401,67 @@ func TestListInProgress(t *testing.T) {
return out
})
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("r.ListInProgress() = %v, %v, want %v, nil; (-want, +got)\n%s", got, err, tc.want, diff)
t.Errorf("%s = %v, %v, want %v, nil; (-want, +got)\n%s", op, got, err, tc.want, diff)
continue
}
}
}
func TestListInProgressPagination(t *testing.T) {
r := setup(t)
var msgs []*base.TaskMessage
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
msgs = append(msgs, msg)
}
h.SeedInProgressQueue(t, r.client, msgs)
tests := []struct {
desc string
page int
size int
wantSize int
wantFirst string
wantLast string
}{
{"first page", 0, 20, 20, "task 0", "task 19"},
{"second page", 1, 20, 20, "task 20", "task 39"},
{"different page size", 2, 30, 30, "task 60", "task 89"},
{"last page", 3, 30, 10, "task 90", "task 99"},
{"out of range", 4, 30, 0, "", ""},
}
for _, tc := range tests {
got, err := r.ListInProgress(Pagination{Size: tc.size, Page: tc.page})
op := fmt.Sprintf("r.ListInProgress(Pagination{Size: %d, Page: %d})", tc.size, tc.page)
if err != nil {
t.Errorf("%s; %s returned error %v", tc.desc, op, err)
continue
}
if len(got) != tc.wantSize {
t.Errorf("%s; %s returned list of size %d, want %d", tc.desc, op, len(got), tc.wantSize)
continue
}
if tc.wantSize == 0 {
continue
}
first := got[0]
if first.Type != tc.wantFirst {
t.Errorf("%s; %s returned a list with first message %q, want %q",
tc.desc, op, first.Type, tc.wantFirst)
}
last := got[len(got)-1]
if last.Type != tc.wantLast {
t.Errorf("%s; %s returned a list with the last message %q, want %q",
tc.desc, op, last.Type, tc.wantLast)
}
}
}
func TestListScheduled(t *testing.T) {
r := setup(t)
m1 := h.NewTaskMessage("send_email", map[string]interface{}{"subject": "hello"})
@@ -379,9 +492,10 @@ func TestListScheduled(t *testing.T) {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedScheduledQueue(t, r.client, tc.scheduled)
got, err := r.ListScheduled()
got, err := r.ListScheduled(Pagination{Size: 20, Page: 0})
op := "r.ListScheduled(Pagination{Size: 20, Page: 0})"
if err != nil {
t.Errorf("r.ListScheduled() = %v, %v, want %v, nil", got, err, tc.want)
t.Errorf("%s = %v, %v, want %v, nil", op, got, err, tc.want)
continue
}
sortOpt := cmp.Transformer("SortMsg", func(in []*ScheduledTask) []*ScheduledTask {
@@ -392,12 +506,68 @@ func TestListScheduled(t *testing.T) {
return out
})
if diff := cmp.Diff(tc.want, got, sortOpt, timeCmpOpt); diff != "" {
t.Errorf("r.ListScheduled() = %v, %v, want %v, nil; (-want, +got)\n%s", got, err, tc.want, diff)
t.Errorf("%s = %v, %v, want %v, nil; (-want, +got)\n%s", op, got, err, tc.want, diff)
continue
}
}
}
func TestListScheduledPagination(t *testing.T) {
r := setup(t)
// create 100 tasks with increasing wait times.
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
if err := r.Schedule(msg, time.Now().Add(time.Duration(i)*time.Second)); err != nil {
t.Fatal(err)
}
}
tests := []struct {
desc string
page int
size int
wantSize int
wantFirst string
wantLast string
}{
{"first page", 0, 20, 20, "task 0", "task 19"},
{"second page", 1, 20, 20, "task 20", "task 39"},
{"different page size", 2, 30, 30, "task 60", "task 89"},
{"last page", 3, 30, 10, "task 90", "task 99"},
{"out of range", 4, 30, 0, "", ""},
}
for _, tc := range tests {
got, err := r.ListScheduled(Pagination{Size: tc.size, Page: tc.page})
op := fmt.Sprintf("r.ListScheduled(Pagination{Size: %d, Page: %d})", tc.size, tc.page)
if err != nil {
t.Errorf("%s; %s returned error %v", tc.desc, op, err)
continue
}
if len(got) != tc.wantSize {
t.Errorf("%s; %s returned list of size %d, want %d", tc.desc, op, len(got), tc.wantSize)
continue
}
if tc.wantSize == 0 {
continue
}
first := got[0]
if first.Type != tc.wantFirst {
t.Errorf("%s; %s returned a list with first message %q, want %q",
tc.desc, op, first.Type, tc.wantFirst)
}
last := got[len(got)-1]
if last.Type != tc.wantLast {
t.Errorf("%s; %s returned a list with the last message %q, want %q",
tc.desc, op, last.Type, tc.wantLast)
}
}
}
func TestListRetry(t *testing.T) {
r := setup(t)
m1 := &base.TaskMessage{
@@ -464,9 +634,10 @@ func TestListRetry(t *testing.T) {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedRetryQueue(t, r.client, tc.retry)
got, err := r.ListRetry()
got, err := r.ListRetry(Pagination{Size: 20, Page: 0})
op := "r.ListRetry(Pagination{Size: 20, Page: 0})"
if err != nil {
t.Errorf("r.ListRetry() = %v, %v, want %v, nil", got, err, tc.want)
t.Errorf("%s = %v, %v, want %v, nil", op, got, err, tc.want)
continue
}
sortOpt := cmp.Transformer("SortMsg", func(in []*RetryTask) []*RetryTask {
@@ -478,12 +649,68 @@ func TestListRetry(t *testing.T) {
})
if diff := cmp.Diff(tc.want, got, sortOpt, timeCmpOpt); diff != "" {
t.Errorf("r.ListRetry() = %v, %v, want %v, nil; (-want, +got)\n%s", got, err, tc.want, diff)
t.Errorf("%s = %v, %v, want %v, nil; (-want, +got)\n%s", op, got, err, tc.want, diff)
continue
}
}
}
func TestListRetryPagination(t *testing.T) {
r := setup(t)
// create 100 tasks with increasing wait times.
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
if err := r.Retry(msg, time.Now().Add(time.Duration(i)*time.Second), "error"); err != nil {
t.Fatal(err)
}
}
tests := []struct {
desc string
page int
size int
wantSize int
wantFirst string
wantLast string
}{
{"first page", 0, 20, 20, "task 0", "task 19"},
{"second page", 1, 20, 20, "task 20", "task 39"},
{"different page size", 2, 30, 30, "task 60", "task 89"},
{"last page", 3, 30, 10, "task 90", "task 99"},
{"out of range", 4, 30, 0, "", ""},
}
for _, tc := range tests {
got, err := r.ListRetry(Pagination{Size: tc.size, Page: tc.page})
op := fmt.Sprintf("r.ListRetry(Pagination{Size: %d, Page: %d})", tc.size, tc.page)
if err != nil {
t.Errorf("%s; %s returned error %v", tc.desc, op, err)
continue
}
if len(got) != tc.wantSize {
t.Errorf("%s; %s returned list of size %d, want %d", tc.desc, op, len(got), tc.wantSize)
continue
}
if tc.wantSize == 0 {
continue
}
first := got[0]
if first.Type != tc.wantFirst {
t.Errorf("%s; %s returned a list with first message %q, want %q",
tc.desc, op, first.Type, tc.wantFirst)
}
last := got[len(got)-1]
if last.Type != tc.wantLast {
t.Errorf("%s; %s returned a list with the last message %q, want %q",
tc.desc, op, last.Type, tc.wantLast)
}
}
}
func TestListDead(t *testing.T) {
r := setup(t)
m1 := &base.TaskMessage{
@@ -542,9 +769,10 @@ func TestListDead(t *testing.T) {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedDeadQueue(t, r.client, tc.dead)
got, err := r.ListDead()
got, err := r.ListDead(Pagination{Size: 20, Page: 0})
op := "r.ListDead(Pagination{Size: 20, Page: 0})"
if err != nil {
t.Errorf("r.ListDead() = %v, %v, want %v, nil", got, err, tc.want)
t.Errorf("%s = %v, %v, want %v, nil", op, got, err, tc.want)
continue
}
sortOpt := cmp.Transformer("SortMsg", func(in []*DeadTask) []*DeadTask {
@@ -555,12 +783,67 @@ func TestListDead(t *testing.T) {
return out
})
if diff := cmp.Diff(tc.want, got, sortOpt, timeCmpOpt); diff != "" {
t.Errorf("r.ListDead() = %v, %v, want %v, nil; (-want, +got)\n%s", got, err, tc.want, diff)
t.Errorf("%s = %v, %v, want %v, nil; (-want, +got)\n%s", op, got, err, tc.want, diff)
continue
}
}
}
func TestListDeadPagination(t *testing.T) {
r := setup(t)
var entries []h.ZSetEntry
for i := 0; i < 100; i++ {
msg := h.NewTaskMessage(fmt.Sprintf("task %d", i), nil)
entries = append(entries, h.ZSetEntry{Msg: msg, Score: float64(i)})
}
h.SeedDeadQueue(t, r.client, entries)
tests := []struct {
desc string
page int
size int
wantSize int
wantFirst string
wantLast string
}{
{"first page", 0, 20, 20, "task 0", "task 19"},
{"second page", 1, 20, 20, "task 20", "task 39"},
{"different page size", 2, 30, 30, "task 60", "task 89"},
{"last page", 3, 30, 10, "task 90", "task 99"},
{"out of range", 4, 30, 0, "", ""},
}
for _, tc := range tests {
got, err := r.ListDead(Pagination{Size: tc.size, Page: tc.page})
op := fmt.Sprintf("r.ListDead(Pagination{Size: %d, Page: %d})", tc.size, tc.page)
if err != nil {
t.Errorf("%s; %s returned error %v", tc.desc, op, err)
continue
}
if len(got) != tc.wantSize {
t.Errorf("%s; %s returned list of size %d, want %d", tc.desc, op, len(got), tc.wantSize)
continue
}
if tc.wantSize == 0 {
continue
}
first := got[0]
if first.Type != tc.wantFirst {
t.Errorf("%s; %s returned a list with first message %q, want %q",
tc.desc, op, first.Type, tc.wantFirst)
}
last := got[len(got)-1]
if last.Type != tc.wantLast {
t.Errorf("%s; %s returned a list with the last message %q, want %q",
tc.desc, op, last.Type, tc.wantLast)
}
}
}
var timeCmpOpt = cmpopts.EquateApproxTime(time.Second)
func TestEnqueueDeadTask(t *testing.T) {
@@ -1767,3 +2050,140 @@ func TestRemoveQueueError(t *testing.T) {
}
}
}
func TestListProcesses(t *testing.T) {
r := setup(t)
started1 := time.Now().Add(-time.Hour)
ps1 := base.NewProcessState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false)
ps1.SetStarted(started1)
ps1.SetStatus(base.StatusRunning)
info1 := &base.ProcessInfo{
Concurrency: 10,
Queues: map[string]int{"default": 1},
Host: "do.droplet1",
PID: 1234,
Status: "running",
Started: started1,
ActiveWorkerCount: 0,
}
started2 := time.Now().Add(-2 * time.Hour)
ps2 := base.NewProcessState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false)
ps2.SetStarted(started2)
ps2.SetStatus(base.StatusStopped)
ps2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
info2 := &base.ProcessInfo{
Concurrency: 20,
Queues: map[string]int{"email": 1},
Host: "do.droplet2",
PID: 9876,
Status: "stopped",
Started: started2,
ActiveWorkerCount: 1,
}
tests := []struct {
processes []*base.ProcessState
want []*base.ProcessInfo
}{
{
processes: []*base.ProcessState{},
want: []*base.ProcessInfo{},
},
{
processes: []*base.ProcessState{ps1},
want: []*base.ProcessInfo{info1},
},
{
processes: []*base.ProcessState{ps1, ps2},
want: []*base.ProcessInfo{info1, info2},
},
}
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
for _, tc := range tests {
h.FlushDB(t, r.client)
for _, ps := range tc.processes {
if err := r.WriteProcessState(ps, 5*time.Second); err != nil {
t.Fatal(err)
}
}
got, err := r.ListProcesses()
if err != nil {
t.Errorf("r.ListProcesses returned an error: %v", err)
}
if diff := cmp.Diff(tc.want, got, h.SortProcessInfoOpt, ignoreOpt); diff != "" {
t.Errorf("r.ListProcesses returned %v, want %v; (-want,+got)\n%s",
got, tc.want, diff)
}
}
}
func TestListWorkers(t *testing.T) {
r := setup(t)
const (
host = "127.0.0.1"
pid = 4567
)
m1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "abc123"})
m2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/image/file"})
m3 := h.NewTaskMessage("reindex", map[string]interface{}{})
t1 := time.Now().Add(-time.Second)
t2 := time.Now().Add(-10 * time.Second)
t3 := time.Now().Add(-time.Minute)
type workerStats struct {
msg *base.TaskMessage
started time.Time
}
tests := []struct {
workers []*workerStats
want []*base.WorkerInfo
}{
{
workers: []*workerStats{
{m1, t1},
{m2, t2},
{m3, t3},
},
want: []*base.WorkerInfo{
{Host: host, PID: pid, ID: m1.ID, Type: m1.Type, Queue: m1.Queue, Payload: m1.Payload, Started: t1},
{Host: host, PID: pid, ID: m2.ID, Type: m2.Type, Queue: m2.Queue, Payload: m2.Payload, Started: t2},
{Host: host, PID: pid, ID: m3.ID, Type: m3.Type, Queue: m3.Queue, Payload: m3.Payload, Started: t3},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r.client)
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false)
for _, w := range tc.workers {
ps.AddWorkerStats(w.msg, w.started)
}
err := r.WriteProcessState(ps, time.Minute)
if err != nil {
t.Errorf("could not write process state to redis: %v", err)
continue
}
got, err := r.ListWorkers()
if err != nil {
t.Errorf("(*RDB).ListWorkers() returned an error: %v", err)
continue
}
if diff := cmp.Diff(tc.want, got, h.SortWorkerInfoOpt); diff != "" {
t.Errorf("(*RDB).ListWorkers() = %v, want = %v; (-want,+got)\n%s", got, tc.want, diff)
}
}
}

View File

@@ -22,6 +22,9 @@ var (
// ErrTaskNotFound indicates that a task that matches the given identifier was not found.
ErrTaskNotFound = errors.New("could not find a task")
// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
ErrDuplicateTask = errors.New("task already exists")
)
const statsTTL = 90 * 24 * time.Hour // 90 days
@@ -41,6 +44,14 @@ func (r *RDB) Close() error {
return r.client.Close()
}
// KEYS[1] -> asynq:queues:<qname>
// KEYS[2] -> asynq:queues
// ARGV[1] -> task message data
var enqueueCmd = redis.NewScript(`
redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1`)
// Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
@@ -48,17 +59,51 @@ func (r *RDB) Enqueue(msg *base.TaskMessage) error {
return err
}
key := base.QueueKey(msg.Queue)
script := redis.NewScript(`
redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1
`)
return script.Run(r.client, []string{key, base.AllQueues}, string(bytes)).Err()
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, bytes).Err()
}
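A hedged usage sketch (field values are illustrative only):
msg := &base.TaskMessage{
	ID:      xid.New(),
	Type:    "email:welcome",
	Payload: map[string]interface{}{"user_id": 42},
	Queue:   "default",
}
// LPUSHes onto asynq:queues:default and registers that key in asynq:queues.
err := r.Enqueue(msg)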
// Dequeue queries given queues in order and pops a task message if there
// is one and returns it. If all queues are empty, ErrNoProcessableTask
// error is returned.
// KEYS[1] -> unique key in the form <type>:<payload>:<qname>
// KEYS[2] -> asynq:queues:<qname>
// KEYS[3] -> asynq:queues
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> task message data
var enqueueUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("LPUSH", KEYS[2], ARGV[3])
redis.call("SADD", KEYS[3], KEYS[2])
return 1
`)
// EnqueueUnique inserts the given task if the task's uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
key := base.QueueKey(msg.Queue)
res, err := enqueueUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, key, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), bytes).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
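A hedged sketch of the duplicate-rejection behavior (assuming msg carries a non-empty UniqueKey, as in the tests below):
// The first call acquires the lock and enqueues; a second equivalent
// task within ttl is rejected.
if err := r.EnqueueUnique(msg, time.Minute); err == ErrDuplicateTask {
	// an identical task already holds the uniqueness lock; skip it
}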
// Dequeue queries given queues in order and pops a task message if there is one and returns it.
// If all queues are empty, ErrNoProcessableTask error is returned.
func (r *RDB) Dequeue(qnames ...string) (*base.TaskMessage, error) {
var data string
var err error
@@ -90,87 +135,172 @@ func (r *RDB) dequeueSingle(queue string) (data string, err error) {
return r.client.BRPopLPush(queue, base.InProgressQueue, time.Second).Result()
}
// KEYS[1] -> asynq:in_progress
// ARGV -> List of queues to query in order
var dequeueCmd = redis.NewScript(`
local res
for _, qkey in ipairs(ARGV) do
res = redis.call("RPOPLPUSH", qkey, KEYS[1])
if res then
return res
end
end
return res`)
func (r *RDB) dequeue(queues ...string) (data string, err error) {
var args []interface{}
for _, qkey := range queues {
args = append(args, qkey)
}
script := redis.NewScript(`
local res
for _, qkey in ipairs(ARGV) do
res = redis.call("RPOPLPUSH", qkey, KEYS[1])
if res then
return res
end
end
return res
`)
res, err := script.Run(r.client, []string{base.InProgressQueue}, args...).Result()
res, err := dequeueCmd.Run(r.client, []string{base.InProgressQueue}, args...).Result()
if err != nil {
return "", err
}
return cast.ToStringE(res)
}
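A very rough sketch of how a processor might drive these primitives (the argument form and error handling are assumptions, not taken from this diff):
for {
	msg, err := r.Dequeue("critical", "default") // assumed queue-name form
	if err != nil {
		continue // e.g. ErrNoProcessableTask: nothing ready, poll again
	}
	// ... run the handler for msg ...
	_ = r.Done(msg) // or r.Retry / r.Kill on failure
}
Because RPOPLPUSH moves the message into asynq:in_progress atomically, a crash mid-handler leaves the task recoverable by RequeueAll.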
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd>
// KEYS[3] -> unique key in the format <type>:<payload>:<qname>
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID
// Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
local n = redis.call("INCR", KEYS[2])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
end
if string.len(KEYS[3]) > 0 and redis.call("GET", KEYS[3]) == ARGV[3] then
redis.call("DEL", KEYS[3])
end
return redis.status_reply("OK")
`)
// Done removes the task from in-progress queue to mark the task as done.
// It removes a uniqueness lock acquired by the task, if any.
func (r *RDB) Done(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
// Note: LREM count ZERO means "remove all elements equal to val"
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:processed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
script := redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
local n = redis.call("INCR", KEYS[2])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[2], ARGV[2])
end
return redis.status_reply("OK")
`)
now := time.Now()
processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL)
return script.Run(r.client,
[]string{base.InProgressQueue, processedKey},
string(bytes), expireAt.Unix()).Err()
return doneCmd.Run(r.client,
[]string{base.InProgressQueue, processedKey, msg.UniqueKey},
bytes, expireAt.Unix(), msg.ID.String()).Err()
}
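One subtlety worth spelling out (inferred from the script above): the uniqueness lock is deleted only when its stored value still equals this task's ID, a compare-and-delete that keeps Done from releasing a lock a newer duplicate task has since acquired.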
// Requeue moves the task from in-progress queue to the default
// queue.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:<qname>
// ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("RPUSH", KEYS[2], ARGV[1])
return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
// Note: Use RPUSH to push to the head of the queue.
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:queues:default
// ARGV[1] -> base.TaskMessage value
script := redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("RPUSH", KEYS[2], ARGV[1])
return redis.status_reply("OK")
`)
return script.Run(r.client,
[]string{base.InProgressQueue, base.DefaultQueue},
return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.QueueKey(msg.Queue)},
string(bytes)).Err()
}
// KEYS[1] -> asynq:scheduled
// KEYS[2] -> asynq:queues
// ARGV[1] -> score (process_at timestamp)
// ARGV[2] -> task message
// ARGV[3] -> queue key
var scheduleCmd = redis.NewScript(`
redis.call("ZADD", KEYS[1], ARGV[1], ARGV[2])
redis.call("SADD", KEYS[2], ARGV[3])
return 1
`)
// Schedule adds the task to the backlog queue to be processed in the future.
func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
qkey := base.QueueKey(msg.Queue)
score := float64(processAt.Unix())
return r.client.ZAdd(base.ScheduledQueue,
&redis.Z{Member: string(bytes), Score: score}).Err()
return scheduleCmd.Run(r.client,
[]string{base.ScheduledQueue, base.AllQueues},
score, bytes, qkey).Err()
}
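A hedged usage sketch (the delay is illustrative):
// Process the task 15 minutes from now; the message lands in
// asynq:scheduled with its process-at time as the score.
err := r.Schedule(msg, time.Now().Add(15*time.Minute))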
// KEYS[1] -> unique key in the format <type>:<payload>:<qname>
// KEYS[2] -> asynq:scheduled
// KEYS[3] -> asynq:queues
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message
// ARGV[5] -> queue key
var scheduleUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[4])
redis.call("SADD", KEYS[3], ARGV[5])
return 1
`)
// ScheduleUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
bytes, err := json.Marshal(msg)
if err != nil {
return err
}
qkey := base.QueueKey(msg.Queue)
score := float64(processAt.Unix())
res, err := scheduleUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.ScheduledQueue, base.AllQueues},
msg.ID.String(), int(ttl.Seconds()), score, bytes, qkey).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return fmt.Errorf("could not cast %v to int64", res)
}
if n == 0 {
return ErrDuplicateTask
}
return nil
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp
var retryCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4])
end
return redis.status_reply("OK")`)
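The INCR-then-EXPIREAT pair above is the usual idiom for dated counters: only the call that creates the key (n == 1) sets its expiration, so each asynq:processed:<yyyy-mm-dd> and asynq:failure:<yyyy-mm-dd> key lives for statsTTL (90 days) from first use.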
// Retry moves the task from in-progress to retry queue, incrementing retry count
// and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
@@ -185,32 +315,11 @@ func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) e
if err != nil {
return err
}
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:retry
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp
script := redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[4])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[4])
end
return redis.status_reply("OK")
`)
now := time.Now()
processedKey := base.ProcessedKey(now)
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
return script.Run(r.client,
return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.RetryQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), processAt.Unix(), expireAt.Unix()).Err()
}
@@ -220,6 +329,31 @@ const (
deadExpirationInDays = 90
)
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:dead
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp
var killCmd = redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6])
end
return redis.status_reply("OK")`)
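Note the two trims on the dead queue above (as written in the script): ZREMRANGEBYSCORE drops entries that died before the 90-day cutoff, and ZREMRANGEBYRANK caps the set so only the newest entries, up to roughly maxDeadTasks, are retained.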
// Kill moves the task from the in-progress queue to the "dead" queue, assigning
// the error message to the task.
// It also trims the set by timestamp and set size.
@@ -239,47 +373,27 @@ func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
processedKey := base.ProcessedKey(now)
failureKey := base.FailureKey(now)
expireAt := now.Add(statsTTL)
// KEYS[1] -> asynq:in_progress
// KEYS[2] -> asynq:dead
// KEYS[3] -> asynq:processed:<yyyy-mm-dd>
// KEYS[4] -> asynq:failure:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[5] -> max number of tasks in dead queue (e.g., 100)
// ARGV[6] -> stats expiration timestamp
script := redis.NewScript(`
redis.call("LREM", KEYS[1], 0, ARGV[1])
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[4])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[5])
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[6])
end
local m = redis.call("INCR", KEYS[4])
if tonumber(m) == 1 then
redis.call("EXPIREAT", KEYS[4], ARGV[6])
end
return redis.status_reply("OK")
`)
return script.Run(r.client,
return killCmd.Run(r.client,
[]string{base.InProgressQueue, base.DeadQueue, processedKey, failureKey},
string(bytesToRemove), string(bytesToAdd), now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err()
}
// RestoreUnfinished moves all tasks from in-progress list to the queue
// KEYS[1] -> asynq:in_progress
// ARGV[1] -> queue prefix
var requeueAllCmd = redis.NewScript(`
local msgs = redis.call("LRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("RPUSH", qkey, msg)
redis.call("LREM", KEYS[1], 0, msg)
end
return table.getn(msgs)`)
// RequeueAll moves all tasks from the in-progress list back to their respective queues
// and reports the number of tasks restored.
func (r *RDB) RestoreUnfinished() (int64, error) {
script := redis.NewScript(`
local len = redis.call("LLEN", KEYS[1])
for i = len, 1, -1 do
redis.call("RPOPLPUSH", KEYS[1], KEYS[2])
end
return len
`)
res, err := script.Run(r.client, []string{base.InProgressQueue, base.DefaultQueue}).Result()
func (r *RDB) RequeueAll() (int64, error) {
res, err := requeueAllCmd.Run(r.client, []string{base.InProgressQueue}, base.QueuePrefix).Result()
if err != nil {
return 0, err
}
@@ -310,36 +424,124 @@ func (r *RDB) CheckAndEnqueue(qnames ...string) error {
return nil
}
// forward moves all tasks with a score less than the current unix time
// from the src zset.
func (r *RDB) forward(src string) error {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// ARGV[1] -> current unix time
// ARGV[2] -> queue prefix
var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg)
local qkey = ARGV[2] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
end
return msgs
`)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forward moves all tasks with a score less than the current unix time
// from the src zset.
func (r *RDB) forward(src string) error {
now := float64(time.Now().Unix())
return script.Run(r.client,
return forwardCmd.Run(r.client,
[]string{src}, now, base.QueuePrefix).Err()
}
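Presumably forward is what CheckAndEnqueue (elided above) runs against the scheduled and retry ZSETs: any member whose score, its process-at time, is at or below now gets LPUSHed back onto its own queue in one script call.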
// KEYS[1] -> source queue (e.g. scheduled or retry queue)
// KEYS[2] -> destination queue
var forwardSingleCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("LPUSH", KEYS[2], msg)
redis.call("ZREM", KEYS[1], msg)
end
return msgs`)
// forwardSingle moves all tasks with a score less than the current unix time
// from the src zset to dst list.
func (r *RDB) forwardSingle(src, dst string) error {
script := redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1])
for _, msg in ipairs(msgs) do
redis.call("ZREM", KEYS[1], msg)
redis.call("LPUSH", KEYS[2], msg)
end
return msgs
`)
now := float64(time.Now().Unix())
return script.Run(r.client,
return forwardSingleCmd.Run(r.client,
[]string{src, dst}, now).Err()
}
// KEYS[1] -> asynq:ps:<host:pid>
// KEYS[2] -> asynq:ps
// KEYS[3] -> asynq:workers:<host:pid>
// KEYS[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
// ARGV[3] -> process info
// ARGV[4:] -> alternate key-value pair of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeProcessInfoCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1])
redis.call("DEL", KEYS[3])
for i = 4, table.getn(ARGV)-1, 2 do
redis.call("HSET", KEYS[3], ARGV[i], ARGV[i+1])
end
redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`)
// WriteProcessState writes process state data to redis with expiration set to the value ttl.
func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error {
info := ps.Get()
bytes, err := json.Marshal(info)
if err != nil {
return err
}
var args []interface{} // args to the lua script
exp := time.Now().Add(ttl).UTC()
workers := ps.GetWorkers()
args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
for _, w := range workers {
bytes, err := json.Marshal(w)
if err != nil {
continue // skip bad data
}
args = append(args, w.ID.String(), bytes)
}
pkey := base.ProcessInfoKey(info.Host, info.PID)
wkey := base.WorkersKey(info.Host, info.PID)
return writeProcessInfoCmd.Run(r.client,
[]string{pkey, base.AllProcesses, wkey, base.AllWorkers},
args...).Err()
}
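For concreteness (derived from the args built above), with two workers the script receives:
// ARGV[1] = expiration unix time   ARGV[2] = ttl in seconds
// ARGV[3] = process info JSON
// ARGV[4] = worker1 ID  ARGV[5] = worker1 JSON
// ARGV[6] = worker2 ID  ARGV[7] = worker2 JSON
// and the Lua loop consumes the pairs two at a time starting at index 4.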
// KEYS[1] -> asynq:ps
// KEYS[2] -> asynq:ps:<host:pid>
// KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers:<host:pid>
var clearProcessInfoCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`)
// ClearProcessState deletes process state data from redis.
func (r *RDB) ClearProcessState(ps *base.ProcessState) error {
info := ps.Get()
host, pid := info.Host, info.PID
pkey := base.ProcessInfoKey(host, pid)
wkey := base.WorkersKey(host, pid)
return clearProcessInfoCmd.Run(r.client,
[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err()
}
// CancelationPubSub returns a pubsub for cancelation messages.
func (r *RDB) CancelationPubSub() (*redis.PubSub, error) {
pubsub := r.client.Subscribe(base.CancelChannel)
_, err := pubsub.Receive()
if err != nil {
return nil, err
}
return pubsub, nil
}
// PublishCancelation publishes a cancelation message to all subscribers.
// The message is the ID for the task to be canceled.
func (r *RDB) PublishCancelation(id string) error {
return r.client.Publish(base.CancelChannel, id).Err()
}
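A hedged sketch of the consuming side (the handler wiring is assumed, not shown in this diff):
pubsub, err := r.CancelationPubSub()
if err != nil {
	return err
}
for m := range pubsub.Channel() {
	// m.Payload is the task ID published via PublishCancelation;
	// look up and invoke the matching context.CancelFunc here.
	cancel(m.Payload) // hypothetical helper
}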

View File

@@ -5,14 +5,18 @@
package rdb
import (
"encoding/json"
"fmt"
"sync"
"testing"
"time"
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
)
// TODO(hibiken): Get Redis address and db number from ENV variables.
@@ -66,6 +70,48 @@ func TestEnqueue(t *testing.T) {
}
}
func TestEnqueueUnique(t *testing.T) {
r := setup(t)
m1 := base.TaskMessage{
ID: xid.New(),
Type: "email",
Payload: map[string]interface{}{"user_id": 123},
Queue: base.DefaultQueueName,
UniqueKey: "email:user_id=123:default",
}
tests := []struct {
msg *base.TaskMessage
ttl time.Duration // uniqueness ttl
}{
{&m1, time.Minute},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case.
err := r.EnqueueUnique(tc.msg, tc.ttl)
if err != nil {
t.Errorf("First message: (*RDB).EnqueueUnique(%v, %v) = %v, want nil",
tc.msg, tc.ttl, err)
continue
}
got := r.EnqueueUnique(tc.msg, tc.ttl)
if got != ErrDuplicateTask {
t.Errorf("Second message: (*RDB).EnqueueUnique(%v, %v) = %v, want %v",
tc.msg, tc.ttl, got, ErrDuplicateTask)
continue
}
gotTTL := r.client.TTL(tc.msg.UniqueKey).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL %q = %v, want %v", tc.msg.UniqueKey, gotTTL, tc.ttl)
continue
}
}
}
func TestDequeue(t *testing.T) {
r := setup(t)
t1 := h.NewTaskMessage("send_email", map[string]interface{}{"subject": "hello!"})
@@ -185,6 +231,13 @@ func TestDone(t *testing.T) {
r := setup(t)
t1 := h.NewTaskMessage("send_email", nil)
t2 := h.NewTaskMessage("export_csv", nil)
t3 := &base.TaskMessage{
ID: xid.New(),
Type: "reindex",
Payload: nil,
UniqueKey: "reindex:nil:default",
Queue: "default",
}
tests := []struct {
inProgress []*base.TaskMessage // initial state of the in-progress list
@@ -201,11 +254,25 @@ func TestDone(t *testing.T) {
target: t1,
wantInProgress: []*base.TaskMessage{},
},
{
inProgress: []*base.TaskMessage{t1, t2, t3},
target: t3,
wantInProgress: []*base.TaskMessage{t1, t2},
},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedInProgressQueue(t, r.client, tc.inProgress)
for _, msg := range tc.inProgress {
// Set uniqueness lock if unique key is present.
if len(msg.UniqueKey) > 0 {
err := r.client.SetNX(msg.UniqueKey, msg.ID.String(), time.Minute).Err()
if err != nil {
t.Fatal(err)
}
}
}
err := r.Done(tc.target)
if err != nil {
@@ -229,6 +296,10 @@ func TestDone(t *testing.T) {
if gotTTL > statsTTL {
t.Errorf("TTL %q = %v, want less than or equal to %v", processedKey, gotTTL, statsTTL)
}
if len(tc.target.UniqueKey) > 0 && r.client.Exists(tc.target.UniqueKey).Val() != 0 {
t.Errorf("Uniqueness lock %q still exists", tc.target.UniqueKey)
}
}
}
@@ -236,33 +307,57 @@ func TestRequeue(t *testing.T) {
r := setup(t)
t1 := h.NewTaskMessage("send_email", nil)
t2 := h.NewTaskMessage("export_csv", nil)
t3 := h.NewTaskMessageWithQueue("send_email", nil, "critical")
tests := []struct {
enqueued []*base.TaskMessage // initial state of the default queue
enqueued map[string][]*base.TaskMessage // initial state of queues
inProgress []*base.TaskMessage // initial state of the in-progress list
target *base.TaskMessage // task to requeue
wantEnqueued []*base.TaskMessage // final state of the default queue
wantEnqueued map[string][]*base.TaskMessage // final state of queues
wantInProgress []*base.TaskMessage // final state of the in-progress list
}{
{
enqueued: []*base.TaskMessage{},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {},
},
inProgress: []*base.TaskMessage{t1, t2},
target: t1,
wantEnqueued: []*base.TaskMessage{t1},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
},
wantInProgress: []*base.TaskMessage{t2},
},
{
enqueued: []*base.TaskMessage{t1},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
},
inProgress: []*base.TaskMessage{t2},
target: t2,
wantEnqueued: []*base.TaskMessage{t1, t2},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2},
},
wantInProgress: []*base.TaskMessage{},
},
{
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
"critical": {},
},
inProgress: []*base.TaskMessage{t2, t3},
target: t3,
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
"critical": {t3},
},
wantInProgress: []*base.TaskMessage{t2},
},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedEnqueuedQueue(t, r.client, tc.enqueued)
for qname, msgs := range tc.enqueued {
h.SeedEnqueuedQueue(t, r.client, msgs, qname)
}
h.SeedInProgressQueue(t, r.client, tc.inProgress)
err := r.Requeue(tc.target)
@@ -271,9 +366,11 @@ func TestRequeue(t *testing.T) {
continue
}
gotEnqueued := h.GetEnqueuedMessages(t, r.client)
if diff := cmp.Diff(tc.wantEnqueued, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q: (-want, +got):\n%s", base.DefaultQueue, diff)
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r.client, qname)
if diff := cmp.Diff(want, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q; (-want, +got)\n%s", base.QueueKey(qname), diff)
}
}
gotInProgress := h.GetInProgressMessages(t, r.client)
@@ -315,6 +412,58 @@ func TestSchedule(t *testing.T) {
}
}
func TestScheduleUnique(t *testing.T) {
r := setup(t)
m1 := base.TaskMessage{
ID: xid.New(),
Type: "email",
Payload: map[string]interface{}{"user_id": 123},
Queue: base.DefaultQueueName,
UniqueKey: "email:user_id=123:default",
}
tests := []struct {
msg *base.TaskMessage
processAt time.Time
ttl time.Duration // uniqueness lock ttl
}{
{&m1, time.Now().Add(15 * time.Minute), time.Minute},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case
desc := fmt.Sprintf("(*RDB).ScheduleUnique(%v, %v, %v)", tc.msg, tc.processAt, tc.ttl)
err := r.ScheduleUnique(tc.msg, tc.processAt, tc.ttl)
if err != nil {
t.Errorf("Frist task: %s = %v, want nil", desc, err)
continue
}
gotScheduled := h.GetScheduledEntries(t, r.client)
if len(gotScheduled) != 1 {
t.Errorf("%s inserted %d items to %q, want 1 items inserted", desc, len(gotScheduled), base.ScheduledQueue)
continue
}
if int64(gotScheduled[0].Score) != tc.processAt.Unix() {
t.Errorf("%s inserted an item with score %d, want %d", desc, int64(gotScheduled[0].Score), tc.processAt.Unix())
continue
}
got := r.ScheduleUnique(tc.msg, tc.processAt, tc.ttl)
if got != ErrDuplicateTask {
t.Errorf("Second task: %s = %v, want %v",
desc, got, ErrDuplicateTask)
}
gotTTL := r.client.TTL(tc.msg.UniqueKey).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL %q = %v, want %v", tc.msg.UniqueKey, gotTTL, tc.ttl)
continue
}
}
}
func TestRetry(t *testing.T) {
r := setup(t)
t1 := h.NewTaskMessage("send_email", map[string]interface{}{"subject": "Hola!"})
@@ -513,50 +662,81 @@ func TestKill(t *testing.T) {
}
}
func TestRestoreUnfinished(t *testing.T) {
func TestRequeueAll(t *testing.T) {
r := setup(t)
t1 := h.NewTaskMessage("send_email", nil)
t2 := h.NewTaskMessage("export_csv", nil)
t3 := h.NewTaskMessage("sync_stuff", nil)
t4 := h.NewTaskMessageWithQueue("important", nil, "critical")
t5 := h.NewTaskMessageWithQueue("minor", nil, "low")
tests := []struct {
inProgress []*base.TaskMessage
enqueued []*base.TaskMessage
enqueued map[string][]*base.TaskMessage
want int64
wantInProgress []*base.TaskMessage
wantEnqueued []*base.TaskMessage
wantEnqueued map[string][]*base.TaskMessage
}{
{
inProgress: []*base.TaskMessage{t1, t2, t3},
enqueued: []*base.TaskMessage{},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {},
},
want: 3,
wantInProgress: []*base.TaskMessage{},
wantEnqueued: []*base.TaskMessage{t1, t2, t3},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2, t3},
},
},
{
inProgress: []*base.TaskMessage{},
enqueued: []*base.TaskMessage{t1, t2, t3},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2, t3},
},
want: 0,
wantInProgress: []*base.TaskMessage{},
wantEnqueued: []*base.TaskMessage{t1, t2, t3},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2, t3},
},
},
{
inProgress: []*base.TaskMessage{t2, t3},
enqueued: []*base.TaskMessage{t1},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
},
want: 2,
wantInProgress: []*base.TaskMessage{},
wantEnqueued: []*base.TaskMessage{t1, t2, t3},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2, t3},
},
},
{
inProgress: []*base.TaskMessage{t2, t3, t4, t5},
enqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1},
"critical": {},
"low": {},
},
want: 4,
wantInProgress: []*base.TaskMessage{},
wantEnqueued: map[string][]*base.TaskMessage{
base.DefaultQueueName: {t1, t2, t3},
"critical": {t4},
"low": {t5},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r.client) // clean up db before each test case
h.SeedInProgressQueue(t, r.client, tc.inProgress)
h.SeedEnqueuedQueue(t, r.client, tc.enqueued)
for qname, msgs := range tc.enqueued {
h.SeedEnqueuedQueue(t, r.client, msgs, qname)
}
got, err := r.RestoreUnfinished()
got, err := r.RequeueAll()
if got != tc.want || err != nil {
t.Errorf("(*RDB).RestoreUnfinished() = %v %v, want %v nil", got, err, tc.want)
t.Errorf("(*RDB).RequeueAll() = %v %v, want %v nil", got, err, tc.want)
continue
}
@@ -565,9 +745,11 @@ func TestRestoreUnfinished(t *testing.T) {
t.Errorf("mismatch found in %q: (-want, +got):\n%s", base.InProgressQueue, diff)
}
gotEnqueued := h.GetEnqueuedMessages(t, r.client)
if diff := cmp.Diff(tc.wantEnqueued, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q: (-want, +got):\n%s", base.DefaultQueue, diff)
for qname, want := range tc.wantEnqueued {
gotEnqueued := h.GetEnqueuedMessages(t, r.client, qname)
if diff := cmp.Diff(want, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q: (-want, +got):\n%s", base.QueueKey(qname), diff)
}
}
}
}
@@ -679,3 +861,277 @@ func TestCheckAndEnqueue(t *testing.T) {
}
}
}
func TestWriteProcessState(t *testing.T) {
r := setup(t)
host, pid := "localhost", 98765
queues := map[string]int{"default": 2, "email": 5, "low": 1}
started := time.Now()
ps := base.NewProcessState(host, pid, 10, queues, false)
ps.SetStarted(started)
ps.SetStatus(base.StatusRunning)
ttl := 5 * time.Second
h.FlushDB(t, r.client)
err := r.WriteProcessState(ps, ttl)
if err != nil {
t.Errorf("r.WriteProcessState returned an error: %v", err)
}
// Check ProcessInfo was written correctly
pkey := base.ProcessInfoKey(host, pid)
data := r.client.Get(pkey).Val()
var got base.ProcessInfo
err = json.Unmarshal([]byte(data), &got)
if err != nil {
t.Fatalf("could not decode json: %v", err)
}
want := base.ProcessInfo{
Host: "localhost",
PID: 98765,
Concurrency: 10,
Queues: map[string]int{"default": 2, "email": 5, "low": 1},
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 0,
}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s",
got, want, diff)
}
// Check ProcessInfo TTL was set correctly
gotTTL := r.client.TTL(pkey).Val()
if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl)
}
// Check ProcessInfo key was added to the set correctly
gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val()
wantProcesses := []string{pkey}
if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses)
}
// Check WorkersInfo was written correctly
wkey := base.WorkersKey(host, pid)
workerExist := r.client.Exists(wkey).Val()
if workerExist != 0 {
t.Errorf("%q key exists", wkey)
}
// Check WorkersInfo key was added to the set correctly
gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val()
wantWorkerKeys := []string{wkey}
if diff := cmp.Diff(wantWorkerKeys, gotWorkerKeys); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllWorkers, gotWorkerKeys, wantWorkerKeys)
}
}
func TestWriteProcessStateWithWorkers(t *testing.T) {
r := setup(t)
host, pid := "localhost", 98765
queues := map[string]int{"default": 2, "email": 5, "low": 1}
concurrency := 10
started := time.Now().Add(-10 * time.Minute)
w1Started := time.Now().Add(-time.Minute)
w2Started := time.Now().Add(-time.Second)
msg1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "123"})
msg2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/imgfile"})
ps := base.NewProcessState(host, pid, concurrency, queues, false)
ps.SetStarted(started)
ps.SetStatus(base.StatusRunning)
ps.AddWorkerStats(msg1, w1Started)
ps.AddWorkerStats(msg2, w2Started)
ttl := 5 * time.Second
h.FlushDB(t, r.client)
err := r.WriteProcessState(ps, ttl)
if err != nil {
t.Errorf("r.WriteProcessState returned an error: %v", err)
}
// Check ProcessInfo was written correctly
pkey := base.ProcessInfoKey(host, pid)
data := r.client.Get(pkey).Val()
var got base.ProcessInfo
err = json.Unmarshal([]byte(data), &got)
if err != nil {
t.Fatalf("could not decode json: %v", err)
}
want := base.ProcessInfo{
Host: host,
PID: pid,
Concurrency: concurrency,
Queues: queues,
StrictPriority: false,
Status: "running",
Started: started,
ActiveWorkerCount: 2,
}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s",
got, want, diff)
}
// Check ProcessInfo TTL was set correctly
gotTTL := r.client.TTL(pkey).Val()
if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl)
}
// Check ProcessInfo key was added to the set correctly
gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val()
wantProcesses := []string{pkey}
if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses)
}
// Check WorkersInfo was written correctly
wkey := base.WorkersKey(host, pid)
wdata := r.client.HGetAll(wkey).Val()
if len(wdata) != 2 {
t.Fatalf("HGETALL %q returned a hash of size %d, want 2", wkey, len(wdata))
}
gotWorkers := make(map[string]*base.WorkerInfo)
for key, val := range wdata {
var w base.WorkerInfo
if err := json.Unmarshal([]byte(val), &w); err != nil {
t.Fatalf("could not unmarshal worker's data: %v", err)
}
gotWorkers[key] = &w
}
wantWorkers := map[string]*base.WorkerInfo{
msg1.ID.String(): {
Host: host,
PID: pid,
ID: msg1.ID,
Type: msg1.Type,
Queue: msg1.Queue,
Payload: msg1.Payload,
Started: w1Started,
},
msg2.ID.String(): {
Host: host,
PID: pid,
ID: msg2.ID,
Type: msg2.Type,
Queue: msg2.Queue,
Payload: msg2.Payload,
Started: w2Started,
},
}
if diff := cmp.Diff(wantWorkers, gotWorkers); diff != "" {
t.Errorf("persisted workers info was %v, want %v; (-want,+got)\n%s",
gotWorkers, wantWorkers, diff)
}
// Check WorkersInfo TTL was set correctly
gotTTL = r.client.TTL(wkey).Val()
if !cmp.Equal(ttl, gotTTL, timeCmpOpt) {
t.Errorf("TTL of %q was %v, want %v", wkey, gotTTL, ttl)
}
// Check WorkersInfo key was added to the set correctly
gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val()
wantWorkerKeys := []string{wkey}
if diff := cmp.Diff(wantWorkerKeys, gotWorkerKeys); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllWorkers, gotWorkerKeys, wantWorkerKeys)
}
}
func TestClearProcessState(t *testing.T) {
r := setup(t)
host, pid := "127.0.0.1", 1234
h.FlushDB(t, r.client)
pkey := base.ProcessInfoKey(host, pid)
wkey := base.WorkersKey(host, pid)
otherPKey := base.ProcessInfoKey("otherhost", 12345)
otherWKey := base.WorkersKey("otherhost", 12345)
// Populate the keys.
if err := r.client.Set(pkey, "process-info", 0).Err(); err != nil {
t.Fatal(err)
}
if err := r.client.HSet(wkey, "worker-key", "worker-info").Err(); err != nil {
t.Fatal(err)
}
if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: pkey}).Err(); err != nil {
t.Fatal(err)
}
if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: otherPKey}).Err(); err != nil {
t.Fatal(err)
}
if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Member: wkey}).Err(); err != nil {
t.Fatal(err)
}
if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Member: otherWKey}).Err(); err != nil {
t.Fatal(err)
}
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false)
err := r.ClearProcessState(ps)
if err != nil {
t.Fatalf("(*RDB).ClearProcessState failed: %v", err)
}
// Check all keys are cleared
if r.client.Exists(pkey).Val() != 0 {
t.Errorf("Redis key %q exists", pkey)
}
if r.client.Exists(wkey).Val() != 0 {
t.Errorf("Redis key %q exists", wkey)
}
gotProcessKeys := r.client.ZRange(base.AllProcesses, 0, -1).Val()
wantProcessKeys := []string{otherPKey}
if diff := cmp.Diff(wantProcessKeys, gotProcessKeys); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcessKeys, wantProcessKeys)
}
gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val()
wantWorkerKeys := []string{otherWKey}
if diff := cmp.Diff(wantWorkerKeys, gotWorkerKeys); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllWorkers, gotWorkerKeys, wantWorkerKeys)
}
}
func TestCancelationPubSub(t *testing.T) {
r := setup(t)
pubsub, err := r.CancelationPubSub()
if err != nil {
t.Fatalf("(*RDB).CancelationPubSub() returned an error: %v", err)
}
cancelCh := pubsub.Channel()
var (
mu sync.Mutex
received []string
)
go func() {
for msg := range cancelCh {
mu.Lock()
received = append(received, msg.Payload)
mu.Unlock()
}
}()
publish := []string{"one", "two", "three"}
for _, msg := range publish {
r.PublishCancelation(msg)
}
	// allow time for the published messages to reach the subscriber.
time.Sleep(time.Second)
pubsub.Close()
mu.Lock()
if diff := cmp.Diff(publish, received, h.SortStringSliceOpt); diff != "" {
t.Errorf("subscriber received %v, want %v; (-want,+got)\n%s", received, publish, diff)
}
mu.Unlock()
}
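// A minimal sketch of how the cancelation pub/sub pair exercised above might
// be consumed, assuming an *RDB value; handleCancel is a hypothetical
// callback, and closing the pubsub (e.g. on shutdown) ends the loop.
func exampleCancelationSubscriber(r *RDB, handleCancel func(id string)) error {
	pubsub, err := r.CancelationPubSub()
	if err != nil {
		return err
	}
	// Each published payload carries the ID of the task to cancel.
	// The channel is closed when pubsub.Close is called.
	for msg := range pubsub.Channel() {
		handleCancel(msg.Payload)
	}
	return nil
}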


@@ -14,333 +14,626 @@ import (
"github.com/hibiken/asynq/internal/base"
)
func TestPayloadGet(t *testing.T) {
names := []string{"luke", "anakin", "rey"}
primes := []int{2, 3, 5, 7, 11, 13, 17}
user := map[string]interface{}{"name": "Ken", "score": 3.14}
location := map[string]string{"address": "123 Main St.", "state": "NY", "zipcode": "10002"}
favs := map[string][]string{
"movies": []string{"forrest gump", "star wars"},
"tv_shows": []string{"game of thrones", "HIMYM", "breaking bad"},
}
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
now := time.Now()
duration := 15 * time.Minute
type payloadTest struct {
data map[string]interface{}
key string
nonkey string
}
data := map[string]interface{}{
"greeting": "Hello",
"user_id": 9876,
"pi": 3.1415,
"enabled": false,
"names": names,
"primes": primes,
"user": user,
"location": location,
"favs": favs,
"counter": counter,
"features": features,
"timestamp": now,
"duration": duration,
func TestPayloadString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"name": "gopher"},
key: "name",
nonkey: "unknown",
},
}
payload := Payload{data}
gotStr, err := payload.GetString("greeting")
if gotStr != "Hello" || err != nil {
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
"greeting", gotStr, err, "Hello")
tc.key, got, err, tc.data[tc.key])
}
gotInt, err := payload.GetInt("user_id")
if gotInt != 9876 || err != nil {
t.Errorf("Payload.GetInt(%q) = %v, %v, want, %v, nil",
"user_id", gotInt, err, 9876)
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotFloat, err := payload.GetFloat64("pi")
if gotFloat != 3.1415 || err != nil {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want, %v, nil",
"pi", gotFloat, err, 3.141592)
// access non-existent key.
got, err = payload.GetString(tc.nonkey)
if err == nil || got != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
				tc.nonkey, got, err)
}
gotBool, err := payload.GetBool("enabled")
if gotBool != false || err != nil {
t.Errorf("Payload.GetBool(%q) = %v, %v, want, %v, nil",
"enabled", gotBool, err, false)
}
gotStrSlice, err := payload.GetStringSlice("names")
if diff := cmp.Diff(gotStrSlice, names); diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"names", gotStrSlice, err, names, diff)
}
gotIntSlice, err := payload.GetIntSlice("primes")
if diff := cmp.Diff(gotIntSlice, primes); diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"primes", gotIntSlice, err, primes, diff)
}
gotStrMap, err := payload.GetStringMap("user")
if diff := cmp.Diff(gotStrMap, user); diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"user", gotStrMap, err, user, diff)
}
gotStrMapStr, err := payload.GetStringMapString("location")
if diff := cmp.Diff(gotStrMapStr, location); diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"location", gotStrMapStr, err, location, diff)
}
gotStrMapStrSlice, err := payload.GetStringMapStringSlice("favs")
if diff := cmp.Diff(gotStrMapStrSlice, favs); diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"favs", gotStrMapStrSlice, err, favs, diff)
}
gotStrMapInt, err := payload.GetStringMapInt("counter")
if diff := cmp.Diff(gotStrMapInt, counter); diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"counter", gotStrMapInt, err, counter, diff)
}
gotStrMapBool, err := payload.GetStringMapBool("features")
if diff := cmp.Diff(gotStrMapBool, features); diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"features", gotStrMapBool, err, features, diff)
}
gotTime, err := payload.GetTime("timestamp")
if !gotTime.Equal(now) {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
"timestamp", gotTime, err, now)
}
gotDuration, err := payload.GetDuration("duration")
if gotDuration != duration {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
"duration", gotDuration, err, duration)
}
}
func TestPayloadGetWithMarshaling(t *testing.T) {
names := []string{"luke", "anakin", "rey"}
primes := []int{2, 3, 5, 7, 11, 13, 17}
user := map[string]interface{}{"name": "Ken", "score": 3.14}
location := map[string]string{"address": "123 Main St.", "state": "NY", "zipcode": "10002"}
favs := map[string][]string{
"movies": []string{"forrest gump", "star wars"},
"tv_shows": []string{"game of throwns", "HIMYM", "breaking bad"},
func TestPayloadInt(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user_id": 42},
key: "user_id",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetInt(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadFloat64(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"pi": 3.14},
key: "pi",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetFloat64(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadBool(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"enabled": true},
key: "enabled",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetBool(tc.nonkey)
if err == nil || got != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
key: "names",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadIntSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"nums": []int{9, 8, 7}},
key: "nums",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetIntSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetIntSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetIntSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringMap(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
key: "user",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMap(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMap(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringMapString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
key: "address",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapString(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapString(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapString(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringMapStringSlice(t *testing.T) {
favs := map[string][]string{
"movies": {"forrest gump", "star wars"},
"tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
}
tests := []payloadTest{
{
data: map[string]interface{}{"favorites": favs},
key: "favorites",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringMapInt(t *testing.T) {
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
tests := []payloadTest{
{
data: map[string]interface{}{"counts": counter},
key: "counts",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapInt(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapInt(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapInt(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
}
}
func TestPayloadStringMapBool(t *testing.T) {
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
now := time.Now()
duration := 15 * time.Minute
tests := []payloadTest{
{
data: map[string]interface{}{"features": features},
key: "features",
nonkey: "unknown",
},
}
in := Payload{map[string]interface{}{
"subject": "Hello",
"recipient_id": 9876,
"pi": 3.14,
"enabled": true,
"names": names,
"primes": primes,
"user": user,
"location": location,
"favs": favs,
"counter": counter,
"features": features,
"timestamp": now,
"duration": duration,
}}
	// encode and then decode the task message
inMsg := h.NewTaskMessage("testing", in.data)
data, err := json.Marshal(inMsg)
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapBool(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var outMsg base.TaskMessage
err = json.Unmarshal(data, &outMsg)
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
out := Payload{outMsg.Payload}
gotStr, err := out.GetString("subject")
if gotStr != "Hello" || err != nil {
t.Errorf("Payload.GetString(%q) = %v, %v; want %q, nil",
"subject", gotStr, err, "Hello")
payload = Payload{out.Payload}
got, err = payload.GetStringMapBool(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotInt, err := out.GetInt("recipient_id")
if gotInt != 9876 || err != nil {
t.Errorf("Payload.GetInt(%q) = %v, %v; want %v, nil",
"recipient_id", gotInt, err, 9876)
// access non-existent key.
got, err = payload.GetStringMapBool(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
				tc.nonkey, got, err)
}
gotFloat, err := out.GetFloat64("pi")
if gotFloat != 3.14 || err != nil {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want %v, nil",
"pi", gotFloat, err, 3.14)
}
gotBool, err := out.GetBool("enabled")
if gotBool != true || err != nil {
t.Errorf("Payload.GetBool(%q) = %v, %v; want %v, nil",
"enabled", gotBool, err, true)
}
gotStrSlice, err := out.GetStringSlice("names")
if diff := cmp.Diff(gotStrSlice, names); diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"names", gotStrSlice, err, names, diff)
}
gotIntSlice, err := out.GetIntSlice("primes")
if diff := cmp.Diff(gotIntSlice, primes); diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"primes", gotIntSlice, err, primes, diff)
}
gotStrMap, err := out.GetStringMap("user")
if diff := cmp.Diff(gotStrMap, user); diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"user", gotStrMap, err, user, diff)
}
gotStrMapStr, err := out.GetStringMapString("location")
if diff := cmp.Diff(gotStrMapStr, location); diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"location", gotStrMapStr, err, location, diff)
}
gotStrMapStrSlice, err := out.GetStringMapStringSlice("favs")
if diff := cmp.Diff(gotStrMapStrSlice, favs); diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"favs", gotStrMapStrSlice, err, favs, diff)
}
gotStrMapInt, err := out.GetStringMapInt("counter")
if diff := cmp.Diff(gotStrMapInt, counter); diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"counter", gotStrMapInt, err, counter, diff)
}
gotStrMapBool, err := out.GetStringMapBool("features")
if diff := cmp.Diff(gotStrMapBool, features); diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil;\n(-want,+got)\n%s",
"features", gotStrMapBool, err, features, diff)
}
gotTime, err := out.GetTime("timestamp")
if !gotTime.Equal(now) {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
"timestamp", gotTime, err, now)
}
gotDuration, err := out.GetDuration("duration")
if gotDuration != duration {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
"duration", gotDuration, err, duration)
}
}
func TestPayloadKeyNotFound(t *testing.T) {
payload := Payload{nil}
key := "something"
gotStr, err := payload.GetString(key)
if err == nil || gotStr != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
key, gotStr, err)
func TestPayloadTime(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"current": time.Now()},
key: "current",
nonkey: "unknown",
},
}
gotInt, err := payload.GetInt(key)
if err == nil || gotInt != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
key, gotInt, err)
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetTime(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotFloat, err := payload.GetFloat64(key)
if err == nil || gotFloat != 0 {
t.Errorf("Payload.GetFloat64(%q = %v, %v; want 0, error",
key, gotFloat, err)
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetTime(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotBool, err := payload.GetBool(key)
if err == nil || gotBool != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
key, gotBool, err)
// access non-existent key.
got, err = payload.GetTime(tc.nonkey)
if err == nil || !got.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
				tc.nonkey, got, err, time.Time{})
}
}
}
func TestPayloadDuration(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"duration": 15 * time.Minute},
key: "duration",
nonkey: "unknown",
},
}
gotStrSlice, err := payload.GetStringSlice(key)
if err == nil || gotStrSlice != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
key, gotStrSlice, err)
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetDuration(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotIntSlice, err := payload.GetIntSlice(key)
if err == nil || gotIntSlice != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
key, gotIntSlice, err)
		// encode and then decode the task message.
in := h.NewTaskMessage("testing", tc.data)
b, err := json.Marshal(in)
if err != nil {
t.Fatal(err)
}
var out base.TaskMessage
err = json.Unmarshal(b, &out)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetDuration(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
gotStrMap, err := payload.GetStringMap(key)
if err == nil || gotStrMap != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
key, gotStrMap, err)
// access non-existent key.
got, err = payload.GetDuration(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
				tc.nonkey, got, err, time.Duration(0))
}
gotStrMapStr, err := payload.GetStringMapString(key)
if err == nil || gotStrMapStr != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
key, gotStrMapStr, err)
}
gotStrMapStrSlice, err := payload.GetStringMapStringSlice(key)
if err == nil || gotStrMapStrSlice != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
key, gotStrMapStrSlice, err)
}
gotStrMapInt, err := payload.GetStringMapInt(key)
if err == nil || gotStrMapInt != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want nil, error",
key, gotStrMapInt, err)
}
gotStrMapBool, err := payload.GetStringMapBool(key)
if err == nil || gotStrMapBool != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want nil, error",
key, gotStrMapBool, err)
}
gotTime, err := payload.GetTime(key)
if err == nil || !gotTime.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, error",
key, gotTime, err, time.Time{})
}
gotDuration, err := payload.GetDuration(key)
if err == nil || gotDuration != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want 0, error",
key, gotDuration, err)
}
}


@@ -5,8 +5,8 @@
package asynq
import (
"context"
"fmt"
"log"
"math/rand"
"sort"
"sync"
@@ -14,23 +14,32 @@ import (
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"golang.org/x/time/rate"
)
type processor struct {
logger Logger
rdb *rdb.RDB
ps *base.ProcessState
handler Handler
queueConfig map[string]uint
queueConfig map[string]int
// orderedQueues is set only in strict-priority mode.
orderedQueues []string
retryDelayFunc retryDelayFunc
errHandler ErrorHandler
// channel via which to send sync requests to syncer.
syncRequestCh chan<- *syncRequest
// rate limiter to prevent spamming logs with a bunch of errors.
errLogLimiter *rate.Limiter
// sema is a counting semaphore to ensure the number of active workers
// does not exceed the limit.
sema chan struct{}
@@ -45,33 +54,38 @@ type processor struct {
// quit channel communicates to the in-flight worker goroutines to stop.
quit chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks.
cancelations *base.Cancelations
}
type retryDelayFunc func(n int, err error, task *Task) time.Duration
// newProcessor constructs a new processor.
//
// r is an instance of RDB used by the processor.
// n specifies the max number of concurrent worker goroutines.
// qcfg is a mapping of queue names to associated priority level.
// strict specifies whether queue priority should be treated strictly.
// fn is a function to compute retry delay.
func newProcessor(r *rdb.RDB, n int, qcfg map[string]uint, strict bool, fn retryDelayFunc, syncRequestCh chan<- *syncRequest) *processor {
func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc,
syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor {
info := ps.Get()
qcfg := normalizeQueueCfg(info.Queues)
orderedQueues := []string(nil)
if strict {
if info.StrictPriority {
orderedQueues = sortByPriority(qcfg)
}
return &processor{
logger: l,
rdb: r,
ps: ps,
queueConfig: qcfg,
orderedQueues: orderedQueues,
retryDelayFunc: fn,
syncRequestCh: syncRequestCh,
sema: make(chan struct{}, n),
syncRequestCh: syncCh,
cancelations: c,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, info.Concurrency),
done: make(chan struct{}),
abort: make(chan struct{}),
quit: make(chan struct{}),
handler: HandlerFunc(func(t *Task) error { return fmt.Errorf("handler not set") }),
errHandler: errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
}
}
@@ -79,7 +93,7 @@ func newProcessor(r *rdb.RDB, n int, qcfg map[string]uint, strict bool, fn retry
// It's safe to call this method multiple times.
func (p *processor) stop() {
p.once.Do(func() {
log.Println("[INFO] Processor shutting down...")
p.logger.Info("Processor shutting down...")
// Unblock if processor is waiting for sema token.
close(p.abort)
// Signal the processor goroutine to stop processing tasks
@@ -95,24 +109,32 @@ func (p *processor) terminate() {
// IDEA: Allow user to customize this timeout value.
const timeout = 8 * time.Second
time.AfterFunc(timeout, func() { close(p.quit) })
log.Println("[INFO] Waiting for all workers to finish...")
p.logger.Info("Waiting for all workers to finish...")
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
// block until all workers have released the token
for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{}
}
log.Println("[INFO] All workers have finished.")
p.logger.Info("All workers have finished")
p.restore() // move any unfinished tasks back to the queue.
}
func (p *processor) start() {
func (p *processor) start(wg *sync.WaitGroup) {
// NOTE: The call to "restore" needs to complete before starting
// the processor goroutine.
p.restore()
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-p.done:
log.Println("[INFO] Processor done.")
p.logger.Info("Processor done")
return
default:
p.exec()
@@ -137,7 +159,9 @@ func (p *processor) exec() {
return
}
if err != nil {
log.Printf("[ERROR] unexpected error while pulling a task out of queue: %v\n", err)
if p.errLogLimiter.Allow() {
p.logger.Error("Dequeue error: %v", err)
}
return
}
@@ -147,19 +171,26 @@ func (p *processor) exec() {
p.requeue(msg)
return
case p.sema <- struct{}{}: // acquire token
p.ps.AddWorkerStats(msg, time.Now())
go func() {
defer func() { <-p.sema /* release token */ }()
defer func() {
p.ps.DeleteWorkerStats(msg)
<-p.sema /* release token */
}()
resCh := make(chan error, 1)
task := NewTask(msg.Type, msg.Payload)
ctx, cancel := createContext(msg)
p.cancelations.Add(msg.ID.String(), cancel)
go func() {
resCh <- perform(p.handler, task)
resCh <- perform(ctx, task, p.handler)
p.cancelations.Delete(msg.ID.String())
}()
select {
case <-p.quit:
// time is up, quit this worker goroutine.
log.Printf("[WARN] Terminating in-progress task %+v\n", msg)
p.logger.Warn("Quitting worker. task id=%s", msg.ID)
return
case resErr := <-resCh:
// Note: One of three things should happen.
@@ -167,6 +198,9 @@ func (p *processor) exec() {
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead
if resErr != nil {
if p.errHandler != nil {
p.errHandler.HandleError(task, resErr, msg.Retried, msg.Retry)
}
if msg.Retried >= msg.Retry {
p.kill(msg, resErr)
} else {
@@ -183,27 +217,27 @@ func (p *processor) exec() {
// restore moves all unfinished tasks from the in-progress list
// back to the queue.
func (p *processor) restore() {
n, err := p.rdb.RestoreUnfinished()
n, err := p.rdb.RequeueAll()
if err != nil {
log.Printf("[ERROR] Could not restore unfinished tasks: %v\n", err)
p.logger.Error("Could not restore unfinished tasks: %v", err)
}
if n > 0 {
log.Printf("[INFO] Restored %d unfinished tasks back to queue.\n", n)
p.logger.Info("Restored %d unfinished tasks back to queue", n)
}
}
func (p *processor) requeue(msg *base.TaskMessage) {
err := p.rdb.Requeue(msg)
if err != nil {
log.Printf("[ERROR] Could not move task from InProgress back to queue: %v\n", err)
p.logger.Error("Could not push task id=%s back to queue: %v", msg.ID, err)
}
}
func (p *processor) markAsDone(msg *base.TaskMessage) {
err := p.rdb.Done(msg)
if err != nil {
errMsg := fmt.Sprintf("could not remove task %+v from %q", msg, base.InProgressQueue)
log.Printf("[WARN] %s; will retry\n", errMsg)
errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue)
p.logger.Warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Done(msg)
@@ -218,8 +252,8 @@ func (p *processor) retry(msg *base.TaskMessage, e error) {
retryAt := time.Now().Add(d)
err := p.rdb.Retry(msg, retryAt, e.Error())
if err != nil {
errMsg := fmt.Sprintf("could not move task %+v from %q to %q", msg, base.InProgressQueue, base.RetryQueue)
log.Printf("[WARN] %s; will retry\n", errMsg)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
p.logger.Warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Retry(msg, retryAt, e.Error())
@@ -230,11 +264,11 @@ func (p *processor) retry(msg *base.TaskMessage, e error) {
}
func (p *processor) kill(msg *base.TaskMessage, e error) {
log.Printf("[WARN] Retry exhausted for task(Type: %q, ID: %v)\n", msg.Type, msg.ID)
p.logger.Warn("Retry exhausted for task id=%s", msg.ID)
err := p.rdb.Kill(msg, e.Error())
if err != nil {
errMsg := fmt.Sprintf("could not move task %+v from %q to %q", msg, base.InProgressQueue, base.DeadQueue)
log.Printf("[WARN] %s; will retry\n", errMsg)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
p.logger.Warn("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.rdb.Kill(msg, e.Error())
@@ -274,13 +308,13 @@ func (p *processor) queues() []string {
// perform calls the handler with the given task.
// If the call returns without panicking, it simply returns the value;
// otherwise, it recovers from the panic and returns it as an error.
func perform(h Handler, task *Task) (err error) {
func perform(ctx context.Context, task *Task, h Handler) (err error) {
defer func() {
if x := recover(); x != nil {
err = fmt.Errorf("panic: %v", x)
}
}()
return h.ProcessTask(task)
return h.ProcessTask(ctx, task)
}
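// A small illustration of the recovery behavior above: a handler that panics
// is converted into an ordinary error by perform (a sketch, assuming the
// signatures shown in this diff).
//
//	h := HandlerFunc(func(ctx context.Context, t *Task) error { panic("boom") })
//	err := perform(context.Background(), NewTask("x", nil), h)
//	// err != nil && err.Error() == "panic: boom"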
// uniq dedupes elements and returns a slice of unique names of length at most l.
@@ -302,7 +336,7 @@ func uniq(names []string, l int) []string {
// sortByPriority returns a list of queue names sorted by
// their priority level in descending order.
func sortByPriority(qcfg map[string]uint) []string {
func sortByPriority(qcfg map[string]int) []string {
var queues []*queue
for qname, n := range qcfg {
queues = append(queues, &queue{qname, n})
@@ -317,7 +351,7 @@ func sortByPriority(qcfg map[string]uint) []string {
type queue struct {
name string
priority uint
priority int
}
type byPriority []*queue
@@ -325,3 +359,52 @@ type byPriority []*queue
func (x byPriority) Len() int { return len(x) }
func (x byPriority) Less(i, j int) bool { return x[i].priority < x[j].priority }
func (x byPriority) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
// normalizeQueueCfg divides priority numbers by their
// greatest common divisor.
func normalizeQueueCfg(queueCfg map[string]int) map[string]int {
var xs []int
for _, x := range queueCfg {
xs = append(xs, x)
}
d := gcd(xs...)
res := make(map[string]int)
for q, x := range queueCfg {
res[q] = x / d
}
return res
}
func gcd(xs ...int) int {
fn := func(x, y int) int {
for y > 0 {
x, y = y, x%y
}
return x
}
res := xs[0]
for i := 0; i < len(xs); i++ {
res = fn(xs[i], res)
if res == 1 {
return 1
}
}
return res
}
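// A brief worked example of the normalization above: gcd(6, 4, 2) == 2, so
// each priority is divided by 2 and the relative weights are preserved.
//
//	cfg := map[string]int{"critical": 6, "default": 4, "low": 2}
//	normalized := normalizeQueueCfg(cfg)
//	// normalized == map[string]int{"critical": 3, "default": 2, "low": 1}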
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (ctx context.Context, cancel context.CancelFunc) {
ctx = context.Background()
timeout, err := time.ParseDuration(msg.Timeout)
if err == nil && timeout != 0 {
ctx, cancel = context.WithTimeout(ctx, timeout)
}
deadline, err := time.Parse(time.RFC3339, msg.Deadline)
if err == nil && !deadline.IsZero() {
ctx, cancel = context.WithDeadline(ctx, deadline)
}
if cancel == nil {
ctx, cancel = context.WithCancel(ctx)
}
return ctx, cancel
}
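// A simplified sketch of how createContext is used by the worker goroutine in
// exec above: derive a context from the task message, register its cancel
// function with the Cancelations set, and clean both up when the task ends.
//
//	ctx, cancel := createContext(msg)
//	p.cancelations.Add(msg.ID.String(), cancel)
//	defer func() {
//		cancel()
//		p.cancelations.Delete(msg.ID.String())
//	}()
//	err := perform(ctx, task, p.handler)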


@@ -5,6 +5,7 @@
package asynq
import (
"context"
"fmt"
"sort"
"sync"
@@ -16,6 +17,7 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
)
func TestProcessorSuccess(t *testing.T) {
@@ -59,16 +61,19 @@ func TestProcessorSuccess(t *testing.T) {
// instantiate a new processor
var mu sync.Mutex
var processed []*Task
handler := func(task *Task) error {
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
p := newProcessor(rdbClient, 10, defaultQueueConfig, false, defaultDelayFunc, nil)
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
p.handler = HandlerFunc(handler)
p.start()
var wg sync.WaitGroup
p.start(&wg)
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -119,14 +124,19 @@ func TestProcessorRetry(t *testing.T) {
enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run
delay time.Duration // retry delay duration
handler Handler // task handler
wait time.Duration // wait duration between starting and stopping processor for this test case
wantRetry []h.ZSetEntry // tasks in retry queue at the end
wantDead []*base.TaskMessage // tasks in dead queue at the end
wantErrCount int // number of times error handler should be called
}{
{
enqueued: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{m3, m4},
delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg)
}),
wait: time.Second,
wantRetry: []h.ZSetEntry{
{Msg: &r2, Score: float64(now.Add(time.Minute).Unix())},
@@ -134,6 +144,7 @@ func TestProcessorRetry(t *testing.T) {
{Msg: &r4, Score: float64(now.Add(time.Minute).Unix())},
},
wantDead: []*base.TaskMessage{&r1},
wantErrCount: 4,
},
}
@@ -145,13 +156,22 @@ func TestProcessorRetry(t *testing.T) {
delayFunc := func(n int, e error, t *Task) time.Duration {
return tc.delay
}
handler := func(task *Task) error {
return fmt.Errorf(errMsg)
var (
mu sync.Mutex // guards n
n int // number of times error handler is called
)
errHandler := func(t *Task, err error, retried, maxRetry int) {
mu.Lock()
defer mu.Unlock()
n++
}
p := newProcessor(rdbClient, 10, defaultQueueConfig, false, delayFunc, nil)
p.handler = HandlerFunc(handler)
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations()
p := newProcessor(testLogger, rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler))
p.handler = tc.handler
p.start()
var wg sync.WaitGroup
p.start(&wg)
for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg)
if err != nil {
@@ -176,6 +196,10 @@ func TestProcessorRetry(t *testing.T) {
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
if n != tc.wantErrCount {
t.Errorf("error handler was called %d times, want %d", n, tc.wantErrCount)
}
}
}
@@ -187,11 +211,11 @@ func TestProcessorQueues(t *testing.T) {
})
tests := []struct {
queueCfg map[string]uint
queueCfg map[string]int
want []string
}{
{
queueCfg: map[string]uint{
queueCfg: map[string]int{
"high": 6,
"default": 3,
"low": 1,
@@ -199,7 +223,7 @@ func TestProcessorQueues(t *testing.T) {
want: []string{"high", "default", "low"},
},
{
queueCfg: map[string]uint{
queueCfg: map[string]int{
"default": 1,
},
want: []string{"default"},
@@ -207,7 +231,9 @@ func TestProcessorQueues(t *testing.T) {
}
for _, tc := range tests {
p := newProcessor(nil, 10, tc.queueCfg, false, defaultDelayFunc, nil)
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false)
p := newProcessor(testLogger, nil, ps, defaultDelayFunc, nil, cancelations, nil)
got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -261,22 +287,25 @@ func TestProcessorWithStrictPriority(t *testing.T) {
// instantiate a new processor
var mu sync.Mutex
var processed []*Task
handler := func(task *Task) error {
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
queueCfg := map[string]uint{
queueCfg := map[string]int{
"critical": 3,
base.DefaultQueueName: 2,
"low": 1,
}
// Note: Set concurrency to 1 to make sure tasks are processed one at a time.
p := newProcessor(rdbClient, 1 /*concurrency */, queueCfg, true /* strict */, defaultDelayFunc, nil)
cancelations := base.NewCancelations()
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
p.handler = HandlerFunc(handler)
p.start()
var wg sync.WaitGroup
p.start(&wg)
time.Sleep(tc.wait)
p.terminate()
@@ -299,7 +328,7 @@ func TestPerform(t *testing.T) {
}{
{
desc: "handler returns nil",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
return nil
},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
@@ -307,7 +336,7 @@ func TestPerform(t *testing.T) {
},
{
desc: "handler returns error",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
return fmt.Errorf("something went wrong")
},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
@@ -315,7 +344,7 @@ func TestPerform(t *testing.T) {
},
{
desc: "handler panics",
handler: func(t *Task) error {
handler: func(ctx context.Context, t *Task) error {
panic("something went terribly wrong")
},
task: NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
@@ -324,7 +353,7 @@ func TestPerform(t *testing.T) {
}
for _, tc := range tests {
got := perform(tc.handler, tc.task)
got := perform(context.Background(), tc.task, tc.handler)
if !tc.wantErr && got != nil {
t.Errorf("%s: perform() = %v, want nil", tc.desc, got)
continue
@@ -335,3 +364,85 @@ func TestPerform(t *testing.T) {
}
}
}
func TestCreateContextWithTimeRestrictions(t *testing.T) {
var (
noTimeout = time.Duration(0)
noDeadline = time.Time{}
)
tests := []struct {
desc string
timeout time.Duration
deadline time.Time
wantDeadline time.Time
}{
{"only with timeout", 10 * time.Second, noDeadline, time.Now().Add(10 * time.Second)},
{"only with deadline", noTimeout, time.Now().Add(time.Hour), time.Now().Add(time.Hour)},
{"with timeout and deadline (timeout < deadline)", 10 * time.Second, time.Now().Add(time.Hour), time.Now().Add(10 * time.Second)},
{"with timeout and deadline (timeout > deadline)", 10 * time.Minute, time.Now().Add(30 * time.Second), time.Now().Add(30 * time.Second)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: xid.New(),
Timeout: tc.timeout.String(),
Deadline: tc.deadline.Format(time.RFC3339),
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("%s: <-ctx.Done() == %v, want nothing (it should block)", tc.desc, x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("%s: ctx.Deadline() returned false, want deadline to be set", tc.desc)
}
if !cmp.Equal(tc.wantDeadline, got, cmpopts.EquateApproxTime(time.Second)) {
t.Errorf("%s: ctx.Deadline() returned %v, want %v", tc.desc, got, tc.wantDeadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithoutTimeRestrictions(t *testing.T) {
msg := &base.TaskMessage{
Type: "something",
ID: xid.New(),
Timeout: time.Duration(0).String(), // zero value to indicate no timeout
Deadline: time.Time{}.Format(time.RFC3339), // zero value to indicate no deadline
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
_, ok := ctx.Deadline()
if ok {
t.Error("ctx.Deadline() returned true, want deadline to not be set")
}
cancel()
select {
case <-ctx.Done():
default:
t.Error("ctx.Done() blocked, want it to be non-blocking")
}
}


@@ -5,13 +5,14 @@
package asynq
import (
"log"
"sync"
"time"
"github.com/hibiken/asynq/internal/rdb"
)
type scheduler struct {
logger Logger
rdb *rdb.RDB
// channel to communicate back to the long running "scheduler" goroutine.
@@ -24,12 +25,13 @@ type scheduler struct {
qnames []string
}
func newScheduler(r *rdb.RDB, avgInterval time.Duration, qcfg map[string]uint) *scheduler {
func newScheduler(l Logger, r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler {
var qnames []string
for q := range qcfg {
qnames = append(qnames, q)
}
return &scheduler{
logger: l,
rdb: r,
done: make(chan struct{}),
avgInterval: avgInterval,
@@ -38,18 +40,20 @@ func newScheduler(r *rdb.RDB, avgInterval time.Duration, qcfg map[string]uint) *
}
func (s *scheduler) terminate() {
log.Println("[INFO] Scheduler shutting down...")
s.logger.Info("Scheduler shutting down...")
// Signal the scheduler goroutine to stop polling.
s.done <- struct{}{}
}
// start starts the "scheduler" goroutine.
func (s *scheduler) start() {
func (s *scheduler) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-s.done:
log.Println("[INFO] Scheduler done.")
s.logger.Info("Scheduler done")
return
case <-time.After(s.avgInterval):
s.exec()
@@ -60,6 +64,6 @@ func (s *scheduler) start() {
func (s *scheduler) exec() {
if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil {
log.Printf("[ERROR] could not forward scheduled tasks: %v\n", err)
s.logger.Error("Could not enqueue scheduled tasks: %v", err)
}
}


@@ -5,6 +5,7 @@
package asynq
import (
"sync"
"testing"
"time"
@@ -18,7 +19,7 @@ func TestScheduler(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(rdbClient, pollInterval, defaultQueueConfig)
s := newScheduler(testLogger, rdbClient, pollInterval, defaultQueueConfig)
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)
@@ -69,7 +70,8 @@ func TestScheduler(t *testing.T) {
h.SeedRetryQueue(t, r, tc.initRetry) // initialize retry queue
h.SeedEnqueuedQueue(t, r, tc.initQueue) // initialize default queue
s.start()
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.terminate()

servemux.go (new file, 158 lines)

@@ -0,0 +1,158 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"sort"
"strings"
"sync"
)
// ServeMux is a multiplexer for asynchronous tasks.
// It matches the type of each task against a list of registered patterns
// and calls the handler for the pattern that most closely matches the
// task's type name.
//
// Longer patterns take precedence over shorter ones, so that if there are
// handlers registered for both "images" and "images:thumbnails",
// the latter handler will be called for tasks with a type name beginning with
// "images:thumbnails" and the former will receive tasks with type name beginning
// with "images".
type ServeMux struct {
mu sync.RWMutex
m map[string]muxEntry
es []muxEntry // slice of entries sorted from longest to shortest.
mws []MiddlewareFunc
}
type muxEntry struct {
h Handler
pattern string
}
// MiddlewareFunc is a function which receives an asynq.Handler and returns another asynq.Handler.
// Typically, the returned handler is a closure which does something with the context and task passed
// to it, and then calls the handler passed as parameter to the MiddlewareFunc.
type MiddlewareFunc func(Handler) Handler
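// A minimal example middleware satisfying the contract above; the log output
// is illustrative only.
//
//	func logging(next Handler) Handler {
//		return HandlerFunc(func(ctx context.Context, t *Task) error {
//			log.Printf("processing task %q", t.Type)
//			return next.ProcessTask(ctx, t)
//		})
//	}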
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux {
return new(ServeMux)
}
// ProcessTask dispatches the task to the handler whose
// pattern most closely matches the task type.
func (mux *ServeMux) ProcessTask(ctx context.Context, task *Task) error {
h, _ := mux.Handler(task)
return h.ProcessTask(ctx, task)
}
// Handler returns the handler to use for the given task.
// It always returns a non-nil handler.
//
// Handler also returns the registered pattern that matches the task.
//
// If there is no registered handler that applies to the task,
// handler returns a 'not found' handler which returns an error.
func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
mux.mu.RLock()
defer mux.mu.RUnlock()
h, pattern = mux.match(t.Type)
if h == nil {
h, pattern = NotFoundHandler(), ""
}
for i := len(mux.mws) - 1; i >= 0; i-- {
h = mux.mws[i](h)
}
return h, pattern
}
// Find a handler on a handler map given a typename string.
// Most-specific (longest) pattern wins.
func (mux *ServeMux) match(typename string) (h Handler, pattern string) {
// Check for exact match first.
v, ok := mux.m[typename]
if ok {
return v.h, v.pattern
}
// Check for longest valid match.
// mux.es contains all patterns from longest to shortest.
for _, e := range mux.es {
if strings.HasPrefix(typename, e.pattern) {
return e.h, e.pattern
}
}
return nil, ""
}
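// A worked example of the precedence above, using the patterns from the
// package tests; defaultEmail and signupEmail are hypothetical handlers.
//
//	mux.Handle("email:", defaultEmail)      // shorter prefix
//	mux.Handle("email:signup", signupEmail) // longer prefix wins
//	// type "email:signup:welcome" -> signupEmail
//	// type "email:daily"          -> defaultEmail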
// Handle registers the handler for the given pattern.
// If a handler already exists for pattern, Handle panics.
func (mux *ServeMux) Handle(pattern string, handler Handler) {
mux.mu.Lock()
defer mux.mu.Unlock()
if pattern == "" {
panic("asynq: invalid pattern")
}
if handler == nil {
panic("asynq: nil handler")
}
if _, exist := mux.m[pattern]; exist {
panic("asynq: multiple registrations for " + pattern)
}
if mux.m == nil {
mux.m = make(map[string]muxEntry)
}
e := muxEntry{h: handler, pattern: pattern}
mux.m[pattern] = e
mux.es = appendSorted(mux.es, e)
}
func appendSorted(es []muxEntry, e muxEntry) []muxEntry {
n := len(es)
i := sort.Search(n, func(i int) bool {
return len(es[i].pattern) < len(e.pattern)
})
if i == n {
return append(es, e)
}
// we now know that i points at where we want to insert.
es = append(es, muxEntry{}) // try to grow the slice in place, any entry works.
copy(es[i+1:], es[i:]) // shift shorter entries down.
es[i] = e
return es
}
// HandleFunc registers the handler function for the given pattern.
func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *Task) error) {
if handler == nil {
panic("asynq: nil handler")
}
mux.Handle(pattern, HandlerFunc(handler))
}
// Use appends a MiddlewareFunc to the chain.
// Middlewares are executed in the order that they are applied to the ServeMux.
func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
mux.mu.Lock()
defer mux.mu.Unlock()
for _, fn := range mws {
mux.mws = append(mux.mws, fn)
}
}
// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
return fmt.Errorf("handler not found for task %q", task.Type)
}
// NotFoundHandler returns a simple task handler that returns a ``not found`` error.
func NotFoundHandler() Handler { return HandlerFunc(NotFound) }
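// A minimal end-to-end sketch of wiring up a ServeMux; sendWelcomeEmail and
// logging are hypothetical names used only for illustration.
//
//	mux := NewServeMux()
//	mux.Use(logging)
//	mux.HandleFunc("email:signup", func(ctx context.Context, t *Task) error {
//		return sendWelcomeEmail(ctx, t)
//	})
//	// ServeMux itself implements Handler, so it can be passed to the
//	// background processor as the top-level handler.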

servemux_test.go

@@ -0,0 +1,170 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"github.com/google/go-cmp/cmp"
)
var called string // identity of the handler that was called.
var invoked []string // list of middlewares in the order they were invoked.
// makeFakeHandler returns a handler that updates the global called variable
// to the given identity.
func makeFakeHandler(identity string) Handler {
return HandlerFunc(func(ctx context.Context, t *Task) error {
called = identity
return nil
})
}
// makeFakeMiddleware returns a middleware function that appends the given identity
// to the global invoked slice.
func makeFakeMiddleware(identity string) MiddlewareFunc {
return func(next Handler) Handler {
return HandlerFunc(func(ctx context.Context, t *Task) error {
invoked = append(invoked, identity)
return next.ProcessTask(ctx, t)
})
}
}
// A list of pattern-handler pairs to register with mux.
var serveMuxRegister = []struct {
pattern string
h Handler
}{
{"email:", makeFakeHandler("default email handler")},
{"email:signup", makeFakeHandler("signup email handler")},
{"csv:export", makeFakeHandler("csv export handler")},
}
var serveMuxTests = []struct {
typename string // task's type name
want string // identifier of the handler that should be called
}{
{"email:signup", "signup email handler"},
{"csv:export", "csv export handler"},
{"email:daily", "default email handler"},
}
func TestServeMux(t *testing.T) {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
for _, tc := range serveMuxTests {
called = "" // reset to zero value
task := NewTask(tc.typename, nil)
if err := mux.ProcessTask(context.Background(), task); err != nil {
t.Fatal(err)
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
}
}
}
func TestServeMuxRegisterNilHandler(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.HandleFunc to panic")
}
}()
mux := NewServeMux()
mux.HandleFunc("email:signup", nil)
}
func TestServeMuxRegisterEmptyPattern(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.Handle to panic")
}
}()
mux := NewServeMux()
mux.Handle("", makeFakeHandler("email"))
}
func TestServeMuxRegisterDuplicatePattern(t *testing.T) {
defer func() {
if err := recover(); err == nil {
t.Error("expected call to mux.Handle to panic")
}
}()
mux := NewServeMux()
mux.Handle("email", makeFakeHandler("email"))
mux.Handle("email", makeFakeHandler("email:default"))
}
var notFoundTests = []struct {
typename string // task's type name
}{
{"image:minimize"},
{"csv:"}, // a registered pattern must be a prefix of the task's type, not the other way around.
}
func TestServeMuxNotFound(t *testing.T) {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
for _, tc := range notFoundTests {
task := NewTask(tc.typename, nil)
err := mux.ProcessTask(context.Background(), task)
if err == nil {
t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type)
}
}
}
var middlewareTests = []struct {
typename string // task's type name
middlewares []string // middlewares to use. They should be called in this order.
want string // identifier of the handler that should be called
}{
{"email:signup", []string{"logging", "expiration"}, "signup email handler"},
{"csv:export", []string{}, "csv export handler"},
{"email:daily", []string{"expiration", "logging"}, "default email handler"},
}
func TestServeMuxMiddlewares(t *testing.T) {
for _, tc := range middlewareTests {
mux := NewServeMux()
for _, e := range serveMuxRegister {
mux.Handle(e.pattern, e.h)
}
var mws []MiddlewareFunc
for _, s := range tc.middlewares {
mws = append(mws, makeFakeMiddleware(s))
}
mux.Use(mws...)
invoked = []string{} // reset to empty slice
called = "" // reset to zero value
task := NewTask(tc.typename, nil)
if err := mux.ProcessTask(context.Background(), task); err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(invoked, tc.middlewares); diff != "" {
t.Errorf("invoked middlewares were %v, want %v", invoked, tc.middlewares)
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
}
}
}

subscriber.go

@@ -0,0 +1,64 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
type subscriber struct {
logger Logger
rdb *rdb.RDB
// channel to communicate back to the long running "subscriber" goroutine.
done chan struct{}
// cancelations hold cancel functions for all in-progress tasks.
cancelations *base.Cancelations
}
func newSubscriber(l Logger, rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber {
return &subscriber{
logger: l,
rdb: rdb,
done: make(chan struct{}),
cancelations: cancelations,
}
}
func (s *subscriber) terminate() {
s.logger.Info("Subscriber shutting down...")
// Signal the subscriber goroutine to stop.
s.done <- struct{}{}
}
func (s *subscriber) start(wg *sync.WaitGroup) {
pubsub, err := s.rdb.CancelationPubSub()
if err != nil {
s.logger.Error("cannot subscribe to cancelation channel: %v", err)
return
}
// Obtain the channel only after the error check; pubsub may be nil on error.
cancelCh := pubsub.Channel()
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-s.done:
pubsub.Close()
s.logger.Info("Subscriber done")
return
case msg := <-cancelCh:
cancel, ok := s.cancelations.Get(msg.Payload)
if ok {
cancel()
}
}
}
}()
}
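For context, a minimal sketch of the cancelations registry this goroutine consumes, assuming the NewCancelations/Add/Get signatures used in subscriber_test.go below; the task ID is made up, and internal packages are importable only from within the asynq module itself.

```go
package main

import (
	"context"

	"github.com/hibiken/asynq/internal/base"
)

func main() {
	// The processing side registers a cancel func under the task's ID.
	ctx, cancel := context.WithCancel(context.Background())
	cancelations := base.NewCancelations()
	cancelations.Add("bnogo8gt6toe23vhef0g", cancel)

	// When a pub/sub message carrying that ID arrives, the subscriber loop
	// above performs the equivalent of this lookup-and-call, canceling ctx.
	if f, ok := cancelations.Get("bnogo8gt6toe23vhef0g"); ok {
		f()
	}
	<-ctx.Done() // ctx is now canceled.
}
```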

subscriber_test.go

@@ -0,0 +1,64 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestSubscriber(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
tests := []struct {
registeredID string // ID for which cancel func is registered
publishID string // ID to be published
wantCalled bool // whether cancel func should be called
}{
{"abc123", "abc123", true},
{"abc456", "abc123", false},
}
for _, tc := range tests {
var mu sync.Mutex
called := false
fakeCancelFunc := func() {
mu.Lock()
defer mu.Unlock()
called = true
}
cancelations := base.NewCancelations()
cancelations.Add(tc.registeredID, fakeCancelFunc)
subscriber := newSubscriber(testLogger, rdbClient, cancelations)
var wg sync.WaitGroup
subscriber.start(&wg)
if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
subscriber.terminate()
t.Fatalf("could not publish cancelation message: %v", err)
}
// allow for redis to publish message
time.Sleep(time.Second)
mu.Lock()
if called != tc.wantCalled {
if tc.wantCalled {
t.Errorf("fakeCancelFunc was not called, want the function to be called")
} else {
t.Errorf("fakeCancelFunc was called, want the function to not be called")
}
}
mu.Unlock()
subscriber.terminate()
}
}

syncer.go

@@ -5,13 +5,15 @@
package asynq
import (
"log"
"sync"
"time"
)
// syncer is responsible for queuing up failed requests to redis and retrying
// those requests to sync state between the background process and redis.
type syncer struct {
logger Logger
requestsCh <-chan *syncRequest
// channel to communicate back to the long running "syncer" goroutine.
@@ -26,8 +28,9 @@ type syncRequest struct {
errMsg string // error message
}
func newSyncer(requestsCh <-chan *syncRequest, interval time.Duration) *syncer {
func newSyncer(l Logger, requestsCh <-chan *syncRequest, interval time.Duration) *syncer {
return &syncer{
logger: l,
requestsCh: requestsCh,
done: make(chan struct{}),
interval: interval,
@@ -35,13 +38,15 @@ func newSyncer(requestsCh <-chan *syncRequest, interval time.Duration) *syncer {
}
func (s *syncer) terminate() {
log.Println("[INFO] Syncer shutting down...")
s.logger.Info("Syncer shutting down...")
// Signal the syncer goroutine to stop.
s.done <- struct{}{}
}
func (s *syncer) start() {
func (s *syncer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
var requests []*syncRequest
for {
select {
@@ -49,10 +54,10 @@ func (s *syncer) start() {
// Try sync one last time before shutting down.
for _, req := range requests {
if err := req.fn(); err != nil {
log.Printf("[ERROR] %s\n", req.errMsg)
s.logger.Error(req.errMsg)
}
}
log.Println("[INFO] Syncer done.")
s.logger.Info("Syncer done")
return
case req := <-s.requestsCh:
requests = append(requests, req)

syncer_test.go

@@ -5,10 +5,11 @@
package asynq
import (
"fmt"
"sync"
"testing"
"time"
"github.com/go-redis/redis/v7"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
@@ -26,8 +27,9 @@ func TestSyncer(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
syncer.start()
syncer := newSyncer(testLogger, syncRequestCh, interval)
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
for _, msg := range inProgress {
@@ -48,52 +50,43 @@ func TestSyncer(t *testing.T) {
}
func TestSyncerRetry(t *testing.T) {
inProgress := []*base.TaskMessage{
h.NewTaskMessage("send_email", nil),
h.NewTaskMessage("reindex", nil),
h.NewTaskMessage("gen_thumbnail", nil),
}
goodClient := setup(t)
h.SeedInProgressQueue(t, goodClient, inProgress)
// Simulate the situation where redis server is down
// by connecting to a wrong port.
badClient := redis.NewClient(&redis.Options{
Addr: "localhost:6390",
})
rdbClient := rdb.NewRDB(badClient)
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncRequestCh, interval)
syncer.start()
syncer := newSyncer(testLogger, syncRequestCh, interval)
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
for _, msg := range inProgress {
m := msg
var (
mu sync.Mutex
counter int
)
// Increment the counter for each call.
// Initial call will fail and second call will succeed.
requestFunc := func() error {
mu.Lock()
defer mu.Unlock()
if counter == 0 {
counter++
return fmt.Errorf("zero")
}
counter++
return nil
}
syncRequestCh <- &syncRequest{
fn: func() error {
return rdbClient.Done(m)
},
}
fn: requestFunc,
errMsg: "error",
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
// allow syncer to retry
time.Sleep(3 * interval)
// Sanity check to ensure that message was not successfully deleted
// from in-progress list.
gotInProgress := h.GetInProgressMessages(t, goodClient)
if l := len(gotInProgress); l != len(inProgress) {
t.Errorf("%q has length %d; want %d", base.InProgressQueue, l, len(inProgress))
}
// simulate failover.
rdbClient = rdb.NewRDB(goodClient)
time.Sleep(2 * interval) // ensure that syncer runs at least once
gotInProgress = h.GetInProgressMessages(t, goodClient)
if l := len(gotInProgress); l != 0 {
t.Errorf("%q has length %d; want 0", base.InProgressQueue, l)
mu.Lock()
if counter != 2 {
t.Errorf("counter = %d, want 2", counter)
}
mu.Unlock()
}

tools/asynqmon/README.md

@@ -1,6 +1,6 @@
# Asynqmon
Asynqmon is a CLI tool to monitor the queues managed by `asynq` package.
Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
## Table of Contents
@@ -8,10 +8,12 @@ Asynqmon is a CLI tool to monitor the queues managed by `asynq` package.
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Process Status](#process-status)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Config File](#config-file)
## Installation
@@ -22,9 +24,9 @@ In order to use the tool, compile it using the following command:
This will create the asynqmon executable under your `$GOPATH/bin` directory.
## Quick Start
## Quickstart
Asynqmon tool has a few commands to inspect the state of tasks and queues.
The tool has a few commands to inspect the state of tasks and queues.
Run `asynqmon help` to see all the available commands.
@@ -34,7 +36,7 @@ By default, Asynqmon will try to connect to a redis server running at `localhost
### Stats
Stats command gives the overview of the current state of tasks and queues. Run it in conjunction with `watch` command to repeatedly run `stats`.
Stats command gives the overview of the current state of tasks and queues. You can run it in conjunction with `watch` command to repeatedly run `stats`.
Example:
@@ -46,35 +48,115 @@ This will run `asynqmon stats` command every 3 seconds.
### History
TODO: Add description
History command shows the number of processed and failed tasks from the last x days.
By default, it shows the stats from the last 10 days. Use `--days` to specify the number of days.
Example:
asynqmon history --days=30
![Gif](/docs/assets/asynqmon_history.gif)
### Process Status
PS (ProcessStatus) command shows the list of running worker processes.
Example:
asynqmon ps
![Gif](/docs/assets/asynqmon_ps.gif)
### List
TODO: Add description
List command shows all tasks in the specified state in a table format.
Example:
asynqmon ls retry
asynqmon ls scheduled
asynqmon ls dead
asynqmon ls enqueued:default
asynqmon ls inprogress
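Output is paginated (see the `ls.go` change below). Use `--size` to set the page size (default 30) and `--page` to select a zero-indexed page.

Example:

asynqmon ls dead --size=50 --page=2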
### Enqueue
TODO: Add description
There are two commands to enqueue tasks.
Command `enq` takes a task ID and moves the task to **Enqueued** state. You can obtain the task ID by running the `ls` command.
Example:
asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example:
asynqmon enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state.
### Delete
TODO: Add description
There are two commands for task deletion.
Command `del` takes a task ID and deletes the task. You can obtain the task ID by running the `ls` command.
Example:
asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynqmon delall retry
Running the above command will delete all **Retry** tasks.
### Kill
TODO: Add description
There are two commands to kill (i.e. move to dead state) tasks.
Command `kill` takes a task ID and kills the task. You can obtain the task ID by running the `ls` command.
Example:
asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynqmon killall retry
Running the above command will move all **Retry** tasks to **Dead** state.
### Cancel
Command `cancel` takes a task ID and sends a cancelation signal to the goroutine processing the specified task.
You can obtain the task ID by running the `ls` command.
The task should be in "in-progress" state.
Handler implementation needs to be context aware in order to actually stop processing.
Example:
asynqmon cancel bnogo8gt6toe23vhef0g
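Cancelation is delivered through the handler's context, so the handler has to select on `ctx.Done()`. A minimal sketch of such a handler (the task type and loop body are illustrative):

```go
import (
	"context"

	"github.com/hibiken/asynq"
)

// handleCSVExport processes work in chunks so it can observe cancelation.
func handleCSVExport(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 1000; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // canceled via `asynqmon cancel` (or deadline passed)
		default:
			// do the next chunk of work here
		}
	}
	return nil
}
```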
## Config File
You can use a config file to set default values for flags.
You can use a config file to set default values for the flags.
This is useful, for example, when you have to connect to a remote redis server.
By default, `asynqmon` will try to read config file located in
`$HOME/.asynqmon.(yml|json)`. You can specify the file location via `--config` flag.
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag.
Config file example:
```yml
```yaml
uri: 127.0.0.1:6379
db: 2
password: mypassword

View File

@@ -0,0 +1,53 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// cancelCmd represents the cancel command
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task can be obtained by running the "asynqmon ls" command.
Handler implementation needs to be context aware for the cancelation signal to
actually cancel the processing.
Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}
func init() {
rootCmd.AddCommand(cancelCmd)
}
func cancel(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
err := r.PublishCancelation(args[0])
if err != nil {
fmt.Printf("could not send cancelation signal: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully sent cancelation signal for task %s\n", args[0])
}

tools/asynqmon/cmd/delall.go

@@ -19,7 +19,7 @@ var delallValidArgs = []string{"scheduled", "retry", "dead"}
// delallCmd represents the delall command
var delallCmd = &cobra.Command{
Use: "delall [state]",
Short: "Deletes all tasks from the specified state",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".

tools/asynqmon/cmd/killall.go

@@ -19,7 +19,7 @@ var killallValidArgs = []string{"scheduled", "retry"}
// killallCmd represents the killall command
var killallCmd = &cobra.Command{
Use: "killall [state]",
Short: "Update all tasks to dead state from the specified state",
Short: "Kills all tasks in the specified state",
Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state.
The argument should be either "scheduled" or "retry".

tools/asynqmon/cmd/ls.go

@@ -10,7 +10,6 @@ import (
"os"
"strconv"
"strings"
"text/tabwriter"
"time"
"github.com/go-redis/redis/v7"
@@ -35,29 +34,34 @@ The argument value should be one of "enqueued", "inprogress", "scheduled",
Example:
asynqmon ls dead -> Lists all tasks in dead state
Enqueued tasks can optionally be filtered by providing queue names after ":"
Enqueued tasks require a queue name after ":"
Example:
asynqmon ls enqueued:critical -> List tasks from critical queue only
asynqmon ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue
`,
Args: cobra.ExactValidArgs(1),
Run: ls,
}
// Flags
var pageSize int
var pageNum int
func init() {
rootCmd.AddCommand(lsCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// lsCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// lsCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
lsCmd.Flags().IntVar(&pageSize, "size", 30, "page size")
lsCmd.Flags().IntVar(&pageNum, "page", 0, "page number - zero indexed (default 0)")
}
func ls(cmd *cobra.Command, args []string) {
if pageSize < 0 {
fmt.Println("page size cannot be negative.")
os.Exit(1)
}
if pageNum < 0 {
fmt.Println("page number cannot be negative.")
os.Exit(1)
}
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
@@ -67,7 +71,11 @@ func ls(cmd *cobra.Command, args []string) {
parts := strings.Split(args[0], ":")
switch parts[0] {
case "enqueued":
listEnqueued(r, parts[1:]...)
if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
os.Exit(1)
}
listEnqueued(r, parts[1])
case "inprogress":
listInProgress(r)
case "scheduled":
@@ -77,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
case "dead":
listDead(r)
default:
fmt.Printf("error: `asynqmon ls [state]` only accepts %v as the argument.\n", lsValidArgs)
fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1)
}
}
@@ -113,24 +121,14 @@ func parseQueryID(queryID string) (id xid.ID, score int64, qtype string, err err
return id, score, qtype, nil
}
func listEnqueued(r *rdb.RDB, qnames ...string) {
tasks, err := r.ListEnqueued(qnames...)
func listEnqueued(r *rdb.RDB, qname string) {
tasks, err := r.ListEnqueued(qname, rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
msg := "No enqueued tasks"
if len(qnames) > 0 {
msg += " in"
for i, q := range qnames {
msg += fmt.Sprintf(" %q queue", q)
if i != len(qnames)-1 {
msg += ","
}
}
}
fmt.Println(msg)
fmt.Printf("No enqueued tasks in %q queue\n", qname)
return
}
cols := []string{"ID", "Type", "Payload", "Queue"}
@@ -140,10 +138,11 @@ func listEnqueued(r *rdb.RDB, qnames ...string) {
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listInProgress(r *rdb.RDB) {
tasks, err := r.ListInProgress()
tasks, err := r.ListInProgress(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -159,10 +158,11 @@ func listInProgress(r *rdb.RDB) {
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listScheduled(r *rdb.RDB) {
tasks, err := r.ListScheduled()
tasks, err := r.ListScheduled(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -179,10 +179,11 @@ func listScheduled(r *rdb.RDB) {
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listRetry(r *rdb.RDB) {
tasks, err := r.ListRetry()
tasks, err := r.ListRetry(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -191,18 +192,24 @@ func listRetry(r *rdb.RDB) {
fmt.Println("No retry tasks")
return
}
cols := []string{"ID", "Type", "Payload", "Retry In", "Last Error", "Retried", "Max Retry", "Queue"}
cols := []string{"ID", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry", "Queue"}
printRows := func(w io.Writer, tmpl string) {
for _, t := range tasks {
retryIn := fmt.Sprintf("%.0f seconds", t.ProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "r"), t.Type, t.Payload, retryIn, t.ErrorMsg, t.Retried, t.Retry, t.Queue)
var nextRetry string
if d := t.ProcessAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, queryID(t.ID, t.Score, "r"), t.Type, t.Payload, nextRetry, t.ErrorMsg, t.Retried, t.Retry, t.Queue)
}
}
printTable(cols, printRows)
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listDead(r *rdb.RDB) {
tasks, err := r.ListDead()
tasks, err := r.ListDead(rdb.Pagination{Size: pageSize, Page: pageNum})
if err != nil {
fmt.Println(err)
os.Exit(1)
@@ -218,19 +225,5 @@ func listDead(r *rdb.RDB) {
}
}
printTable(cols, printRows)
}
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}

tools/asynqmon/cmd/ps.go

@@ -0,0 +1,118 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// psCmd represents the ps command
var psCmd = &cobra.Command{
Use: "ps",
Short: "Shows all background worker processes",
Long: `Ps (asynqmon ps) will show all background worker processes
backed by the specified redis instance.
The command shows the following for each process:
* Host and PID of the process
* Number of active workers out of worker pool
* Queue configuration
* State of the worker process ("running" | "stopped")
* Time the process was started
A "running" process is processing tasks in queues.
A "stopped" process is no longer processing new tasks.`,
Args: cobra.NoArgs,
Run: ps,
}
func init() {
rootCmd.AddCommand(psCmd)
}
func ps(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
processes, err := r.ListProcesses()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(processes) == 0 {
fmt.Println("No processes")
return
}
// sort by hostname and pid
sort.Slice(processes, func(i, j int) bool {
x, y := processes[i], processes[j]
if x.Host != y.Host {
return x.Host < y.Host
}
return x.PID < y.PID
})
// print processes
cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
printRows := func(w io.Writer, tmpl string) {
for _, ps := range processes {
fmt.Fprintf(w, tmpl,
ps.Host, ps.PID, ps.Status,
fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency),
formatQueues(ps.Queues), timeAgo(ps.Started))
}
}
printTable(cols, printRows)
}
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}
func formatQueues(qmap map[string]int) string {
// sort queues by priority and name
type queue struct {
name string
priority int
}
var queues []*queue
for qname, p := range qmap {
queues = append(queues, &queue{qname, p})
}
sort.Slice(queues, func(i, j int) bool {
x, y := queues[i], queues[j]
if x.priority != y.priority {
return x.priority > y.priority
}
return x.name < y.name
})
var b strings.Builder
l := len(queues)
for _, q := range queues {
fmt.Fprintf(&b, "%s:%d", q.name, q.priority)
l--
if l > 0 {
b.WriteString(" ")
}
}
return b.String()
}

tools/asynqmon/cmd/root.go

@@ -6,7 +6,10 @@ package cmd
import (
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
"github.com/spf13/cobra"
@@ -25,14 +28,7 @@ var password string
var rootCmd = &cobra.Command{
Use: "asynqmon",
Short: "A monitoring tool for asynq queues",
Long: `Asynqmon is a CLI tool to inspect tasks and queues managed by asynq package.
Use commands to query and mutate the current state of tasks and queues.
Monitoring commands such as "stats" and "ls" can be used in conjunction with the
"watch" command to continuously run the command at a certain interval.
Example: watch -n 5 asynqmon stats`,
Long: `Asynqmon is a monitoring CLI to inspect tasks and queues managed by asynq.`,
}
// Execute adds all child commands to the root command and sets flags appropriately.
@@ -81,3 +77,36 @@ func initConfig() {
fmt.Println("Using config file:", viper.ConfigFileUsed())
}
}
// printTable is a helper function to print data in table format.
//
// cols is a list of headers and printRows specifies how to print rows.
//
// Example:
// type User struct {
// Name string
// Addr string
// Age int
// }
// data := []*User{{"user1", "addr1", 24}, {"user2", "addr2", 42}, ...}
// cols := []string{"Name", "Addr", "Age"}
// printRows := func(w io.Writer, tmpl string) {
// for _, u := range data {
// fmt.Fprintf(w, tmpl, u.Name, u.Addr, u.Age)
// }
// }
// printTable(cols, printRows)
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
format := strings.Repeat("%v\t", len(cols)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
var headers []interface{}
var seps []interface{}
for _, name := range cols {
headers = append(headers, name)
seps = append(seps, strings.Repeat("-", len(name)))
}
fmt.Fprintf(tw, format, headers...)
fmt.Fprintf(tw, format, seps...)
printRows(tw, format)
tw.Flush()
}

tools/asynqmon/cmd/workers.go

@@ -0,0 +1,75 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// workersCmd represents the workers command
var workersCmd = &cobra.Command{
Use: "workers",
Short: "Shows information on all running workers",
Long: `Workers (asynqmon workers) will show information on all running workers.
The command shows the following for each worker:
* Process in which the worker is running
* ID of the task the worker is processing
* Type of the task the worker is processing
* Payload of the task the worker is processing
* Queue that the task was pulled from
* Time the worker started processing the task`,
Args: cobra.NoArgs,
Run: workers,
}
func init() {
rootCmd.AddCommand(workersCmd)
}
func workers(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
workers, err := r.ListWorkers()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(workers) == 0 {
fmt.Println("No workers")
return
}
// sort by start time; break ties by ID.
sort.Slice(workers, func(i, j int) bool {
x, y := workers[i], workers[j]
if x.Started != y.Started {
return x.Started.Before(y.Started)
}
return x.ID.String() < y.ID.String()
})
cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}
printRows := func(w io.Writer, tmpl string) {
for _, wk := range workers {
fmt.Fprintf(w, tmpl,
fmt.Sprintf("%s:%d", wk.Host, wk.PID), wk.ID, wk.Type, wk.Payload, wk.Queue, timeAgo(wk.Started))
}
}
printTable(cols, printRows)
}

tools/go.mod

@@ -0,0 +1,14 @@
module github.com/hibiken/asynq/tools
go 1.13
require (
github.com/go-redis/redis/v7 v7.2.0
github.com/hibiken/asynq v0.4.0
github.com/mitchellh/go-homedir v1.1.0
github.com/rs/xid v1.2.1
github.com/spf13/cobra v0.0.5
github.com/spf13/viper v1.6.2
)
replace github.com/hibiken/asynq => ./..

tools/go.sum

@@ -0,0 +1,195 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis v6.15.7+incompatible h1:3skhDh95XQMpnqeqNftPkQD9jL9e5e36z/1SUm6dy1U=
github.com/go-redis/redis/v7 v7.0.0-beta.4/go.mod h1:xhhSbUMTsleRPur+Vgx9sUHtyN33bdjxY+9/0n9Ig8s=
github.com/go-redis/redis/v7 v7.1.0 h1:I4C4a8UGbFejiVjtYVTRVOiMIJ5pm5Yru6ibvDX/OS0=
github.com/go-redis/redis/v7 v7.1.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hibiken/asynq v0.4.0 h1:NvAfYX0DRe04WgGMKRg5oX7bs6ktv2fu9YwB6O356FI=
github.com/hibiken/asynq v0.4.0/go.mod h1:dtrVkxCsGPVhVNHMDXAH7lFq64kbj43+G6lt4FQZfW4=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.6.0/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=