mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 09:16:12 +08:00

Compare commits


91 Commits

Author SHA1 Message Date
Ken Hibino
684a7e0c98 v0.18.2 2021-07-15 06:56:53 -07:00
Ken Hibino
46b23d6495 Allow upper case characters in queue name 2021-07-15 06:55:47 -07:00
Ken Hibino
c0ae62499f v0.18.1 2021-07-04 06:39:54 -07:00
Ken Hibino
7744ade362 Update changelog 2021-07-04 06:38:36 -07:00
Ken Hibino
f532c95394 Update recoverer to recover tasks on server startup 2021-07-04 06:38:36 -07:00
Ken Hibino
ff6768f9bb Fix recoverer to run task recovering logic every minute 2021-07-04 06:38:36 -07:00
Ken Hibino
d5e9f3b1bd Update readme 2021-06-30 06:26:14 -07:00
Ken Hibino
d02b722d8a v0.18.0 2021-06-29 16:36:52 -07:00
Ken Hibino
99c7ebeef2 Add migration command in CLI 2021-06-29 16:34:21 -07:00
Ken Hibino
bf54621196 Update example code in README 2021-06-29 16:34:21 -07:00
Ken Hibino
27baf6de0d Fix error in readme 2021-06-29 16:34:21 -07:00
Ken Hibino
1bd0bee1e5 Fix CLI build 2021-06-29 16:34:21 -07:00
Ken Hibino
a9feec5967 Change TaskInfo to use public fields instead of methods 2021-06-29 16:34:21 -07:00
Ken Hibino
e01c6379c8 Fix lua script for redis-cluster mode 2021-06-29 16:34:21 -07:00
Ken Hibino
a0df047f71 Use md5 to generate checksum for unique key 2021-06-29 16:34:21 -07:00
Ken Hibino
68dd6d9a9d (fix): Clear unique lock when task is deleted via Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
6cce31a134 Fix recoverer test 2021-06-29 16:34:21 -07:00
Ken Hibino
f9d7af3def Update ProcessorRetry test 2021-06-29 16:34:21 -07:00
Ken Hibino
b0321fb465 Format payload bytes in CLI output 2021-06-29 16:34:21 -07:00
Ken Hibino
7776c7ae53 Rename cli subcommand to not to use dash 2021-06-29 16:34:21 -07:00
Ken Hibino
709ca79a2b Add task inspect command 2021-06-29 16:34:21 -07:00
Ken Hibino
08d8f0b37c Add String method to TaskState 2021-06-29 16:34:21 -07:00
Ken Hibino
385323b679 Minor fix in queue command 2021-06-29 16:34:21 -07:00
Ken Hibino
77604af265 Fix asynq CLI build 2021-06-29 16:34:21 -07:00
Ken Hibino
4765742e8a Add Inspector.GetTaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
68839dc9d3 Fix lua scripts for redis cluster 2021-06-29 16:34:21 -07:00
Ken Hibino
8922d2423a Define RDB.GetTaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
b358de907e Rename Inspector.CurrentStats to GetQueueInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
8ee1825e67 Rename Inspector.CancelActiveTask to CancelProcessing 2021-06-29 16:34:21 -07:00
Ken Hibino
c8bda26bed Make NodeCluster fields read-only 2021-06-29 16:34:21 -07:00
Ken Hibino
8aeeb61c9d Misc cleanup 2021-06-29 16:34:21 -07:00
Ken Hibino
96c51fdc23 Update WorkerInfo and remove unnecessary types 2021-06-29 16:34:21 -07:00
Ken Hibino
ea9086fd8b Update Inspector.List*Task methods to return ErrQueueNotFound 2021-06-29 16:34:21 -07:00
Ken Hibino
e63d51da0c Update Inspector.ListArchivedTasks 2021-06-29 16:34:21 -07:00
Ken Hibino
cd351d49b9 Add LastFailedAt to TaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
87264b66f3 Record last_failed_at time on Retry or Archive event 2021-06-29 16:34:21 -07:00
Ken Hibino
62168b8d0d Add LastFailedAt field to TaskMessage 2021-06-29 16:34:21 -07:00
Ken Hibino
840f7245b1 Update List methods (expect for ListArchived) 2021-06-29 16:34:21 -07:00
Ken Hibino
12f4c7cf6e Move inspeq package content to asynq package 2021-06-29 16:34:21 -07:00
Ken Hibino
0ec3b55e6b Replace ArchiveTaskByKey with ArchiveTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
4bcc5ab6aa Replace DeleteTaskByKey with DeleteTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
456edb6b71 Replace RunTaskByKey with RunTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
b835090ad8 Update Client.Enqueue to return TaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
09cbea66f6 Define TaskInfo type 2021-06-29 16:34:21 -07:00
Ken Hibino
b9c2572203 Refactor redis keys and store messages in protobuf
Changes:
- Task messages are stored under "asynq:{<qname>}:t:<task_id>" key in redis, value is a HASH type and message are stored under "msg" key in the hash. The hash also stores "deadline", "timeout".
- Redis LIST and ZSET stores task message IDs
- Task messages are serialized using protocol buffer
2021-06-29 16:34:21 -07:00
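The key layout described in this commit can be sketched with small helper functions (the function names below are illustrative; in asynq the real helpers live in the internal base package, and the full set of keys is larger):

```go
package main

import "fmt"

// taskKey builds the Redis key of the HASH that stores one task message,
// following the "asynq:{<qname>}:t:<task_id>" layout described above.
// The braces around the queue name act as a Redis Cluster hash tag, so every
// key belonging to one queue maps to the same cluster slot.
func taskKey(qname, id string) string {
    return fmt.Sprintf("asynq:{%s}:t:%s", qname, id)
}

// pendingKey builds the key of the LIST that holds IDs of pending tasks;
// per the commit message, the LIST/ZSET structures store only task IDs,
// while the message bodies live in per-task hashes.
func pendingKey(qname string) string {
    return fmt.Sprintf("asynq:{%s}:pending", qname)
}

func main() {
    fmt.Println(taskKey("default", "abc123")) // asynq:{default}:t:abc123
    fmt.Println(pendingKey("default"))        // asynq:{default}:pending
}
```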
Ken Hibino
0bf767cf21 Add TaskState type to base package 2021-06-29 16:34:21 -07:00
Ken Hibino
1812d05d21 Fix build 2021-06-29 16:34:21 -07:00
Ken Hibino
4af65d5fa5 Update RDB methods with new errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
a19ad19382 Update RDB.Dequeue with new errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
8117ce8972 Minor fixes 2021-06-29 16:34:21 -07:00
Ken Hibino
d98ecdebb4 Update RDB.EnqueueUnique and RDB.ScheduleUnique with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
ffe9aa74b3 Add errors.RedisCommandError type 2021-06-29 16:34:21 -07:00
Ken Hibino
d2d4029aba Update RDB.CurrentStats and RDB.HistoricalStats with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
76bd865ebc Update RDB.RemoveQueue with specific error types 2021-06-29 16:34:21 -07:00
Ken Hibino
136d1c9ea9 Update rdb.List* methods with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
52e04355d3 Return QueueNotFoundError from DeleteAll* methods 2021-06-29 16:34:21 -07:00
Ken Hibino
cde3e57c6c Update RDB.RunAll* methods with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
dd66acef1b Return QueueNotFoundError from ArchiveAll* methods 2021-06-29 16:34:21 -07:00
Ken Hibino
30a3d9641a Update tests for RDB.DeleteTask and RDB.ArchiveTask 2021-06-29 16:34:21 -07:00
Ken Hibino
961582cba6 Update RDB.RunTask with more specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
430dbb298e Update RDB.DeleteTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
675826be5f Update RDB.ArchiveAll methods with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
62f4e46b73 Update RDB.ArchiveAllPendingTasks with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
a500f8a534 Reorganize test for RDB.ArchiveTask 2021-06-29 16:34:21 -07:00
Ken Hibino
bcfeff38ed Update errors package with detailed comments 2021-06-29 16:34:21 -07:00
Ken Hibino
12a90f6a8d Update RDB.ArchiveTask with custom errors 2021-06-29 16:34:21 -07:00
Ken Hibino
807624e7dd Create internal errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
4d65024bd7 Update rdb.ArchiveTask with more specific error types 2021-06-29 16:34:21 -07:00
Ken Hibino
76486b5cb4 Rename error types 2021-06-29 16:34:21 -07:00
Ken Hibino
1db516c53c Add a list of canonical errors in base package 2021-06-29 16:34:21 -07:00
Ken Hibino
cb5bdf245c Update RDB.ArchiveTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
267493ccef Update RDB.RunTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
5d7f1b6a80 Update RDB.Requeue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
77ded502ab Update RDB.Retry, RDB.Archive with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
f2284be43d Update RDB.Dequeue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
3cadab55cb Update RDB.ForwardIfReady with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
298a420f9f Update RDB.ScheduleUnique with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
b1d717c842 Update RDB.Schedule with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
56e5762eea Update RDB.EnqueueUnique with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
5ec41e388b Update RDB.Enqueue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
9c95c41651 Change Server API
* Rename ServerStatus to ServerState internally

* Rename terminate to shutdown internally

* Update Scheduler API to match Server API
2021-06-29 16:34:21 -07:00
Ken Hibino
476812475e Change payload to byte slice 2021-06-29 16:34:21 -07:00
Ken Hibino
7af3981929 Refactor redis keys and store messages in protobuf
Changes:
- Task messages are stored under "asynq:{<qname>}:t:<task_id>" key in redis, value is a HASH type and message are stored under "msg" key in the hash. The hash also stores "deadline", "timeout".
- Redis LIST and ZSET stores task message IDs
- Task messages are serialized using protocol buffer
2021-06-29 16:34:21 -07:00
Ken Hibino
2516c4baba v0.17.2 2021-06-06 06:51:30 -07:00
Ken Hibino
ebe482a65c Free uniqueness lock when task is deleted 2021-06-06 06:48:59 -07:00
Vic Shóstak
3e9fc2f972 Update README 2021-04-28 10:25:34 -07:00
Vic Shóstak
63ce9ed0f9 Update README with a new logo 2021-04-14 10:21:47 -07:00
Ken Hibino
32d3f329b9 v0.17.1 2021-04-04 12:51:00 -07:00
Ken Hibino
544c301a8b Fix bug in RDB.memoryUsage 2021-04-04 12:49:19 -07:00
Ken Hibino
8b997d2fab v0.17.0 2021-03-24 16:51:59 -07:00
Ken Hibino
901105a8d7 Add dial, read, write timeout options to RedisConnOpt 2021-03-24 16:49:04 -07:00
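Several commits above deal with uniqueness locks, including a0df047f71 ("Use md5 to generate checksum for unique key"). A minimal sketch of that idea, hashing the task's type and payload into a fixed-length key, could look like this (the key format and hash inputs are assumptions for illustration, not the library's exact scheme):

```go
package main

import (
    "crypto/md5"
    "fmt"
)

// uniqueKey derives a fixed-length Redis key for a task's uniqueness lock by
// md5-hashing the task type together with its payload. Using a checksum keeps
// the key length bounded regardless of payload size.
func uniqueKey(qname, tasktype string, payload []byte) string {
    sum := md5.Sum(append([]byte(tasktype+"|"), payload...))
    return fmt.Sprintf("asynq:{%s}:unique:%x", qname, sum)
}

func main() {
    fmt.Println(uniqueKey("default", "email:deliver", []byte(`{"user_id":42}`)))
}
```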
61 changed files with 7461 additions and 3813 deletions

.gitignore (vendored), 5 lines changed

@@ -18,4 +18,7 @@
 /tools/asynq/asynq
 
 # Ignore asynq config file
 .asynq.*
+
+# Ignore editor config files
+.vscode

CHANGELOG.md

@@ -7,6 +7,57 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]
+
+## [0.18.2] - 2021-07-15
+### Changed
+- Changed `Queue` function to not convert the provided queue name to lowercase. Queue names are now case-sensitive.
+
+## [0.18.1] - 2021-07-04
+### Changed
+- Changed to execute task recovering logic when the server starts up; previously it needed to wait for a minute for the task recovering logic to execute.
+### Fixed
+- Fixed task recovering logic to execute every minute.
+
+## [0.18.0] - 2021-06-29
+### Changed
+- `NewTask` function now takes an array of bytes as payload.
+- Task `Type` and `Payload` should be accessed by a method call.
+- `Server` API has changed. Renamed `Quiet` to `Stop`. Renamed `Stop` to `Shutdown`. _Note:_ As a result of this renaming, the behavior of `Stop` has changed. Please update existing code to call `Shutdown` where it used to call `Stop`.
+- `Scheduler` API has changed. Renamed `Stop` to `Shutdown`.
+- Requires redis v4.0+ for multiple field/value pair support.
+- `Client.Enqueue` now returns `TaskInfo`.
+- `Inspector.RunTaskByKey` is replaced with `Inspector.RunTask`.
+- `Inspector.DeleteTaskByKey` is replaced with `Inspector.DeleteTask`.
+- `Inspector.ArchiveTaskByKey` is replaced with `Inspector.ArchiveTask`.
+- `inspeq` package is removed. All types and functions from the package are moved to the `asynq` package.
+- `WorkerInfo` field names have changed.
+- `Inspector.CancelActiveTask` is renamed to `Inspector.CancelProcessing`.
+
+## [0.17.2] - 2021-06-06
+### Fixed
+- Free unique lock when task is deleted (https://github.com/hibiken/asynq/issues/275).
+
+## [0.17.1] - 2021-04-04
+### Fixed
+- Fix bug in internal `RDB.memoryUsage` method.
+
+## [0.17.0] - 2021-03-24
+### Added
+- `DialTimeout`, `ReadTimeout`, and `WriteTimeout` options are added to `RedisConnOpt`.
+
 ## [0.16.1] - 2021-03-20
 ### Fixed

Makefile (new file), 7 lines

@@ -0,0 +1,7 @@
+ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
+
+proto: internal/proto/asynq.proto
+	protoc -I=$(ROOT_DIR)/internal/proto \
+		--go_out=$(ROOT_DIR)/internal/proto \
+		--go_opt=module=github.com/hibiken/asynq/internal/proto \
+		$(ROOT_DIR)/internal/proto/asynq.proto

README.md, 220 lines changed

@@ -1,37 +1,31 @@
-# Asynq
+<img src="https://user-images.githubusercontent.com/11155743/114697792-ffbfa580-9d26-11eb-8e5b-33bef69476dc.png" alt="Asynq logo" width="360px" />
+
+# Simple, reliable & efficient distributed task queue in Go
 
-![Build Status](https://github.com/hibiken/asynq/workflows/build/badge.svg)
 [![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)
 [![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
+![Build Status](https://github.com/hibiken/asynq/workflows/build/badge.svg)
 [![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
 [![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
 
-## Overview
-
-Asynq is a Go library for queueing tasks and processing them asynchronously with workers. It's backed by Redis and is designed to be scalable yet easy to get started.
+Asynq is a Go library for queueing tasks and processing them asynchronously with workers. It's backed by [Redis](https://redis.io/) and is designed to be scalable yet easy to get started.
 
 Highlevel overview of how Asynq works:
 
-- Client puts task on a queue
-- Server pulls task off queues and starts a worker goroutine for each task
+- Client puts tasks on a queue
+- Server pulls tasks off queues and starts a worker goroutine for each task
 - Tasks are processed concurrently by multiple workers
 
-Task queues are used as a mechanism to distribute work across multiple machines.
-A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
+Task queues are used as a mechanism to distribute work across multiple machines. A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
 
-![Task Queue Diagram](/docs/assets/overview.png)
+**Example use case**
 
-## Stability and Compatibility
-
-**Important Note**: Current major version is zero (v0.x.x) to accomodate rapid development and fast iteration while getting early feedback from users (Feedback on APIs are appreciated!). The public API could change without a major version update before v1.0.0 release.
-
-**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
+![Task Queue Diagram](https://user-images.githubusercontent.com/11155743/116358505-656f5f80-a806-11eb-9c16-94e49dab0f99.jpg)
 
 ## Features
 
 - Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
 - Scheduling of tasks
+- Durability since tasks are written to Redis
 - [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
 - Automatic recovery of tasks in the event of a worker crash
 - [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)

@@ -47,14 +41,24 @@ A system can consist of multiple worker servers and brokers, giving way to high
 - [Web UI](#web-ui) to inspect and remote-control queues and tasks
 - [CLI](#command-line-tool) to inspect and remote-control queues and tasks
 
+## Stability and Compatibility
+
+**Status**: The library is currently undergoing **heavy development** with frequent, breaking API changes.
+
+> ☝️ **Important Note**: Current major version is zero (`v0.x.x`) to accommodate rapid development and fast iteration while getting early feedback from users (_feedback on APIs is appreciated!_). The public API could change without a major version update before `v1.0.0` release.
+
 ## Quickstart
 
-First, make sure you are running a Redis server locally.
+Make sure you have Go installed ([download](https://golang.org/dl/)). Version `1.13` or higher is required.
+
+Initialize your project by creating a folder and then running `go mod init github.com/your/repo` ([learn more](https://blog.golang.org/using-go-modules)) inside the folder. Then install Asynq library with the [`go get`](https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them) command:
 
 ```sh
-$ redis-server
+go get -u github.com/hibiken/asynq
 ```
 
+Make sure you're running a Redis server locally or from a [Docker](https://hub.docker.com/_/redis) container. Version `4.0` or higher is required.
+
 Next, write a package that encapsulates task creation and task handling.
 
 ```go

@@ -72,19 +76,34 @@ const (
     TypeImageResize = "image:resize"
 )
 
+type EmailDeliveryPayload struct {
+    UserID     int
+    TemplateID string
+}
+
+type ImageResizePayload struct {
+    SourceURL string
+}
+
 //----------------------------------------------
 // Write a function NewXXXTask to create a task.
 // A task consists of a type and a payload.
 //----------------------------------------------
 
-func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
-    payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
-    return asynq.NewTask(TypeEmailDelivery, payload)
+func NewEmailDeliveryTask(userID int, tmplID string) (*asynq.Task, error) {
+    payload, err := json.Marshal(EmailDeliveryPayload{UserID: userID, TemplateID: tmplID})
+    if err != nil {
+        return nil, err
+    }
+    return asynq.NewTask(TypeEmailDelivery, payload), nil
 }
 
-func NewImageResizeTask(src string) *asynq.Task {
-    payload := map[string]interface{}{"src": src}
-    return asynq.NewTask(TypeImageResize, payload)
+func NewImageResizeTask(src string) (*asynq.Task, error) {
+    payload, err := json.Marshal(ImageResizePayload{SourceURL: src})
+    if err != nil {
+        return nil, err
+    }
+    return asynq.NewTask(TypeImageResize, payload), nil
 }
 
 //---------------------------------------------------------------

@@ -96,15 +115,11 @@ func NewImageResizeTask(src string) *asynq.Task {
 //---------------------------------------------------------------
 
 func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
-    userID, err := t.Payload.GetInt("user_id")
-    if err != nil {
-        return err
-    }
-    tmplID, err := t.Payload.GetString("template_id")
-    if err != nil {
-        return err
-    }
-    fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
+    var p EmailDeliveryPayload
+    if err := json.Unmarshal(t.Payload(), &p); err != nil {
+        return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
+    }
+    log.Printf("Sending Email to User: user_id=%d, template_id=%s", p.UserID, p.TemplateID)
     // Email delivery code ...
     return nil
 }

@@ -115,11 +130,11 @@ type ImageProcessor struct {
 }
 
 func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
-    src, err := t.Payload.GetString("src")
-    if err != nil {
-        return err
-    }
-    fmt.Printf("Resize image: src = %s\n", src)
+    var p ImageResizePayload
+    if err := json.Unmarshal(t.Payload(), &p); err != nil {
+        return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
+    }
+    log.Printf("Resizing image: src=%s", p.SourceURL)
     // Image resizing code ...
     return nil
 }

@@ -129,13 +144,12 @@ func NewImageProcessor() *ImageProcessor {
 }
 ```
 
-In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
+In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on queues.
 
 ```go
 package main
 
 import (
-    "fmt"
     "log"
     "time"
 
@@ -146,21 +160,23 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-    r := asynq.RedisClientOpt{Addr: redisAddr}
-    c := asynq.NewClient(r)
-    defer c.Close()
+    client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
+    defer client.Close()
 
     // ------------------------------------------------------
     // Example 1: Enqueue task to be processed immediately.
     //            Use (*Client).Enqueue method.
     // ------------------------------------------------------
 
-    t := tasks.NewEmailDeliveryTask(42, "some:template:id")
-    res, err := c.Enqueue(t)
+    task, err := tasks.NewEmailDeliveryTask(42, "some:template:id")
     if err != nil {
-        log.Fatal("could not enqueue task: %v", err)
+        log.Fatalf("could not create task: %v", err)
     }
-    fmt.Printf("Enqueued Result: %+v\n", res)
+    info, err := client.Enqueue(task)
+    if err != nil {
+        log.Fatalf("could not enqueue task: %v", err)
+    }
+    log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
     // ------------------------------------------------------------

@@ -168,12 +184,11 @@ func main() {
     //            Use ProcessIn or ProcessAt option.
     // ------------------------------------------------------------
 
-    t = tasks.NewEmailDeliveryTask(42, "other:template:id")
-    res, err = c.Enqueue(t, asynq.ProcessIn(24*time.Hour))
+    info, err = client.Enqueue(task, asynq.ProcessIn(24*time.Hour))
     if err != nil {
-        log.Fatal("could not schedule task: %v", err)
+        log.Fatalf("could not schedule task: %v", err)
     }
-    fmt.Printf("Enqueued Result: %+v\n", res)
+    log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
     // ----------------------------------------------------------------------------

@@ -181,33 +196,34 @@ func main() {
     //            Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
     // ----------------------------------------------------------------------------
 
-    c.SetDefaultOptions(tasks.TypeImageResize, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))
+    client.SetDefaultOptions(tasks.TypeImageResize, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))
 
-    t = tasks.NewImageResizeTask("some/blobstore/path")
-    res, err = c.Enqueue(t)
+    task, err = tasks.NewImageResizeTask("https://example.com/myassets/image.jpg")
     if err != nil {
-        log.Fatal("could not enqueue task: %v", err)
+        log.Fatalf("could not create task: %v", err)
     }
-    fmt.Printf("Enqueued Result: %+v\n", res)
+    info, err = client.Enqueue(task)
+    if err != nil {
+        log.Fatalf("could not enqueue task: %v", err)
+    }
+    log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
     // ---------------------------------------------------------------------------
     // Example 4: Pass options to tune task processing behavior at enqueue time.
-    //            Options passed at enqueue time override default ones, if any.
+    //            Options passed at enqueue time override default ones.
     // ---------------------------------------------------------------------------
 
-    t = tasks.NewImageResizeTask("some/blobstore/path")
-    res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
+    info, err = client.Enqueue(task, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
     if err != nil {
         log.Fatal("could not enqueue task: %v", err)
     }
-    fmt.Printf("Enqueued Result: %+v\n", res)
+    log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 }
 ```
 
-Next, start a worker server to process these tasks in the background.
-To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
+Next, start a worker server to process these tasks in the background. To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
 
-You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
+You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`net/http`](https://golang.org/pkg/net/http/) Handler.
 
 ```go
 package main

@@ -222,19 +238,20 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-    r := asynq.RedisClientOpt{Addr: redisAddr}
-
-    srv := asynq.NewServer(r, asynq.Config{
-        // Specify how many concurrent workers to use
-        Concurrency: 10,
-        // Optionally specify multiple queues with different priority.
-        Queues: map[string]int{
-            "critical": 6,
-            "default":  3,
-            "low":      1,
-        },
-        // See the godoc for other configuration options
-    })
+    srv := asynq.NewServer(
+        asynq.RedisClientOpt{Addr: redisAddr},
+        asynq.Config{
+            // Specify how many concurrent workers to use
+            Concurrency: 10,
+            // Optionally specify multiple queues with different priority.
+            Queues: map[string]int{
+                "critical": 6,
+                "default":  3,
+                "low":      1,
+            },
+            // See the godoc for other configuration options
+        },
+    )
 
     // mux maps a type to a handler
     mux := asynq.NewServeMux()

@@ -248,65 +265,52 @@ func main() {
 }
 ```
 
-For a more detailed walk-through of the library, see our [Getting Started Guide](https://github.com/hibiken/asynq/wiki/Getting-Started).
+For a more detailed walk-through of the library, see our [Getting Started](https://github.com/hibiken/asynq/wiki/Getting-Started) guide.
 
-To Learn more about `asynq` features and APIs, see our [Wiki](https://github.com/hibiken/asynq/wiki) and [godoc](https://godoc.org/github.com/hibiken/asynq).
+To learn more about `asynq` features and APIs, see the package [godoc](https://godoc.org/github.com/hibiken/asynq).
 
 ## Web UI
 
 [Asynqmon](https://github.com/hibiken/asynqmon) is a web based tool for monitoring and administrating Asynq queues and tasks.
-Please see the tool's [README](https://github.com/hibiken/asynqmon) for details.
 
-Here's a few screenshots of the web UI.
+Here's a few screenshots of the Web UI:
 
 **Queues view**
 
-![Web UI QueuesView](/docs/assets/asynqmon-queues-view.png)
+![Web UI Queues View](https://user-images.githubusercontent.com/11155743/114697016-07327f00-9d26-11eb-808c-0ac841dc888e.png)
 
 **Tasks view**
 
-![Web UI TasksView](/docs/assets/asynqmon-task-view.png)
+![Web UI TasksView](https://user-images.githubusercontent.com/11155743/114697070-1f0a0300-9d26-11eb-855c-d3ec263865b7.png)
 
+**Settings and adaptive dark mode**
+
+![Web UI Settings and adaptive dark mode](https://user-images.githubusercontent.com/11155743/114697149-3517c380-9d26-11eb-9f7a-ae2dd00aad5b.png)
+
+For details on how to use the tool, refer to the tool's [README](https://github.com/hibiken/asynqmon#readme).
+
 ## Command Line Tool
 
 Asynq ships with a command line tool to inspect the state of queues and tasks.
 
-Here's an example of running the `stats` command.
-
-![Gif](/docs/assets/demo.gif)
-
-For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
-
-## Installation
-
-To install `asynq` library, run the following command:
-
-```sh
-go get -u github.com/hibiken/asynq
-```
-
 To install the CLI tool, run the following command:
 
 ```sh
 go get -u github.com/hibiken/asynq/tools/asynq
 ```
 
-## Requirements
+Here's an example of running the `asynq stats` command:
 
-| Dependency                 | Version |
-| -------------------------- | ------- |
-| [Redis](https://redis.io/) | v3.0+   |
-| [Go](https://golang.org/)  | v1.13+  |
+![Gif](/docs/assets/demo.gif)
+
+For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
 
 ## Contributing
 
-We are open to, and grateful for, any contributions (Github issues/pull-requests, feedback on Gitter channel, etc) made by the community.
+We are open to, and grateful for, any contributions (GitHub issues/PRs, feedback on [Gitter channel](https://gitter.im/go-asynq/community), etc) made by the community.
 
 Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
 
-## Acknowledgements
-
-- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
-- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
-- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
-
 ## License
 
-Asynq is released under the MIT license. See [LICENSE](https://github.com/hibiken/asynq/blob/master/LICENSE).
+Copyright (c) 2019-present [Ken Hibino](https://github.com/hibiken) and [Contributors](https://github.com/hibiken/asynq/graphs/contributors). `Asynq` is free and open-source software licensed under the [MIT License](https://github.com/hibiken/asynq/blob/master/LICENSE). Official logo was created by [Vic Shóstak](https://github.com/koddr) and distributed under [Creative Commons](https://creativecommons.org/publicdomain/zero/1.0/) license (CC0 1.0 Universal).

asynq.go, 223 lines changed

@@ -10,29 +10,151 @@ import (
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
) )
// Task represents a unit of work to be performed. // Task represents a unit of work to be performed.
type Task struct { type Task struct {
// Type indicates the type of task to be performed. // typename indicates the type of task to be performed.
Type string typename string
// Payload holds data needed to perform the task. // payload holds data needed to perform the task.
Payload Payload payload []byte
} }
func (t *Task) Type() string { return t.typename }
func (t *Task) Payload() []byte { return t.payload }
// NewTask returns a new Task given a type name and payload data. // NewTask returns a new Task given a type name and payload data.
// func NewTask(typename string, payload []byte) *Task {
// The payload values must be serializable.
func NewTask(typename string, payload map[string]interface{}) *Task {
return &Task{ return &Task{
Type: typename, typename: typename,
Payload: Payload{payload}, payload: payload,
} }
} }
// A TaskInfo describes a task and its metadata.
type TaskInfo struct {
// ID is the identifier of the task.
ID string
// Queue is the name of the queue in which the task belongs.
Queue string
// Type is the type name of the task.
Type string
// Payload is the payload data of the task.
Payload []byte
// State indicates the task state.
State TaskState
// MaxRetry is the maximum number of times the task can be retried.
MaxRetry int
// Retried is the number of times the task has retried so far.
Retried int
// LastErr is the error message from the last failure.
LastErr string
// LastFailedAt is the time of the last failure, if any.
// If the task has no failures, LastFailedAt is zero time (i.e. time.Time{}).
LastFailedAt time.Time
// Timeout is the duration the task can be processed by Handler before being retried,
// zero if not specified.
Timeout time.Duration
// Deadline is the deadline for the task, zero value if not specified.
Deadline time.Time
// NextProcessAt is the time the task is scheduled to be processed,
// zero if not applicable.
NextProcessAt time.Time
}
func newTaskInfo(msg *base.TaskMessage, state base.TaskState, nextProcessAt time.Time) *TaskInfo {
info := TaskInfo{
ID: msg.ID.String(),
Queue: msg.Queue,
Type: msg.Type,
Payload: msg.Payload, // Do we need to make a copy?
MaxRetry: msg.Retry,
Retried: msg.Retried,
LastErr: msg.ErrorMsg,
Timeout: time.Duration(msg.Timeout) * time.Second,
NextProcessAt: nextProcessAt,
}
if msg.LastFailedAt == 0 {
info.LastFailedAt = time.Time{}
} else {
info.LastFailedAt = time.Unix(msg.LastFailedAt, 0)
}
if msg.Deadline == 0 {
info.Deadline = time.Time{}
} else {
info.Deadline = time.Unix(msg.Deadline, 0)
}
switch state {
case base.TaskStateActive:
info.State = TaskStateActive
case base.TaskStatePending:
info.State = TaskStatePending
case base.TaskStateScheduled:
info.State = TaskStateScheduled
case base.TaskStateRetry:
info.State = TaskStateRetry
case base.TaskStateArchived:
info.State = TaskStateArchived
default:
panic(fmt.Sprintf("internal error: unknown state: %d", state))
}
return &info
}
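`newTaskInfo` above maps Unix-seconds fields to `time.Time`, using the convention that a stored value of 0 means "never happened" and becomes the zero `time.Time`. A stdlib-only sketch of that conversion (the helper name `unixOrZero` is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// unixOrZero mirrors the convention in newTaskInfo: a zero Unix-seconds
// value means "not set", represented as the zero time.Time, so callers
// can test it with IsZero().
func unixOrZero(sec int64) time.Time {
	if sec == 0 {
		return time.Time{}
	}
	return time.Unix(sec, 0)
}

func main() {
	fmt.Println(unixOrZero(0).IsZero())          // true
	fmt.Println(unixOrZero(1625000000).IsZero()) // false
}
```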
// TaskState denotes the state of a task.
type TaskState int
const (
// Indicates that the task is currently being processed by Handler.
TaskStateActive TaskState = iota + 1
// Indicates that the task is ready to be processed by Handler.
TaskStatePending
// Indicates that the task is scheduled to be processed some time in the future.
TaskStateScheduled
// Indicates that the task has previously failed and scheduled to be processed some time in the future.
TaskStateRetry
// Indicates that the task is archived and stored for inspection purposes.
TaskStateArchived
)
func (s TaskState) String() string {
switch s {
case TaskStateActive:
return "active"
case TaskStatePending:
return "pending"
case TaskStateScheduled:
return "scheduled"
case TaskStateRetry:
return "retry"
case TaskStateArchived:
return "archived"
}
panic("asynq: unknown task state")
}
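Note that the constants above start at `iota + 1`, so the zero value of `TaskState` falls outside the valid range and an uninitialized state is caught by `String` instead of silently reading as "active". A self-contained sketch of the same enum pattern:

```go
package main

import "fmt"

// TaskState mirrors the enum above: starting at iota + 1 keeps the
// zero value invalid, so a forgotten assignment panics in String
// rather than masquerading as the first state.
type TaskState int

const (
	TaskStateActive TaskState = iota + 1
	TaskStatePending
	TaskStateScheduled
	TaskStateRetry
	TaskStateArchived
)

func (s TaskState) String() string {
	switch s {
	case TaskStateActive:
		return "active"
	case TaskStatePending:
		return "pending"
	case TaskStateScheduled:
		return "scheduled"
	case TaskStateRetry:
		return "retry"
	case TaskStateArchived:
		return "archived"
	}
	panic("unknown task state")
}

func main() {
	fmt.Println(TaskStateScheduled) // scheduled (fmt uses the Stringer)
	fmt.Println(int(TaskStateActive))
}
```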
// RedisConnOpt is a discriminated union of types that represent Redis connection configuration option.
//
// RedisConnOpt represents a sum of following types:
@@ -68,6 +190,26 @@ type RedisClientOpt struct {
	// See: https://redis.io/commands/select.
	DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
	// Maximum number of socket connections.
	// Default is 10 connections per every CPU as reported by runtime.NumCPU.
	PoolSize int
@@ -79,13 +221,16 @@ type RedisClientOpt struct {
func (opt RedisClientOpt) MakeRedisClient() interface{} {
	return redis.NewClient(&redis.Options{
		Network:  opt.Network,
		Addr:     opt.Addr,
		Username: opt.Username,
		Password: opt.Password,
		DB:       opt.DB,
-		PoolSize:  opt.PoolSize,
-		TLSConfig: opt.TLSConfig,
+		DialTimeout:  opt.DialTimeout,
+		ReadTimeout:  opt.ReadTimeout,
+		WriteTimeout: opt.WriteTimeout,
+		PoolSize:     opt.PoolSize,
+		TLSConfig:    opt.TLSConfig,
	})
}
@@ -116,6 +261,26 @@ type RedisFailoverClientOpt struct {
	// See: https://redis.io/commands/select.
	DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
	// Maximum number of socket connections.
	// Default is 10 connections per every CPU as reported by runtime.NumCPU.
	PoolSize int
@@ -133,6 +298,9 @@ func (opt RedisFailoverClientOpt) MakeRedisClient() interface{} {
		Username:     opt.Username,
		Password:     opt.Password,
		DB:           opt.DB,
		DialTimeout:  opt.DialTimeout,
		ReadTimeout:  opt.ReadTimeout,
		WriteTimeout: opt.WriteTimeout,
		PoolSize:     opt.PoolSize,
		TLSConfig:    opt.TLSConfig,
	})
@@ -157,6 +325,26 @@ type RedisClusterClientOpt struct {
	// See: https://redis.io/commands/auth.
	Password string
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
	// TLS Config used to connect to a server.
	// TLS will be negotiated only if this field is set.
	TLSConfig *tls.Config
@@ -168,6 +356,9 @@ func (opt RedisClusterClientOpt) MakeRedisClient() interface{} {
		MaxRedirects: opt.MaxRedirects,
		Username:     opt.Username,
		Password:     opt.Password,
		DialTimeout:  opt.DialTimeout,
		ReadTimeout:  opt.ReadTimeout,
		WriteTimeout: opt.WriteTimeout,
		TLSConfig:    opt.TLSConfig,
	})
}


@@ -85,7 +85,7 @@ func getRedisConnOpt(tb testing.TB) RedisConnOpt {
var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
	out := append([]*Task(nil), in...) // Copy input to avoid mutating it
	sort.Slice(out, func(i, j int) bool {
-		return out[i].Type < out[j].Type
+		return out[i].Type() < out[j].Type()
	})
	return out
})


@@ -6,12 +6,24 @@ package asynq
import (
	"context"
	"encoding/json"
	"fmt"
	"sync"
	"testing"
	"time"

	h "github.com/hibiken/asynq/internal/asynqtest"
)
// Creates a new task of type "task<n>" with payload {"data": n}.
func makeTask(n int) *Task {
b, err := json.Marshal(map[string]int{"data": n})
if err != nil {
panic(err)
}
return NewTask(fmt.Sprintf("task%d", n), b)
}
// Simple E2E Benchmark testing with no scheduled tasks and retries.
func BenchmarkEndToEndSimple(b *testing.B) {
	const count = 100000
@@ -29,8 +41,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
	})
	// Create a bunch of tasks
	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
@@ -70,14 +81,12 @@ func BenchmarkEndToEnd(b *testing.B) {
	})
	// Create a bunch of tasks
	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
+		if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
@@ -86,13 +95,18 @@ func BenchmarkEndToEnd(b *testing.B) {
	var wg sync.WaitGroup
	wg.Add(count * 2)
	handler := func(ctx context.Context, t *Task) error {
-		n, err := t.Payload.GetInt("data")
-		if err != nil {
+		var p map[string]int
+		if err := json.Unmarshal(t.Payload(), &p); err != nil {
			b.Logf("internal error: %v", err)
		}
+		n, ok := p["data"]
+		if !ok {
+			n = 1
+			b.Logf("internal error: could not get data from payload")
+		}
		retried, ok := GetRetryCount(ctx)
		if !ok {
-			b.Logf("internal error: %v", err)
+			b.Logf("internal error: could not get retry count from context")
		}
		// Fail 1% of tasks for the first attempt.
		if retried == 0 && n%100 == 0 {
@@ -136,20 +150,17 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
	})
	// Create a bunch of tasks
	for i := 0; i < highCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, Queue("high")); err != nil {
+		if _, err := client.Enqueue(makeTask(i), Queue("high")); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
	for i := 0; i < defaultCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
	for i := 0; i < lowCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, Queue("low")); err != nil {
+		if _, err := client.Enqueue(makeTask(i), Queue("low")); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
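The benchmark handler above decodes the raw payload back with `json.Unmarshal` and falls back to a default when the `"data"` key is missing. A self-contained sketch of that decode-with-fallback pattern (the helper name `dataFrom` is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dataFrom mirrors the handler logic in the benchmark: decode the raw
// payload into a map and fall back to 1 when "data" is absent or the
// payload cannot be parsed.
func dataFrom(payload []byte) int {
	var p map[string]int
	if err := json.Unmarshal(payload, &p); err != nil {
		return 1
	}
	n, ok := p["data"]
	if !ok {
		return 1
	}
	return n
}

func main() {
	fmt.Println(dataFrom([]byte(`{"data":7}`))) // 7
	fmt.Println(dataFrom([]byte(`{}`)))         // 1
}
```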
@@ -190,15 +201,13 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
	})
	// Enqueue 10,000 tasks.
	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
	// Schedule 10,000 tasks.
	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
+		if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
			b.Fatalf("could not enqueue a task: %v", err)
		}
	}
@@ -213,7 +222,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
		b.Log("Starting enqueueing")
		enqueued := 0
		for enqueued < 100000 {
-			t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
+			t := NewTask(fmt.Sprintf("enqueued%d", enqueued), h.JSON(map[string]interface{}{"data": enqueued}))
			if _, err := client.Enqueue(t); err != nil {
				b.Logf("could not enqueue task %d: %v", enqueued, err)
				continue


@@ -5,15 +5,14 @@
package asynq

import (
-	"errors"
	"fmt"
-	"strings"
	"sync"
	"time"

	"github.com/go-redis/redis/v7"
	"github.com/google/uuid"
	"github.com/hibiken/asynq/internal/base"
+	"github.com/hibiken/asynq/internal/errors"
	"github.com/hibiken/asynq/internal/rdb"
)
@@ -93,10 +92,8 @@ func (n retryOption) Type() OptionType { return MaxRetryOpt }
func (n retryOption) Value() interface{} { return int(n) }

// Queue returns an option to specify the queue to enqueue the task into.
-//
-// Queue name is case-insensitive and the lowercased version is used.
func Queue(qname string) Option {
-	return queueOption(strings.ToLower(qname))
+	return queueOption(qname)
}

func (qname queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(qname)) }
@@ -176,7 +173,6 @@ func (d processInOption) String() string { return fmt.Sprintf("ProcessIn(%v)
func (d processInOption) Type() OptionType   { return ProcessInOpt }
func (d processInOption) Value() interface{} { return time.Duration(d) }
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
@@ -208,11 +204,11 @@ func composeOptions(opts ...Option) (option, error) {
		case retryOption:
			res.retry = int(opt)
		case queueOption:
-			trimmed := strings.TrimSpace(string(opt))
-			if err := base.ValidateQueueName(trimmed); err != nil {
+			qname := string(opt)
+			if err := base.ValidateQueueName(qname); err != nil {
				return option{}, err
			}
-			res.queue = trimmed
+			res.queue = qname
		case timeoutOption:
			res.timeout = time.Duration(opt)
		case deadlineOption:
@@ -255,41 +251,6 @@ func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
	c.opts[taskType] = opts
}
// A Result holds enqueued task's metadata.
type Result struct {
// ID is a unique identifier for the task.
ID string
// EnqueuedAt is the time the task was enqueued in UTC.
EnqueuedAt time.Time
// ProcessAt indicates when the task should be processed.
ProcessAt time.Time
// Retry is the maximum number of retry for the task.
Retry int
// Queue is a name of the queue the task is enqueued to.
Queue string
// Timeout is the timeout value for the task.
// Counting for timeout starts when a worker starts processing the task.
// If task processing doesn't complete within the timeout, the task will be retried.
// The value zero means no timeout.
//
// If deadline is set, min(now+timeout, deadline) is used, where the now is the time when
// a worker starts processing the task.
Timeout time.Duration
// Deadline is the deadline value for the task.
// If task processing doesn't complete before the deadline, the task will be retried.
// The value time.Unix(0, 0) means no deadline.
//
// If timeout is set, min(now+timeout, deadline) is used, where the now is the time when
// a worker starts processing the task.
Deadline time.Time
}
// Close closes the connection with redis.
func (c *Client) Close() error {
	return c.rdb.Close()
@@ -297,15 +258,16 @@ func (c *Client) Close() error {
// Enqueue enqueues the given task to be processed asynchronously.
//
-// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
+// Enqueue returns TaskInfo and nil error if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// By default, max retry is set to 25 and timeout is set to 30 minutes.
-// If no ProcessAt or ProcessIn options are passed, the task will be processed immediately.
-func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
+//
+// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
+func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
	c.mu.Lock()
-	if defaults, ok := c.opts[task.Type]; ok {
+	if defaults, ok := c.opts[task.Type()]; ok {
		opts = append(defaults, opts...)
	}
	c.mu.Unlock()
@@ -327,12 +289,12 @@ func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
	}
	var uniqueKey string
	if opt.uniqueTTL > 0 {
-		uniqueKey = base.UniqueKey(opt.queue, task.Type, task.Payload.data)
+		uniqueKey = base.UniqueKey(opt.queue, task.Type(), task.Payload())
	}
	msg := &base.TaskMessage{
		ID:       uuid.New(),
-		Type:     task.Type,
-		Payload:  task.Payload.data,
+		Type:     task.Type(),
+		Payload:  task.Payload(),
		Queue:    opt.queue,
		Retry:    opt.retry,
		Deadline: deadline.Unix(),
@@ -340,27 +302,22 @@ func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
		UniqueKey: uniqueKey,
	}
	now := time.Now()
+	var state base.TaskState
	if opt.processAt.Before(now) || opt.processAt.Equal(now) {
		opt.processAt = now
		err = c.enqueue(msg, opt.uniqueTTL)
+		state = base.TaskStatePending
	} else {
		err = c.schedule(msg, opt.processAt, opt.uniqueTTL)
+		state = base.TaskStateScheduled
	}
	switch {
-	case err == rdb.ErrDuplicateTask:
+	case errors.Is(err, errors.ErrDuplicateTask):
		return nil, fmt.Errorf("%w", ErrDuplicateTask)
	case err != nil:
		return nil, err
	}
-	return &Result{
-		ID:         msg.ID.String(),
-		EnqueuedAt: time.Now().UTC(),
-		ProcessAt:  opt.processAt,
-		Queue:      msg.Queue,
-		Retry:      msg.Retry,
-		Timeout:    timeout,
-		Deadline:   deadline,
-	}, nil
+	return newTaskInfo(msg, state, opt.processAt), nil
}
func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {


@@ -20,7 +20,7 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
	client := NewClient(getRedisConnOpt(t))
	defer client.Close()

-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
	var (
		now = time.Now()
@@ -32,7 +32,7 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
		task          *Task
		processAt     time.Time // value for ProcessAt option
		opts          []Option  // other options
-		wantRes       *Result
+		wantInfo      *TaskInfo
		wantPending   map[string][]*base.TaskMessage
		wantScheduled map[string][]base.Z
	}{
@@ -41,19 +41,24 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
			task:      task,
			processAt: now,
			opts:      []Option{},
-			wantRes: &Result{
-				EnqueuedAt: now.UTC(),
-				ProcessAt:  now,
-				Queue:      "default",
-				Retry:      defaultMaxRetry,
-				Timeout:    defaultTimeout,
-				Deadline:   noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   defaultMaxRetry,
						Queue:   "default",
						Timeout: int64(defaultTimeout.Seconds()),
@@ -70,13 +75,18 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
			task:      task,
			processAt: oneHourLater,
			opts:      []Option{},
-			wantRes: &Result{
-				EnqueuedAt: now.UTC(),
-				ProcessAt:  oneHourLater,
-				Queue:      "default",
-				Retry:      defaultMaxRetry,
-				Timeout:    defaultTimeout,
-				Deadline:   noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStateScheduled,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: oneHourLater,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {},
@@ -85,8 +95,8 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
				"default": {
					{
						Message: &base.TaskMessage{
-							Type:    task.Type,
-							Payload: task.Payload.data,
+							Type:    task.Type(),
+							Payload: task.Payload(),
							Retry:   defaultMaxRetry,
							Queue:   "default",
							Timeout: int64(defaultTimeout.Seconds()),
@@ -103,24 +113,24 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
		h.FlushDB(t, r) // clean up db before each test case.

		opts := append(tc.opts, ProcessAt(tc.processAt))
-		gotRes, err := client.Enqueue(tc.task, opts...)
+		gotInfo, err := client.Enqueue(tc.task, opts...)
		if err != nil {
			t.Error(err)
			continue
		}
		cmpOptions := []cmp.Option{
-			cmpopts.IgnoreFields(Result{}, "ID"),
+			cmpopts.IgnoreFields(TaskInfo{}, "ID"),
			cmpopts.EquateApproxTime(500 * time.Millisecond),
		}
-		if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
+		if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
			t.Errorf("%s;\nEnqueue(task, ProcessAt(%v)) returned %v, want %v; (-want,+got)\n%s",
-				tc.desc, tc.processAt, gotRes, tc.wantRes, diff)
+				tc.desc, tc.processAt, gotInfo, tc.wantInfo, diff)
		}

		for qname, want := range tc.wantPending {
			gotPending := h.GetPendingMessages(t, r, qname)
			if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
-				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
+				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
			}
		}
		for qname, want := range tc.wantScheduled {
@@ -137,14 +147,14 @@ func TestClientEnqueue(t *testing.T) {
	client := NewClient(getRedisConnOpt(t))
	defer client.Close()

-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
	now := time.Now()
	tests := []struct {
		desc        string
		task        *Task
		opts        []Option
-		wantRes     *Result
+		wantInfo    *TaskInfo
		wantPending map[string][]*base.TaskMessage
	}{
		{
@@ -153,18 +163,24 @@ func TestClientEnqueue(t *testing.T) {
			opts: []Option{
				MaxRetry(3),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     3,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      3,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   3,
						Queue:   "default",
						Timeout: int64(defaultTimeout.Seconds()),
@@ -179,18 +195,24 @@ func TestClientEnqueue(t *testing.T) {
			opts: []Option{
				MaxRetry(-2),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     0,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      0, // Retry count should be set to zero
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   0, // Retry count should be set to zero
						Queue:   "default",
						Timeout: int64(defaultTimeout.Seconds()),
@@ -206,18 +228,24 @@ func TestClientEnqueue(t *testing.T) {
				MaxRetry(2),
				MaxRetry(10),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     10,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      10, // Last option takes precedence
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   10, // Last option takes precedence
						Queue:   "default",
						Timeout: int64(defaultTimeout.Seconds()),
@@ -232,18 +260,24 @@ func TestClientEnqueue(t *testing.T) {
			opts: []Option{
				Queue("custom"),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "custom",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "custom",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"custom": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   defaultMaxRetry,
						Queue:   "custom",
						Timeout: int64(defaultTimeout.Seconds()),
@@ -253,25 +287,31 @@ func TestClientEnqueue(t *testing.T) {
			},
		},
		{
-			desc: "Queue option should be case-insensitive",
+			desc: "Queue option should be case sensitive",
			task: task,
			opts: []Option{
-				Queue("HIGH"),
+				Queue("MyQueue"),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "high",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "MyQueue",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
-				"high": {
+				"MyQueue": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
						Retry:    defaultMaxRetry,
-						Queue:   "high",
+						Queue:    "MyQueue",
						Timeout:  int64(defaultTimeout.Seconds()),
						Deadline: noDeadline.Unix(),
					},
@@ -284,18 +324,24 @@ func TestClientEnqueue(t *testing.T) {
			opts: []Option{
				Timeout(20 * time.Second),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   20 * time.Second,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       20 * time.Second,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   defaultMaxRetry,
						Queue:   "default",
						Timeout: 20,
@@ -310,18 +356,24 @@ func TestClientEnqueue(t *testing.T) {
			opts: []Option{
				Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   noTimeout,
-				Deadline:  time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       noTimeout,
+				Deadline:      time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+				NextProcessAt: now,
			},
			wantPending: map[string][]*base.TaskMessage{
				"default": {
					{
-						Type:    task.Type,
-						Payload: task.Payload.data,
+						Type:    task.Type(),
+						Payload: task.Payload(),
						Retry:   defaultMaxRetry,
						Queue:   "default",
						Timeout: int64(noTimeout.Seconds()),
@@ -337,18 +389,24 @@ func TestClientEnqueue(t *testing.T) {
Timeout(20 * time.Second), Timeout(20 * time.Second),
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)), Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
}, },
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now, Queue: "default",
Queue: "default", Type: task.Type(),
Retry: defaultMaxRetry, Payload: task.Payload(),
Timeout: 20 * time.Second, State: TaskStatePending,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC), MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: 20 * time.Second,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
NextProcessAt: now,
}, },
wantPending: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type(),
Payload: task.Payload.data, Payload: task.Payload(),
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: 20, Timeout: 20,
@@ -362,24 +420,24 @@ func TestClientEnqueue(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
gotRes, err := client.Enqueue(tc.task, tc.opts...) gotInfo, err := client.Enqueue(tc.task, tc.opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
cmpOptions := []cmp.Option{ cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"), cmpopts.IgnoreFields(TaskInfo{}, "ID"),
cmpopts.EquateApproxTime(500 * time.Millisecond), cmpopts.EquateApproxTime(500 * time.Millisecond),
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" { if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s", t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff) tc.desc, gotInfo, tc.wantInfo, diff)
} }
for qname, want := range tc.wantPending { for qname, want := range tc.wantPending {
got := h.GetPendingMessages(t, r, qname) got := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
} }
} }
} }
@@ -390,7 +448,7 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
client := NewClient(getRedisConnOpt(t)) client := NewClient(getRedisConnOpt(t))
defer client.Close() defer client.Close()
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
now := time.Now() now := time.Now()
tests := []struct { tests := []struct {
@@ -398,7 +456,7 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
task *Task task *Task
delay time.Duration // value for ProcessIn option delay time.Duration // value for ProcessIn option
opts []Option // other options opts []Option // other options
wantRes *Result wantInfo *TaskInfo
wantPending map[string][]*base.TaskMessage wantPending map[string][]*base.TaskMessage
wantScheduled map[string][]base.Z wantScheduled map[string][]base.Z
}{ }{
@@ -407,12 +465,18 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
task: task, task: task,
delay: 1 * time.Hour, delay: 1 * time.Hour,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now.Add(1 * time.Hour), Queue: "default",
Queue: "default", Type: task.Type(),
Retry: defaultMaxRetry, Payload: task.Payload(),
Timeout: defaultTimeout, State: TaskStateScheduled,
Deadline: noDeadline, MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: time.Now().Add(1 * time.Hour),
}, },
wantPending: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": {}, "default": {},
@@ -421,8 +485,8 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
"default": { "default": {
{ {
Message: &base.TaskMessage{ Message: &base.TaskMessage{
Type: task.Type, Type: task.Type(),
Payload: task.Payload.data, Payload: task.Payload(),
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: int64(defaultTimeout.Seconds()), Timeout: int64(defaultTimeout.Seconds()),
@@ -438,18 +502,24 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
task: task, task: task,
delay: 0, delay: 0,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now, Queue: "default",
Queue: "default", Type: task.Type(),
Retry: defaultMaxRetry, Payload: task.Payload(),
Timeout: defaultTimeout, State: TaskStatePending,
Deadline: noDeadline, MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: now,
}, },
wantPending: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type(),
Payload: task.Payload.data, Payload: task.Payload(),
Retry: defaultMaxRetry, Retry: defaultMaxRetry,
Queue: "default", Queue: "default",
Timeout: int64(defaultTimeout.Seconds()), Timeout: int64(defaultTimeout.Seconds()),
@@ -467,24 +537,24 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
opts := append(tc.opts, ProcessIn(tc.delay)) opts := append(tc.opts, ProcessIn(tc.delay))
gotRes, err := client.Enqueue(tc.task, opts...) gotInfo, err := client.Enqueue(tc.task, opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
cmpOptions := []cmp.Option{ cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"), cmpopts.IgnoreFields(TaskInfo{}, "ID"),
cmpopts.EquateApproxTime(500 * time.Millisecond), cmpopts.EquateApproxTime(500 * time.Millisecond),
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" { if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, ProcessIn(%v)) returned %v, want %v; (-want,+got)\n%s", t.Errorf("%s;\nEnqueue(task, ProcessIn(%v)) returned %v, want %v; (-want,+got)\n%s",
tc.desc, tc.delay, gotRes, tc.wantRes, diff) tc.desc, tc.delay, gotInfo, tc.wantInfo, diff)
} }
for qname, want := range tc.wantPending { for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" { if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
} }
} }
for qname, want := range tc.wantScheduled { for qname, want := range tc.wantScheduled {
@@ -501,7 +571,7 @@ func TestClientEnqueueError(t *testing.T) {
client := NewClient(getRedisConnOpt(t)) client := NewClient(getRedisConnOpt(t))
defer client.Close() defer client.Close()
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
tests := []struct { tests := []struct {
desc string desc string
@@ -537,7 +607,7 @@ func TestClientDefaultOptions(t *testing.T) {
defaultOpts []Option // options set at the client level. defaultOpts []Option // options set at the client level.
opts []Option // options used at enqueue time. opts []Option // options used at enqueue time.
task *Task task *Task
wantRes *Result wantInfo *TaskInfo
queue string // queue that the message should go into. queue string // queue that the message should go into.
want *base.TaskMessage want *base.TaskMessage
}{ }{
@@ -546,12 +616,18 @@ func TestClientDefaultOptions(t *testing.T) {
defaultOpts: []Option{Queue("feed")}, defaultOpts: []Option{Queue("feed")},
opts: []Option{}, opts: []Option{},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now, Queue: "feed",
Queue: "feed", Type: "feed:import",
Retry: defaultMaxRetry, Payload: nil,
Timeout: defaultTimeout, State: TaskStatePending,
Deadline: noDeadline, MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: now,
}, },
queue: "feed", queue: "feed",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -568,12 +644,18 @@ func TestClientDefaultOptions(t *testing.T) {
defaultOpts: []Option{Queue("feed"), MaxRetry(5)}, defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{}, opts: []Option{},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now, Queue: "feed",
Queue: "feed", Type: "feed:import",
Retry: 5, Payload: nil,
Timeout: defaultTimeout, State: TaskStatePending,
Deadline: noDeadline, MaxRetry: 5,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: now,
}, },
queue: "feed", queue: "feed",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -590,12 +672,17 @@ func TestClientDefaultOptions(t *testing.T) {
defaultOpts: []Option{Queue("feed"), MaxRetry(5)}, defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{Queue("critical")}, opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantInfo: &TaskInfo{
ProcessAt: now, Queue: "critical",
Queue: "critical", Type: "feed:import",
Retry: 5, Payload: nil,
Timeout: defaultTimeout, State: TaskStatePending,
Deadline: noDeadline, MaxRetry: 5,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: now,
}, },
queue: "critical", queue: "critical",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -613,18 +700,18 @@ func TestClientDefaultOptions(t *testing.T) {
h.FlushDB(t, r) h.FlushDB(t, r)
c := NewClient(getRedisConnOpt(t)) c := NewClient(getRedisConnOpt(t))
defer c.Close() defer c.Close()
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...) c.SetDefaultOptions(tc.task.Type(), tc.defaultOpts...)
gotRes, err := c.Enqueue(tc.task, tc.opts...) gotInfo, err := c.Enqueue(tc.task, tc.opts...)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
cmpOptions := []cmp.Option{ cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"), cmpopts.IgnoreFields(TaskInfo{}, "ID"),
cmpopts.EquateApproxTime(500 * time.Millisecond), cmpopts.EquateApproxTime(500 * time.Millisecond),
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" { if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s", t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff) tc.desc, gotInfo, tc.wantInfo, diff)
} }
pending := h.GetPendingMessages(t, r, tc.queue) pending := h.GetPendingMessages(t, r, tc.queue)
if len(pending) != 1 { if len(pending) != 1 {
@@ -650,7 +737,7 @@ func TestClientEnqueueUnique(t *testing.T) {
ttl time.Duration ttl time.Duration
}{ }{
{ {
NewTask("email", map[string]interface{}{"user_id": 123}), NewTask("email", h.JSON(map[string]interface{}{"user_id": 123})),
time.Hour, time.Hour,
}, },
} }
@@ -664,7 +751,7 @@ func TestClientEnqueueUnique(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl) t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
continue continue
@@ -709,7 +796,7 @@ func TestClientEnqueueUniqueWithProcessInOption(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL) t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
@@ -755,7 +842,7 @@ func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now()) wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL) t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
@@ -774,4 +861,3 @@ func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
} }
} }
} }

doc.go

@@ -11,7 +11,7 @@ specify the connection using one of RedisConnOpt types.
redisConnOpt = asynq.RedisClientOpt{ redisConnOpt = asynq.RedisClientOpt{
Addr: "127.0.0.1:6379", Addr: "127.0.0.1:6379",
Password: "xxxxx", Password: "xxxxx",
DB: 3, DB: 2,
} }
The Client is used to enqueue a task. The Client is used to enqueue a task.
@@ -20,15 +20,19 @@ The Client is used to enqueue a task.
client := asynq.NewClient(redisConnOpt) client := asynq.NewClient(redisConnOpt)
// Task is created with two parameters: its type and payload. // Task is created with two parameters: its type and payload.
t := asynq.NewTask( // Payload data is simply an array of bytes. It can be encoded in JSON, Protocol Buffer, Gob, etc.
"send_email", b, err := json.Marshal(ExamplePayload{UserID: 42})
map[string]interface{}{"user_id": 42}) if err != nil {
log.Fatal(err)
}
task := asynq.NewTask("example", b)
// Enqueue the task to be processed immediately. // Enqueue the task to be processed immediately.
res, err := client.Enqueue(t) info, err := client.Enqueue(task)
// Schedule the task to be processed after one minute. // Schedule the task to be processed after one minute.
res, err = client.Enqueue(t, asynq.ProcessIn(1*time.Minute)) info, err = client.Enqueue(task, asynq.ProcessIn(1*time.Minute))
The Server is used to run the task processing workers with a given The Server is used to run the task processing workers with a given
handler. handler.
@@ -52,10 +56,13 @@ Example of a type that implements the Handler interface.
func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error { func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error {
switch task.Type { switch task.Type() {
case "send_email": case "example":
id, err := task.Payload.GetInt("user_id") var data ExamplePayload
// send email if err := json.Unmarshal(task.Payload(), &data); err != nil {
//... return err
}
// perform task with the data
default: default:
return fmt.Errorf("unexpected task type %q", task.Type) return fmt.Errorf("unexpected task type %q", task.Type())
} }
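The updated doc comment above switches task payloads from typed maps to raw bytes that the caller encodes and the handler decodes. A stdlib-only sketch of that marshal/unmarshal round trip, with an illustrative `ExamplePayload` type and a `handle` function standing in for a Handler's decoding step (neither name is part of asynq):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// ExamplePayload is an illustrative payload type; asynq itself only
// ever sees the encoded []byte.
type ExamplePayload struct {
	UserID int `json:"user_id"`
}

// handle mimics what a Handler does with task.Payload():
// decode the bytes back into a concrete type.
func handle(payload []byte) (int, error) {
	var data ExamplePayload
	if err := json.Unmarshal(payload, &data); err != nil {
		return 0, err
	}
	return data.UserID, nil
}

func main() {
	// Encode on the producer side, exactly as in the doc example.
	b, err := json.Marshal(ExamplePayload{UserID: 42})
	if err != nil {
		log.Fatal(err)
	}
	// Decode on the consumer side.
	id, err := handle(b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(id) // prints 42
}
```

Because the payload is opaque bytes, any encoding (Protocol Buffers, Gob) works the same way: only the producer and the handler need to agree on it.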


@@ -30,7 +30,7 @@ func ExampleServer_Run() {
} }
} }
func ExampleServer_Stop() { func ExampleServer_Shutdown() {
srv := asynq.NewServer( srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"}, asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20}, asynq.Config{Concurrency: 20},
@@ -47,10 +47,10 @@ func ExampleServer_Stop() {
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT) signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal <-sigs // wait for termination signal
srv.Stop() srv.Shutdown()
} }
func ExampleServer_Quiet() { func ExampleServer_Stop() {
srv := asynq.NewServer( srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"}, asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20}, asynq.Config{Concurrency: 20},
@@ -70,13 +70,13 @@ func ExampleServer_Quiet() {
for { for {
s := <-sigs s := <-sigs
if s == unix.SIGTSTP { if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks srv.Stop() // stop processing new tasks
continue continue
} }
break break // received SIGTERM or SIGINT signal
} }
srv.Stop() srv.Shutdown()
} }
func ExampleScheduler() { func ExampleScheduler() {


@@ -45,7 +45,7 @@ func newForwarder(params forwarderParams) *forwarder {
} }
} }
func (f *forwarder) terminate() { func (f *forwarder) shutdown() {
f.logger.Debug("Forwarder shutting down...") f.logger.Debug("Forwarder shutting down...")
// Signal the forwarder goroutine to stop polling. // Signal the forwarder goroutine to stop polling.
f.done <- struct{}{} f.done <- struct{}{}
@@ -69,7 +69,7 @@ func (f *forwarder) start(wg *sync.WaitGroup) {
} }
func (f *forwarder) exec() { func (f *forwarder) exec() {
if err := f.broker.CheckAndEnqueue(f.queues...); err != nil { if err := f.broker.ForwardIfReady(f.queues...); err != nil {
f.logger.Errorf("Could not enqueue scheduled tasks: %v", err) f.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
} }
} }


@@ -111,7 +111,7 @@ func TestForwarder(t *testing.T) {
var wg sync.WaitGroup var wg sync.WaitGroup
s.start(&wg) s.start(&wg)
time.Sleep(tc.wait) time.Sleep(tc.wait)
s.terminate() s.shutdown()
for qname, want := range tc.wantScheduled { for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledMessages(t, r, qname) gotScheduled := h.GetScheduledMessages(t, r, qname)
@@ -130,7 +130,7 @@ func TestForwarder(t *testing.T) {
for qname, want := range tc.wantPending { for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" { if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.QueueKey(qname), diff) t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.PendingKey(qname), diff)
} }
} }
} }

go.mod

@@ -4,12 +4,14 @@ go 1.13
require ( require (
github.com/go-redis/redis/v7 v7.4.0 github.com/go-redis/redis/v7 v7.4.0
github.com/google/go-cmp v0.4.0 github.com/golang/protobuf v1.4.1
github.com/google/uuid v1.1.1 github.com/google/go-cmp v0.5.0
github.com/google/uuid v1.2.0
github.com/robfig/cron/v3 v3.0.1 github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cast v1.3.1 github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0 go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
google.golang.org/protobuf v1.25.0
gopkg.in/yaml.v2 v2.2.7 // indirect gopkg.in/yaml.v2 v2.2.7 // indirect
) )

go.sum

@@ -1,18 +1,40 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I= github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4= github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1 h1:ZFgWrT+bLgsYPirOnRfKLYJLvssAegOj/hgyMFdJZe0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0 h1:/QaMHBdZ26BB3SSst0Iwl10Epc+xhTquomWX0oZEB6w=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY= github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI= github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
@@ -27,6 +49,7 @@ github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs= github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro= github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
@@ -36,11 +59,23 @@ github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4= go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI= go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g= golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -54,8 +89,29 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
 gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -68,3 +124,5 @@ gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
 gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@@ -45,7 +45,7 @@ func newHealthChecker(params healthcheckerParams) *healthchecker {
 	}
 }
-func (hc *healthchecker) terminate() {
+func (hc *healthchecker) shutdown() {
 	if hc.healthcheckFunc == nil {
 		return
 	}


@@ -51,7 +51,7 @@ func TestHealthChecker(t *testing.T) {
 	}
 	mu.Unlock()
-	hc.terminate()
+	hc.shutdown()
 }
 func TestHealthCheckerWhenRedisDown(t *testing.T) {
@@ -99,5 +99,5 @@ func TestHealthCheckerWhenRedisDown(t *testing.T) {
 	}
 	mu.Unlock()
-	hc.terminate()
+	hc.shutdown()
 }


@@ -40,8 +40,8 @@ type heartbeater struct {
 	started time.Time
 	workers map[string]*workerInfo
-	// status is shared with other goroutine but is concurrency safe.
-	status *base.ServerStatus
+	// state is shared with other goroutine but is concurrency safe.
+	state *base.ServerState
 	// channels to receive updates on active workers.
 	starting <-chan *workerInfo
@@ -55,7 +55,7 @@ type heartbeaterParams struct {
 	concurrency    int
 	queues         map[string]int
 	strictPriority bool
-	status         *base.ServerStatus
+	state          *base.ServerState
 	starting       <-chan *workerInfo
 	finished       <-chan *base.TaskMessage
 }
@@ -79,14 +79,14 @@ func newHeartbeater(params heartbeaterParams) *heartbeater {
 		queues:         params.queues,
 		strictPriority: params.strictPriority,
-		status:         params.status,
+		state:          params.state,
 		workers:        make(map[string]*workerInfo),
 		starting:       params.starting,
 		finished:       params.finished,
 	}
 }
-func (h *heartbeater) terminate() {
+func (h *heartbeater) shutdown() {
 	h.logger.Debug("Heartbeater shutting down...")
 	// Signal the heartbeater goroutine to stop.
 	h.done <- struct{}{}
@@ -142,7 +142,7 @@ func (h *heartbeater) beat() {
 		Concurrency:       h.concurrency,
 		Queues:            h.queues,
 		StrictPriority:    h.strictPriority,
-		Status:            h.status.String(),
+		Status:            h.state.String(),
 		Started:           h.started,
 		ActiveWorkerCount: len(h.workers),
 	}
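The rename from `ServerStatus` to `ServerState` in the hunk above keeps a value that is shared across goroutines but, as the comment notes, concurrency safe. A minimal stdlib sketch of such a state holder (the `serverState` type and its string values here are illustrative assumptions; asynq's `base.ServerState` differs in detail):

```go
package main

import (
	"fmt"
	"sync"
)

// serverState is an illustrative, mutex-guarded state holder in the spirit of
// the base.ServerState type introduced by the diff; not asynq's actual code.
type serverState struct {
	mu  sync.Mutex
	val string
}

func newServerState() *serverState { return &serverState{val: "new"} }

// Set updates the state under the lock so concurrent readers never see a torn value.
func (s *serverState) Set(v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.val = v
}

// String reads the state under the same lock; fmt uses it via the Stringer interface.
func (s *serverState) String() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.val
}

func main() {
	state := newServerState()
	state.Set("active")
	fmt.Println(state) // a heartbeater-like component would embed this string in its info snapshot
}
```

Because both `Set` and `String` take the same mutex, a background goroutine (like the heartbeater's `beat` loop) can read the state while the server goroutine mutates it.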


@@ -38,7 +38,7 @@ func TestHeartbeater(t *testing.T) {
 	for _, tc := range tests {
 		h.FlushDB(t, r)
-		status := base.NewServerStatus(base.StatusIdle)
+		state := base.NewServerState()
 		hb := newHeartbeater(heartbeaterParams{
 			logger:         testLogger,
 			broker:         rdbClient,
@@ -46,7 +46,7 @@ func TestHeartbeater(t *testing.T) {
 			concurrency:    tc.concurrency,
 			queues:         tc.queues,
 			strictPriority: false,
-			status:         status,
+			state:          state,
 			starting:       make(chan *workerInfo),
 			finished:       make(chan *base.TaskMessage),
 		})
@@ -55,7 +55,7 @@ func TestHeartbeater(t *testing.T) {
 		hb.host = tc.host
 		hb.pid = tc.pid
-		status.Set(base.StatusRunning)
+		state.Set(base.StateActive)
 		var wg sync.WaitGroup
 		hb.start(&wg)
@@ -65,7 +65,7 @@ func TestHeartbeater(t *testing.T) {
 			Queues:      tc.queues,
 			Concurrency: tc.concurrency,
 			Started:     time.Now(),
-			Status:      "running",
+			Status:      "active",
 		}
 		// allow for heartbeater to write to redis
@@ -74,49 +74,49 @@ func TestHeartbeater(t *testing.T) {
 		ss, err := rdbClient.ListServers()
 		if err != nil {
 			t.Errorf("could not read server info from redis: %v", err)
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
 		if len(ss) != 1 {
 			t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
 		if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
 			t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
 		// status change
-		status.Set(base.StatusStopped)
+		state.Set(base.StateClosed)
 		// allow for heartbeater to write to redis
 		time.Sleep(tc.interval * 2)
-		want.Status = "stopped"
+		want.Status = "closed"
 		ss, err = rdbClient.ListServers()
 		if err != nil {
 			t.Errorf("could not read process status from redis: %v", err)
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
 		if len(ss) != 1 {
 			t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
 		if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
 			t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
-			hb.terminate()
+			hb.shutdown()
 			continue
 		}
-		hb.terminate()
+		hb.shutdown()
 	}
 }
@@ -131,6 +131,8 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
 	r := rdb.NewRDB(setup(t))
 	defer r.Close()
 	testBroker := testbroker.NewTestBroker(r)
+	state := base.NewServerState()
+	state.Set(base.StateActive)
 	hb := newHeartbeater(heartbeaterParams{
 		logger:         testLogger,
 		broker:         testBroker,
@@ -138,7 +140,7 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
 		concurrency:    10,
 		queues:         map[string]int{"default": 1},
 		strictPriority: false,
-		status:         base.NewServerStatus(base.StatusRunning),
+		state:          state,
 		starting:       make(chan *workerInfo),
 		finished:       make(chan *base.TaskMessage),
 	})
@@ -150,5 +152,5 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
 	// wait for heartbeater to try writing data to redis
 	time.Sleep(2 * time.Second)
-	hb.terminate()
+	hb.shutdown()
 }


@@ -2,7 +2,7 @@
 // Use of this source code is governed by a MIT license
 // that can be found in the LICENSE file.
-package inspeq
+package asynq
 import (
 	"fmt"
@@ -12,8 +12,8 @@ import (
 	"github.com/go-redis/redis/v7"
 	"github.com/google/uuid"
-	"github.com/hibiken/asynq"
 	"github.com/hibiken/asynq/internal/base"
+	"github.com/hibiken/asynq/internal/errors"
 	"github.com/hibiken/asynq/internal/rdb"
 )
@@ -24,7 +24,7 @@ type Inspector struct {
 }
 // New returns a new instance of Inspector.
-func New(r asynq.RedisConnOpt) *Inspector {
+func NewInspector(r RedisConnOpt) *Inspector {
 	c, ok := r.MakeRedisClient().(redis.UniversalClient)
 	if !ok {
 		panic(fmt.Sprintf("inspeq: unsupported RedisConnOpt type %T", r))
@@ -44,15 +44,18 @@ func (i *Inspector) Queues() ([]string, error) {
 	return i.rdb.AllQueues()
 }
-// QueueStats represents a state of queues at a certain time.
-type QueueStats struct {
+// QueueInfo represents a state of queues at a certain time.
+type QueueInfo struct {
 	// Name of the queue.
 	Queue string
 	// Total number of bytes that the queue and its tasks require to be stored in redis.
 	MemoryUsage int64
 	// Size is the total number of tasks in the queue.
 	// The value is the sum of Pending, Active, Scheduled, Retry, and Archived.
 	Size int
 	// Number of pending tasks.
 	Pending int
 	// Number of active tasks.
@@ -63,20 +66,23 @@ type QueueStats struct {
 	Retry int
 	// Number of archived tasks.
 	Archived int
 	// Total number of tasks being processed during the given date.
 	// The number includes both succeeded and failed tasks.
 	Processed int
 	// Total number of tasks failed to be processed during the given date.
 	Failed int
 	// Paused indicates whether the queue is paused.
 	// If true, tasks in the queue will not be processed.
 	Paused bool
-	// Time when this stats was taken.
+	// Time when this queue info snapshot was taken.
 	Timestamp time.Time
 }
-// CurrentStats returns a current stats of the given queue.
-func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
+// GetQueueInfo returns current information of the given queue.
+func (i *Inspector) GetQueueInfo(qname string) (*QueueInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
 		return nil, err
 	}
@@ -84,7 +90,7 @@ func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
 	if err != nil {
 		return nil, err
 	}
-	return &QueueStats{
+	return &QueueInfo{
 		Queue:       stats.Queue,
 		MemoryUsage: stats.MemoryUsage,
 		Size:        stats.Size,
@@ -134,23 +140,16 @@ func (i *Inspector) History(qname string, n int) ([]*DailyStats, error) {
 	return res, nil
 }
-// ErrQueueNotFound indicates that the specified queue does not exist.
-type ErrQueueNotFound struct {
-	qname string
-}
-func (e *ErrQueueNotFound) Error() string {
-	return fmt.Sprintf("queue %q does not exist", e.qname)
-}
-// ErrQueueNotEmpty indicates that the specified queue is not empty.
-type ErrQueueNotEmpty struct {
-	qname string
-}
-func (e *ErrQueueNotEmpty) Error() string {
-	return fmt.Sprintf("queue %q is not empty", e.qname)
-}
+var (
+	// ErrQueueNotFound indicates that the specified queue does not exist.
+	ErrQueueNotFound = errors.New("queue not found")
+	// ErrQueueNotEmpty indicates that the specified queue is not empty.
+	ErrQueueNotEmpty = errors.New("queue is not empty")
+	// ErrTaskNotFound indicates that the specified task cannot be found in the queue.
+	ErrTaskNotFound = errors.New("task not found")
+)
 // DeleteQueue removes the specified queue.
 //
@@ -164,134 +163,34 @@ func (e *ErrQueueNotEmpty) Error() string {
 // returns ErrQueueNotEmpty.
 func (i *Inspector) DeleteQueue(qname string, force bool) error {
 	err := i.rdb.RemoveQueue(qname, force)
-	if _, ok := err.(*rdb.ErrQueueNotFound); ok {
-		return &ErrQueueNotFound{qname}
+	if errors.IsQueueNotFound(err) {
+		return fmt.Errorf("%w: queue=%q", ErrQueueNotFound, qname)
 	}
-	if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
-		return &ErrQueueNotEmpty{qname}
+	if errors.IsQueueNotEmpty(err) {
+		return fmt.Errorf("%w: queue=%q", ErrQueueNotEmpty, qname)
 	}
 	return err
 }
-// PendingTask is a task in a queue and is ready to be processed.
-type PendingTask struct {
-	*asynq.Task
-	ID        string
-	Queue     string
-	MaxRetry  int
-	Retried   int
-	LastError string
-}
-// ActiveTask is a task that's currently being processed.
-type ActiveTask struct {
-	*asynq.Task
-	ID        string
-	Queue     string
-	MaxRetry  int
-	Retried   int
-	LastError string
-}
-// ScheduledTask is a task scheduled to be processed in the future.
-type ScheduledTask struct {
-	*asynq.Task
-	ID            string
-	Queue         string
-	MaxRetry      int
-	Retried       int
-	LastError     string
-	NextProcessAt time.Time
-	score int64
-}
-// RetryTask is a task scheduled to be retried in the future.
-type RetryTask struct {
-	*asynq.Task
-	ID            string
-	Queue         string
-	NextProcessAt time.Time
-	MaxRetry      int
-	Retried       int
-	LastError     string
-	// TODO: LastFailedAt time.Time
-	score int64
-}
-// ArchivedTask is a task archived for debugging and inspection purposes, and
-// it won't be retried automatically.
-// A task can be archived when the task exhausts its retry counts or manually
-// archived by a user via the CLI or Inspector.
-type ArchivedTask struct {
-	*asynq.Task
-	ID           string
-	Queue        string
-	MaxRetry     int
-	Retried      int
-	LastFailedAt time.Time
-	LastError    string
-	score int64
-}
-// Format string used for task key.
-// Format is <prefix>:<uuid>:<score>.
-const taskKeyFormat = "%s:%v:%v"
-// Prefix used for task key.
-const (
-	keyPrefixPending   = "p"
-	keyPrefixScheduled = "s"
-	keyPrefixRetry     = "r"
-	keyPrefixArchived  = "a"
-	allKeyPrefixes = keyPrefixPending + keyPrefixScheduled + keyPrefixRetry + keyPrefixArchived
-)
-// Key returns a key used to delete, and archive the pending task.
-func (t *PendingTask) Key() string {
-	// Note: Pending tasks are stored in redis LIST, therefore no score.
-	// Use zero for the score to use the same key format.
-	return fmt.Sprintf(taskKeyFormat, keyPrefixPending, t.ID, 0)
-}
-// Key returns a key used to delete, run, and archive the scheduled task.
-func (t *ScheduledTask) Key() string {
-	return fmt.Sprintf(taskKeyFormat, keyPrefixScheduled, t.ID, t.score)
-}
-// Key returns a key used to delete, run, and archive the retry task.
-func (t *RetryTask) Key() string {
-	return fmt.Sprintf(taskKeyFormat, keyPrefixRetry, t.ID, t.score)
-}
-// Key returns a key used to delete and run the archived task.
-func (t *ArchivedTask) Key() string {
-	return fmt.Sprintf(taskKeyFormat, keyPrefixArchived, t.ID, t.score)
-}
-// parseTaskKey parses a key string and returns each part of key with proper
-// type if valid, otherwise it reports an error.
-func parseTaskKey(key string) (prefix string, id uuid.UUID, score int64, err error) {
-	parts := strings.Split(key, ":")
-	if len(parts) != 3 {
-		return "", uuid.Nil, 0, fmt.Errorf("invalid id")
-	}
-	id, err = uuid.Parse(parts[1])
-	if err != nil {
-		return "", uuid.Nil, 0, fmt.Errorf("invalid id")
-	}
-	score, err = strconv.ParseInt(parts[2], 10, 64)
-	if err != nil {
-		return "", uuid.Nil, 0, fmt.Errorf("invalid id")
-	}
-	prefix = parts[0]
-	if len(prefix) != 1 || !strings.Contains(allKeyPrefixes, prefix) {
-		return "", uuid.Nil, 0, fmt.Errorf("invalid id")
-	}
-	return prefix, id, score, nil
+// GetTaskInfo retrieves task information given a task id and queue name.
+//
+// Returns ErrQueueNotFound if a queue with the given name doesn't exist.
+// Returns ErrTaskNotFound if a task with the given id doesn't exist in the queue.
+func (i *Inspector) GetTaskInfo(qname, id string) (*TaskInfo, error) {
+	taskid, err := uuid.Parse(id)
+	if err != nil {
+		return nil, fmt.Errorf("asynq: %s is not a valid task id", id)
+	}
+	info, err := i.rdb.GetTaskInfo(qname, taskid)
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case errors.IsTaskNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrTaskNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
+	}
+	return newTaskInfo(info.Message, info.State, info.NextProcessAt), nil
 }
// ListOption specifies behavior of list operation. // ListOption specifies behavior of list operation.
@@ -358,26 +257,23 @@ func Page(n int) ListOption {
 // ListPendingTasks retrieves pending tasks from the specified queue.
 //
 // By default, it retrieves the first 30 tasks.
-func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*PendingTask, error) {
+func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
 	opt := composeListOptions(opts...)
 	pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
 	msgs, err := i.rdb.ListPending(qname, pgn)
-	if err != nil {
-		return nil, err
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
-	var tasks []*PendingTask
+	now := time.Now()
+	var tasks []*TaskInfo
 	for _, m := range msgs {
-		tasks = append(tasks, &PendingTask{
-			Task:      asynq.NewTask(m.Type, m.Payload),
-			ID:        m.ID.String(),
-			Queue:     m.Queue,
-			MaxRetry:  m.Retry,
-			Retried:   m.Retried,
-			LastError: m.ErrorMsg,
-		})
+		tasks = append(tasks, newTaskInfo(m, base.TaskStatePending, now))
 	}
 	return tasks, err
 }
@@ -385,124 +281,106 @@ func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*Pendi
 // ListActiveTasks retrieves active tasks from the specified queue.
 //
 // By default, it retrieves the first 30 tasks.
-func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*ActiveTask, error) {
+func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
 	opt := composeListOptions(opts...)
 	pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
 	msgs, err := i.rdb.ListActive(qname, pgn)
-	if err != nil {
-		return nil, err
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
-	var tasks []*ActiveTask
+	var tasks []*TaskInfo
 	for _, m := range msgs {
-		tasks = append(tasks, &ActiveTask{
-			Task:      asynq.NewTask(m.Type, m.Payload),
-			ID:        m.ID.String(),
-			Queue:     m.Queue,
-			MaxRetry:  m.Retry,
-			Retried:   m.Retried,
-			LastError: m.ErrorMsg,
-		})
+		tasks = append(tasks, newTaskInfo(m, base.TaskStateActive, time.Time{}))
 	}
 	return tasks, err
 }
 // ListScheduledTasks retrieves scheduled tasks from the specified queue.
-// Tasks are sorted by NextProcessAt field in ascending order.
+// Tasks are sorted by NextProcessAt in ascending order.
 //
 // By default, it retrieves the first 30 tasks.
-func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*ScheduledTask, error) {
+func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
 	opt := composeListOptions(opts...)
 	pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
 	zs, err := i.rdb.ListScheduled(qname, pgn)
-	if err != nil {
-		return nil, err
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
-	var tasks []*ScheduledTask
+	var tasks []*TaskInfo
 	for _, z := range zs {
-		processAt := time.Unix(z.Score, 0)
-		t := asynq.NewTask(z.Message.Type, z.Message.Payload)
-		tasks = append(tasks, &ScheduledTask{
-			Task:          t,
-			ID:            z.Message.ID.String(),
-			Queue:         z.Message.Queue,
-			MaxRetry:      z.Message.Retry,
-			Retried:       z.Message.Retried,
-			LastError:     z.Message.ErrorMsg,
-			NextProcessAt: processAt,
-			score:         z.Score,
-		})
+		tasks = append(tasks, newTaskInfo(
+			z.Message,
+			base.TaskStateScheduled,
+			time.Unix(z.Score, 0),
+		))
 	}
 	return tasks, nil
 }
 // ListRetryTasks retrieves retry tasks from the specified queue.
-// Tasks are sorted by NextProcessAt field in ascending order.
+// Tasks are sorted by NextProcessAt in ascending order.
 //
 // By default, it retrieves the first 30 tasks.
-func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*RetryTask, error) {
+func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
 	opt := composeListOptions(opts...)
 	pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
 	zs, err := i.rdb.ListRetry(qname, pgn)
-	if err != nil {
-		return nil, err
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
-	var tasks []*RetryTask
+	var tasks []*TaskInfo
 	for _, z := range zs {
-		processAt := time.Unix(z.Score, 0)
-		t := asynq.NewTask(z.Message.Type, z.Message.Payload)
-		tasks = append(tasks, &RetryTask{
-			Task:          t,
-			ID:            z.Message.ID.String(),
-			Queue:         z.Message.Queue,
-			NextProcessAt: processAt,
-			MaxRetry:      z.Message.Retry,
-			Retried:       z.Message.Retried,
-			// TODO: LastFailedAt: z.Message.LastFailedAt
-			LastError: z.Message.ErrorMsg,
-			score:     z.Score,
-		})
+		tasks = append(tasks, newTaskInfo(
+			z.Message,
+			base.TaskStateRetry,
+			time.Unix(z.Score, 0),
+		))
 	}
 	return tasks, nil
 }
 // ListArchivedTasks retrieves archived tasks from the specified queue.
-// Tasks are sorted by LastFailedAt field in descending order.
+// Tasks are sorted by LastFailedAt in descending order.
 //
 // By default, it retrieves the first 30 tasks.
-func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*ArchivedTask, error) {
+func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return nil, err
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
 	opt := composeListOptions(opts...)
 	pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
 	zs, err := i.rdb.ListArchived(qname, pgn)
-	if err != nil {
-		return nil, err
+	switch {
+	case errors.IsQueueNotFound(err):
+		return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case err != nil:
+		return nil, fmt.Errorf("asynq: %v", err)
 	}
-	var tasks []*ArchivedTask
+	var tasks []*TaskInfo
 	for _, z := range zs {
-		failedAt := time.Unix(z.Score, 0)
-		t := asynq.NewTask(z.Message.Type, z.Message.Payload)
-		tasks = append(tasks, &ArchivedTask{
-			Task:         t,
-			ID:           z.Message.ID.String(),
-			Queue:        z.Message.Queue,
-			MaxRetry:     z.Message.Retry,
-			Retried:      z.Message.Retried,
-			LastFailedAt: failedAt,
-			LastError:    z.Message.ErrorMsg,
-			score:        z.Score,
-		})
+		tasks = append(tasks, newTaskInfo(
+			z.Message,
+			base.TaskStateArchived,
+			time.Time{},
+		))
 	}
 	return tasks, nil
 }
@@ -547,27 +425,32 @@ func (i *Inspector) DeleteAllArchivedTasks(qname string) (int, error) {
 	return int(n), err
 }
-// DeleteTaskByKey deletes a task with the given key from the given queue.
-func (i *Inspector) DeleteTaskByKey(qname, key string) error {
+// DeleteTask deletes a task with the given id from the given queue.
+// The task needs to be in pending, scheduled, retry, or archived state,
+// otherwise DeleteTask will return an error.
+//
+// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
+// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
+// If the task is in active state, it returns a non-nil error.
+func (i *Inspector) DeleteTask(qname, id string) error {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return err
+		return fmt.Errorf("asynq: %v", err)
 	}
-	prefix, id, score, err := parseTaskKey(key)
+	taskid, err := uuid.Parse(id)
 	if err != nil {
-		return err
+		return fmt.Errorf("asynq: %s is not a valid task id", id)
 	}
-	switch prefix {
-	case keyPrefixPending:
-		return i.rdb.DeletePendingTask(qname, id)
-	case keyPrefixScheduled:
-		return i.rdb.DeleteScheduledTask(qname, id, score)
-	case keyPrefixRetry:
-		return i.rdb.DeleteRetryTask(qname, id, score)
-	case keyPrefixArchived:
-		return i.rdb.DeleteArchivedTask(qname, id, score)
-	default:
-		return fmt.Errorf("invalid key")
+	err = i.rdb.DeleteTask(qname, taskid)
+	switch {
+	case errors.IsQueueNotFound(err):
+		return fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case errors.IsTaskNotFound(err):
+		return fmt.Errorf("asynq: %w", ErrTaskNotFound)
+	case err != nil:
+		return fmt.Errorf("asynq: %v", err)
 	}
+	return nil
 }
// RunAllScheduledTasks transition all scheduled tasks to pending state from the given queue, // RunAllScheduledTasks transition all scheduled tasks to pending state from the given queue,
@@ -600,27 +483,31 @@ func (i *Inspector) RunAllArchivedTasks(qname string) (int, error) {
 	return int(n), err
 }
-// RunTaskByKey transition a task to pending state given task key and queue name.
-func (i *Inspector) RunTaskByKey(qname, key string) error {
+// RunTask updates the task to pending state given a queue name and task id.
+// The task needs to be in scheduled, retry, or archived state, otherwise RunTask
+// will return an error.
+//
+// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
+// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
+// If the task is in pending or active state, it returns a non-nil error.
+func (i *Inspector) RunTask(qname, id string) error {
 	if err := base.ValidateQueueName(qname); err != nil {
-		return err
+		return fmt.Errorf("asynq: %v", err)
 	}
-	prefix, id, score, err := parseTaskKey(key)
+	taskid, err := uuid.Parse(id)
 	if err != nil {
-		return err
+		return fmt.Errorf("asynq: %s is not a valid task id", id)
 	}
-	switch prefix {
-	case keyPrefixScheduled:
-		return i.rdb.RunScheduledTask(qname, id, score)
-	case keyPrefixRetry:
-		return i.rdb.RunRetryTask(qname, id, score)
-	case keyPrefixArchived:
-		return i.rdb.RunArchivedTask(qname, id, score)
-	case keyPrefixPending:
-		return fmt.Errorf("task is already pending for run")
-	default:
-		return fmt.Errorf("invalid key")
+	err = i.rdb.RunTask(qname, taskid)
+	switch {
+	case errors.IsQueueNotFound(err):
+		return fmt.Errorf("asynq: %w", ErrQueueNotFound)
+	case errors.IsTaskNotFound(err):
+		return fmt.Errorf("asynq: %w", ErrTaskNotFound)
+	case err != nil:
+		return fmt.Errorf("asynq: %v", err)
 	}
+	return nil
 }
// ArchiveAllPendingTasks archives all pending tasks from the given queue, // ArchiveAllPendingTasks archives all pending tasks from the given queue,
@@ -653,34 +540,38 @@ func (i *Inspector) ArchiveAllRetryTasks(qname string) (int, error) {
return int(n), err return int(n), err
} }
// ArchiveTaskByKey archives a task with the given key in the given queue. // ArchiveTask archives a task with the given id in the given queue.
func (i *Inspector) ArchiveTaskByKey(qname, key string) error { // The task needs to be in pending, scheduled, or retry state, otherwise ArchiveTask
// will return an error.
//
// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
// If the task is already archived, it returns a non-nil error.
func (i *Inspector) ArchiveTask(qname, id string) error {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(qname); err != nil {
return err return fmt.Errorf("asynq: %v", err)
} }
prefix, id, score, err := parseTaskKey(key) taskid, err := uuid.Parse(id)
if err != nil { if err != nil {
return err return fmt.Errorf("asynq: %s is not a valid task id", id)
} }
switch prefix { err = i.rdb.ArchiveTask(qname, taskid)
case keyPrefixPending: switch {
return i.rdb.ArchivePendingTask(qname, id) case errors.IsQueueNotFound(err):
case keyPrefixScheduled: return fmt.Errorf("asynq: %w", ErrQueueNotFound)
return i.rdb.ArchiveScheduledTask(qname, id, score) case errors.IsTaskNotFound(err):
case keyPrefixRetry: return fmt.Errorf("asynq: %w", ErrTaskNotFound)
return i.rdb.ArchiveRetryTask(qname, id, score) case err != nil:
case keyPrefixArchived: return fmt.Errorf("asynq: %v", err)
return fmt.Errorf("task is already archived")
default:
return fmt.Errorf("invalid key")
} }
return nil
} }
// CancelActiveTask sends a signal to cancel processing of the task with // CancelProcessing sends a signal to cancel processing of the task
// the given id. CancelActiveTask is best-effort, which means that it does not // given a task id. CancelProcessing is best-effort, which means that it does not
// guarantee that the task with the given id will be canceled. The return // guarantee that the task with the given id will be canceled. The return
// value only indicates whether the cancelation signal has been sent. // value only indicates whether the cancelation signal has been sent.
func (i *Inspector) CancelActiveTask(id string) error { func (i *Inspector) CancelProcessing(id string) error {
return i.rdb.PublishCancelation(id) return i.rdb.PublishCancelation(id)
} }
@@ -732,13 +623,12 @@ func (i *Inspector) Servers() ([]*ServerInfo, error) {
continue continue
} }
wrkInfo := &WorkerInfo{ wrkInfo := &WorkerInfo{
Started: w.Started, TaskID: w.ID,
Deadline: w.Deadline, TaskType: w.Type,
Task: &ActiveTask{ TaskPayload: w.Payload,
Task: asynq.NewTask(w.Type, w.Payload), Queue: w.Queue,
ID: w.ID, Started: w.Started,
Queue: w.Queue, Deadline: w.Deadline,
},
} }
srvInfo.ActiveWorkers = append(srvInfo.ActiveWorkers, wrkInfo) srvInfo.ActiveWorkers = append(srvInfo.ActiveWorkers, wrkInfo)
} }
@@ -775,8 +665,14 @@ type ServerInfo struct {
// WorkerInfo describes a running worker processing a task. // WorkerInfo describes a running worker processing a task.
type WorkerInfo struct { type WorkerInfo struct {
// The task the worker is processing. // ID of the task the worker is processing.
Task *ActiveTask TaskID string
// Type of the task the worker is processing.
TaskType string
// Payload of the task the worker is processing.
TaskPayload []byte
// Queue from which the worker got its task.
Queue string
// Time the worker started processing the task. // Time the worker started processing the task.
Started time.Time Started time.Time
// Time the worker needs to finish processing the task by. // Time the worker needs to finish processing the task by.
@@ -798,14 +694,16 @@ type ClusterNode struct {
} }
// ClusterNodes returns a list of nodes the given queue belongs to. // ClusterNodes returns a list of nodes the given queue belongs to.
func (i *Inspector) ClusterNodes(qname string) ([]ClusterNode, error) { //
// Only relevant if task queues are stored in redis cluster.
func (i *Inspector) ClusterNodes(qname string) ([]*ClusterNode, error) {
nodes, err := i.rdb.ClusterNodes(qname) nodes, err := i.rdb.ClusterNodes(qname)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var res []ClusterNode var res []*ClusterNode
for _, node := range nodes { for _, node := range nodes {
res = append(res, ClusterNode{ID: node.ID, Addr: node.Addr}) res = append(res, &ClusterNode{ID: node.ID, Addr: node.Addr})
} }
return res, nil return res, nil
} }
@@ -819,10 +717,10 @@ type SchedulerEntry struct {
Spec string Spec string
// Periodic Task registered for this entry. // Periodic Task registered for this entry.
Task *asynq.Task Task *Task
// Opts is the options for the periodic task. // Opts is the options for the periodic task.
Opts []asynq.Option Opts []Option
// Next shows the next time the task will be enqueued. // Next shows the next time the task will be enqueued.
Next time.Time Next time.Time
@@ -841,8 +739,8 @@ func (i *Inspector) SchedulerEntries() ([]*SchedulerEntry, error) {
return nil, err return nil, err
} }
for _, e := range res { for _, e := range res {
task := asynq.NewTask(e.Type, e.Payload) task := NewTask(e.Type, e.Payload)
var opts []asynq.Option var opts []Option
for _, s := range e.Opts { for _, s := range e.Opts {
if o, err := parseOption(s); err == nil { if o, err := parseOption(s); err == nil {
// ignore bad data // ignore bad data
@@ -863,7 +761,7 @@ func (i *Inspector) SchedulerEntries() ([]*SchedulerEntry, error) {
// parseOption interprets a string s as an Option and returns the Option if parsing is successful, // parseOption interprets a string s as an Option and returns the Option if parsing is successful,
// otherwise returns a non-nil error. // otherwise returns a non-nil error.
func parseOption(s string) (asynq.Option, error) { func parseOption(s string) (Option, error) {
fn, arg := parseOptionFunc(s), parseOptionArg(s) fn, arg := parseOptionFunc(s), parseOptionArg(s)
switch fn { switch fn {
case "Queue": case "Queue":
@@ -871,43 +769,43 @@ func parseOption(s string) (asynq.Option, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.Queue(qname), nil return Queue(qname), nil
case "MaxRetry": case "MaxRetry":
n, err := strconv.Atoi(arg) n, err := strconv.Atoi(arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.MaxRetry(n), nil return MaxRetry(n), nil
case "Timeout": case "Timeout":
d, err := time.ParseDuration(arg) d, err := time.ParseDuration(arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.Timeout(d), nil return Timeout(d), nil
case "Deadline": case "Deadline":
t, err := time.Parse(time.UnixDate, arg) t, err := time.Parse(time.UnixDate, arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.Deadline(t), nil return Deadline(t), nil
case "Unique": case "Unique":
d, err := time.ParseDuration(arg) d, err := time.ParseDuration(arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.Unique(d), nil return Unique(d), nil
case "ProcessAt": case "ProcessAt":
t, err := time.Parse(time.UnixDate, arg) t, err := time.Parse(time.UnixDate, arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.ProcessAt(t), nil return ProcessAt(t), nil
case "ProcessIn": case "ProcessIn":
d, err := time.ParseDuration(arg) d, err := time.ParseDuration(arg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return asynq.ProcessIn(d), nil return ProcessIn(d), nil
default: default:
return nil, fmt.Errorf("cannot parse option string %q", s) return nil, fmt.Errorf("cannot parse option string %q", s)
} }
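parseOption dispatches on a name extracted by `parseOptionFunc` and an argument extracted by `parseOptionArg`. A self-contained sketch of those two helpers, assuming the serialized form is `Name(arg)` as the switch cases suggest (this shape is inferred, not copied from the library):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOptionFunc returns the portion before the first "(", i.e. the
// option's name; if there is no "(", the whole string is the name.
func parseOptionFunc(s string) string {
	if i := strings.Index(s, "("); i >= 0 {
		return s[:i]
	}
	return s
}

// parseOptionArg returns the text between the parentheses, or "" when the
// string does not carry an argument.
func parseOptionArg(s string) string {
	i := strings.Index(s, "(")
	if i < 0 || !strings.HasSuffix(s, ")") {
		return ""
	}
	return s[i+1 : len(s)-1]
}

func main() {
	for _, s := range []string{"MaxRetry(25)", "Timeout(10m)", `Queue("critical")`} {
		fmt.Printf("fn=%s arg=%s\n", parseOptionFunc(s), parseOptionArg(s))
	}
}
```

With `"MaxRetry(25)"` this yields the name `MaxRetry` and the argument `25`, which the switch then feeds to `strconv.Atoi`.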

File diff suppressed because it is too large


@@ -1,22 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
/*
Package inspeq provides helper types and functions to inspect queues and tasks managed by Asynq.
Inspector is used to query and mutate the state of queues and tasks.
Example:
inspector := inspeq.New(asynq.RedisClientOpt{Addr: "localhost:6379"})
tasks, err := inspector.ListArchivedTasks("my-queue")
for _, t := range tasks {
if err := inspector.DeleteTaskByKey(t.Key()); err != nil {
// handle error
}
}
*/
package inspeq


@@ -10,6 +10,7 @@ import (
"math" "math"
"sort" "sort"
"testing" "testing"
"time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
@@ -94,13 +95,13 @@ var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID") var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload. // NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage { func NewTaskMessage(taskType string, payload []byte) *base.TaskMessage {
return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName) return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName)
} }
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a // NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name. // task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage { func NewTaskMessageWithQueue(taskType string, payload []byte, qname string) *base.TaskMessage {
return &base.TaskMessage{ return &base.TaskMessage{
ID: uuid.New(), ID: uuid.New(),
Type: taskType, Type: taskType,
@@ -112,17 +113,28 @@ func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qn
} }
} }
// JSON serializes the given key-value pairs into a stream of bytes in JSON.
func JSON(kv map[string]interface{}) []byte {
b, err := json.Marshal(kv)
if err != nil {
panic(err)
}
return b
}
// TaskMessageAfterRetry returns an updated copy of t after retry. // TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments the retry count and sets the error message. // It increments the retry count and sets the error message and last_failed_at time.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string) *base.TaskMessage { func TaskMessageAfterRetry(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.Retried = t.Retried + 1 t.Retried = t.Retried + 1
t.ErrorMsg = errMsg t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t return &t
} }
// TaskMessageWithError returns an updated copy of t with the given error message. // TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage { func TaskMessageWithError(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.ErrorMsg = errMsg t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t return &t
} }
@@ -130,7 +142,7 @@ func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
// Calling test will fail if marshaling errors out. // Calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string { func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
tb.Helper() tb.Helper()
data, err := json.Marshal(msg) data, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
@@ -141,34 +153,11 @@ func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
// Calling test will fail if unmarshaling errors out. // Calling test will fail if unmarshaling errors out.
func MustUnmarshal(tb testing.TB, data string) *base.TaskMessage { func MustUnmarshal(tb testing.TB, data string) *base.TaskMessage {
tb.Helper() tb.Helper()
var msg base.TaskMessage msg, err := base.DecodeMessage([]byte(data))
err := json.Unmarshal([]byte(data), &msg)
if err != nil { if err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
return &msg return msg
}
// MustMarshalSlice marshals a slice of task messages and return a slice of
// json strings. Calling test will fail if marshaling errors out.
func MustMarshalSlice(tb testing.TB, msgs []*base.TaskMessage) []string {
tb.Helper()
var data []string
for _, m := range msgs {
data = append(data, MustMarshal(tb, m))
}
return data
}
// MustUnmarshalSlice unmarshals a slice of strings into a slice of task message structs.
// Calling test will fail if marshaling errors out.
func MustUnmarshalSlice(tb testing.TB, data []string) []*base.TaskMessage {
tb.Helper()
var msgs []*base.TaskMessage
for _, s := range data {
msgs = append(msgs, MustUnmarshal(tb, s))
}
return msgs
} }
// FlushDB deletes all the keys of the currently selected DB. // FlushDB deletes all the keys of the currently selected DB.
@@ -196,48 +185,49 @@ func FlushDB(tb testing.TB, r redis.UniversalClient) {
func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) { func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.QueueKey(qname), msgs) seedRedisList(tb, r, base.PendingKey(qname), msgs, base.TaskStatePending)
} }
// SeedActiveQueue initializes the active queue with the given messages. // SeedActiveQueue initializes the active queue with the given messages.
func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) { func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs) seedRedisList(tb, r, base.ActiveKey(qname), msgs, base.TaskStateActive)
} }
// SeedScheduledQueue initializes the scheduled queue with the given messages. // SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) { func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries) seedRedisZSet(tb, r, base.ScheduledKey(qname), entries, base.TaskStateScheduled)
} }
// SeedRetryQueue initializes the retry queue with the given messages. // SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) { func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries) seedRedisZSet(tb, r, base.RetryKey(qname), entries, base.TaskStateRetry)
} }
// SeedArchivedQueue initializes the archived queue with the given messages. // SeedArchivedQueue initializes the archived queue with the given messages.
func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) { func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ArchivedKey(qname), entries) seedRedisZSet(tb, r, base.ArchivedKey(qname), entries, base.TaskStateArchived)
} }
// SeedDeadlines initializes the deadlines set with the given entries. // SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) { func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
r.SAdd(base.AllQueues, qname) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries) seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries, base.TaskStateActive)
} }
// SeedAllPendingQueues initializes all of the specified queues with the given messages. // SeedAllPendingQueues initializes all of the specified queues with the given messages.
// //
// pending maps a queue name to a list of messages. // pending maps a queue name to a list of messages.
func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) { func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range pending { for q, msgs := range pending {
SeedPendingQueue(tb, r, msgs, q) SeedPendingQueue(tb, r, msgs, q)
} }
@@ -245,6 +235,7 @@ func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[st
// SeedAllActiveQueues initializes all of the specified active queues with the given messages. // SeedAllActiveQueues initializes all of the specified active queues with the given messages.
func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) { func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range active { for q, msgs := range active {
SeedActiveQueue(tb, r, msgs, q) SeedActiveQueue(tb, r, msgs, q)
} }
@@ -252,6 +243,7 @@ func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[stri
// SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries. // SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries.
func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) { func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) {
tb.Helper()
for q, entries := range scheduled { for q, entries := range scheduled {
SeedScheduledQueue(tb, r, entries, q) SeedScheduledQueue(tb, r, entries, q)
} }
@@ -259,6 +251,7 @@ func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled ma
// SeedAllRetryQueues initializes all of the specified retry queues with the given entries. // SeedAllRetryQueues initializes all of the specified retry queues with the given entries.
func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) { func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) {
tb.Helper()
for q, entries := range retry { for q, entries := range retry {
SeedRetryQueue(tb, r, entries, q) SeedRetryQueue(tb, r, entries, q)
} }
@@ -266,6 +259,7 @@ func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string
// SeedAllArchivedQueues initializes all of the specified archived queues with the given entries. // SeedAllArchivedQueues initializes all of the specified archived queues with the given entries.
func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[string][]base.Z) { func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[string][]base.Z) {
tb.Helper()
for q, entries := range archived { for q, entries := range archived {
SeedArchivedQueue(tb, r, entries, q) SeedArchivedQueue(tb, r, entries, q)
} }
@@ -273,101 +267,181 @@ func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[
// SeedAllDeadlines initializes all of the deadlines with the given entries. // SeedAllDeadlines initializes all of the deadlines with the given entries.
func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) { func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) {
tb.Helper()
for q, entries := range deadlines { for q, entries := range deadlines {
SeedDeadlines(tb, r, entries, q) SeedDeadlines(tb, r, entries, q)
} }
} }
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string, msgs []*base.TaskMessage) { func seedRedisList(tb testing.TB, c redis.UniversalClient, key string,
data := MustMarshalSlice(tb, msgs) msgs []*base.TaskMessage, state base.TaskState) {
for _, s := range data { tb.Helper()
if err := c.LPush(key, s).Err(); err != nil { for _, msg := range msgs {
encoded := MustMarshal(tb, msg)
if err := c.LPush(key, msg.ID.String()).Err(); err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
key := base.TaskKey(msg.Queue, msg.ID.String())
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"timeout": msg.Timeout,
"deadline": msg.Deadline,
"unique_key": msg.UniqueKey,
}
if err := c.HSet(key, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(msg.UniqueKey, msg.ID.String(), 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
} }
} }
func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string, items []base.Z) { func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string,
items []base.Z, state base.TaskState) {
tb.Helper()
for _, item := range items { for _, item := range items {
z := &redis.Z{Member: MustMarshal(tb, item.Message), Score: float64(item.Score)} msg := item.Message
encoded := MustMarshal(tb, msg)
z := &redis.Z{Member: msg.ID.String(), Score: float64(item.Score)}
if err := c.ZAdd(key, z).Err(); err != nil { if err := c.ZAdd(key, z).Err(); err != nil {
tb.Fatal(err) tb.Fatal(err)
} }
key := base.TaskKey(msg.Queue, msg.ID.String())
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"timeout": msg.Timeout,
"deadline": msg.Deadline,
"unique_key": msg.UniqueKey,
}
if err := c.HSet(key, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(msg.UniqueKey, msg.ID.String(), 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
} }
} }
// GetPendingMessages returns all pending messages in the given queue. // GetPendingMessages returns all pending messages in the given queue.
// It also asserts the state field of the task.
func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage { func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getListMessages(tb, r, base.QueueKey(qname)) return getMessagesFromList(tb, r, qname, base.PendingKey, base.TaskStatePending)
} }
// GetActiveMessages returns all active messages in the given queue. // GetActiveMessages returns all active messages in the given queue.
// It also asserts the state field of the task.
func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage { func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getListMessages(tb, r, base.ActiveKey(qname)) return getMessagesFromList(tb, r, qname, base.ActiveKey, base.TaskStateActive)
} }
// GetScheduledMessages returns all scheduled task messages in the given queue. // GetScheduledMessages returns all scheduled task messages in the given queue.
// It also asserts the state field of the task.
func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage { func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.ScheduledKey(qname)) return getMessagesFromZSet(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
} }
// GetRetryMessages returns all retry messages in the given queue. // GetRetryMessages returns all retry messages in the given queue.
// It also asserts the state field of the task.
func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage { func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.RetryKey(qname)) return getMessagesFromZSet(tb, r, qname, base.RetryKey, base.TaskStateRetry)
} }
// GetArchivedMessages returns all archived messages in the given queue. // GetArchivedMessages returns all archived messages in the given queue.
// It also asserts the state field of the task.
func GetArchivedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage { func GetArchivedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.ArchivedKey(qname)) return getMessagesFromZSet(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
} }
// GetScheduledEntries returns all scheduled messages and their scores in the given queue. // GetScheduledEntries returns all scheduled messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z { func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.ScheduledKey(qname)) return getMessagesFromZSetWithScores(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
} }
// GetRetryEntries returns all retry messages and their scores in the given queue. // GetRetryEntries returns all retry messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z { func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.RetryKey(qname)) return getMessagesFromZSetWithScores(tb, r, qname, base.RetryKey, base.TaskStateRetry)
} }
// GetArchivedEntries returns all archived messages and their scores in the given queue. // GetArchivedEntries returns all archived messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z { func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.ArchivedKey(qname)) return getMessagesFromZSetWithScores(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
} }
// GetDeadlinesEntries returns all task messages and their scores in the deadlines set for the given queue. // GetDeadlinesEntries returns all task messages and their scores in the deadlines set for the given queue.
// It also asserts the state field of the task.
func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z { func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.DeadlinesKey(qname)) return getMessagesFromZSetWithScores(tb, r, qname, base.DeadlinesKey, base.TaskStateActive)
} }
func getListMessages(tb testing.TB, r redis.UniversalClient, list string) []*base.TaskMessage { // Retrieves all messages stored under `keyFn(qname)` key in redis list.
data := r.LRange(list, 0, -1).Val() func getMessagesFromList(tb testing.TB, r redis.UniversalClient, qname string,
return MustUnmarshalSlice(tb, data) keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
} tb.Helper()
ids := r.LRange(keyFn(qname), 0, -1).Val()
func getZSetMessages(tb testing.TB, r redis.UniversalClient, zset string) []*base.TaskMessage { var msgs []*base.TaskMessage
data := r.ZRange(zset, 0, -1).Val() for _, id := range ids {
return MustUnmarshalSlice(tb, data) taskKey := base.TaskKey(qname, id)
} data := r.HGet(taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, data))
func getZSetEntries(tb testing.TB, r redis.UniversalClient, zset string) []base.Z { if gotState := r.HGet(taskKey, "state").Val(); gotState != state.String() {
data := r.ZRangeWithScores(zset, 0, -1).Val() tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
var entries []base.Z }
for _, z := range data {
entries = append(entries, base.Z{
Message: MustUnmarshal(tb, z.Member.(string)),
Score: int64(z.Score),
})
} }
return entries return msgs
}
// Retrieves all messages stored under `keyFn(qname)` key in redis zset (sorted-set).
func getMessagesFromZSet(tb testing.TB, r redis.UniversalClient, qname string,
keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
tb.Helper()
ids := r.ZRange(keyFn(qname), 0, -1).Val()
var msgs []*base.TaskMessage
for _, id := range ids {
taskKey := base.TaskKey(qname, id)
msg := r.HGet(taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, msg))
if gotState := r.HGet(taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
}
}
return msgs
}
// Retrieves all messages along with their scores stored under `keyFn(qname)` key in redis zset (sorted-set).
func getMessagesFromZSetWithScores(tb testing.TB, r redis.UniversalClient,
qname string, keyFn func(qname string) string, state base.TaskState) []base.Z {
tb.Helper()
zs := r.ZRangeWithScores(keyFn(qname), 0, -1).Val()
var res []base.Z
for _, z := range zs {
taskID := z.Member.(string)
taskKey := base.TaskKey(qname, taskID)
msg := r.HGet(taskKey, "msg").Val()
res = append(res, base.Z{Message: MustUnmarshal(tb, msg), Score: int64(z.Score)})
if gotState := r.HGet(taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", taskID, gotState, state)
}
}
return res
} }


@@ -7,25 +7,29 @@ package base
import ( import (
"context" "context"
"encoding/json" "crypto/md5"
"encoding/hex"
"fmt" "fmt"
"sort"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/golang/protobuf/ptypes"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/errors"
pb "github.com/hibiken/asynq/internal/proto"
"google.golang.org/protobuf/proto"
) )
// Version of asynq library and CLI. // Version of asynq library and CLI.
const Version = "0.16.1" const Version = "0.18.2"
// DefaultQueueName is the queue name used if none are specified by user. // DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default" const DefaultQueueName = "default"
// DefaultQueue is the redis key for the default queue. // DefaultQueue is the redis key for the default queue.
var DefaultQueue = QueueKey(DefaultQueueName) var DefaultQueue = PendingKey(DefaultQueueName)
// Global Redis keys. // Global Redis keys.
const ( const (
@@ -36,58 +40,116 @@ const (
CancelChannel = "asynq:cancel" // PubSub channel CancelChannel = "asynq:cancel" // PubSub channel
) )
// TaskState denotes the state of a task.
type TaskState int
const (
TaskStateActive TaskState = iota + 1
TaskStatePending
TaskStateScheduled
TaskStateRetry
TaskStateArchived
)
func (s TaskState) String() string {
switch s {
case TaskStateActive:
return "active"
case TaskStatePending:
return "pending"
case TaskStateScheduled:
return "scheduled"
case TaskStateRetry:
return "retry"
case TaskStateArchived:
return "archived"
}
panic(fmt.Sprintf("internal error: unknown task state %d", s))
}
func TaskStateFromString(s string) (TaskState, error) {
switch s {
case "active":
return TaskStateActive, nil
case "pending":
return TaskStatePending, nil
case "scheduled":
return TaskStateScheduled, nil
case "retry":
return TaskStateRetry, nil
case "archived":
return TaskStateArchived, nil
}
return 0, errors.E(errors.FailedPrecondition, fmt.Sprintf("%q is not supported task state", s))
}
// ValidateQueueName validates a given qname to be used as a queue name.
// Returns nil if valid, otherwise returns non-nil error.
func ValidateQueueName(qname string) error {
-	if len(qname) == 0 {
+	if len(strings.TrimSpace(qname)) == 0 {
		return fmt.Errorf("queue name must contain one or more characters")
	}
	return nil
}

-// QueueKey returns a redis key for the given queue name.
-func QueueKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}", qname)
+// QueueKeyPrefix returns a prefix for all keys in the given queue.
+func QueueKeyPrefix(qname string) string {
+	return fmt.Sprintf("asynq:{%s}:", qname)
}
// TaskKeyPrefix returns a prefix for task key.
func TaskKeyPrefix(qname string) string {
return fmt.Sprintf("%st:", QueueKeyPrefix(qname))
}
// TaskKey returns a redis key for the given task message.
func TaskKey(qname, id string) string {
return fmt.Sprintf("%s%s", TaskKeyPrefix(qname), id)
}
// PendingKey returns a redis key for the given queue name.
func PendingKey(qname string) string {
return fmt.Sprintf("%spending", QueueKeyPrefix(qname))
}
// ActiveKey returns a redis key for the active tasks.
func ActiveKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:active", qname)
+	return fmt.Sprintf("%sactive", QueueKeyPrefix(qname))
}

// ScheduledKey returns a redis key for the scheduled tasks.
func ScheduledKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:scheduled", qname)
+	return fmt.Sprintf("%sscheduled", QueueKeyPrefix(qname))
}

// RetryKey returns a redis key for the retry tasks.
func RetryKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:retry", qname)
+	return fmt.Sprintf("%sretry", QueueKeyPrefix(qname))
}

// ArchivedKey returns a redis key for the archived tasks.
func ArchivedKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:archived", qname)
+	return fmt.Sprintf("%sarchived", QueueKeyPrefix(qname))
}

// DeadlinesKey returns a redis key for the deadlines.
func DeadlinesKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:deadlines", qname)
+	return fmt.Sprintf("%sdeadlines", QueueKeyPrefix(qname))
}

// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
-	return fmt.Sprintf("asynq:{%s}:paused", qname)
+	return fmt.Sprintf("%spaused", QueueKeyPrefix(qname))
}

// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
-	return fmt.Sprintf("asynq:{%s}:processed:%s", qname, t.UTC().Format("2006-01-02"))
+	return fmt.Sprintf("%sprocessed:%s", QueueKeyPrefix(qname), t.UTC().Format("2006-01-02"))
}

// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
-	return fmt.Sprintf("asynq:{%s}:failed:%s", qname, t.UTC().Format("2006-01-02"))
+	return fmt.Sprintf("%sfailed:%s", QueueKeyPrefix(qname), t.UTC().Format("2006-01-02"))
}
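The change above centralizes every per-queue key under a single `asynq:{<qname>}:` prefix; the braces form a Redis Cluster hash tag, so all of a queue's keys land on the same slot. A minimal standalone sketch of the scheme (re-implemented here for illustration; these lowercase names are not the library's API):

```go
package main

import "fmt"

// queueKeyPrefix mirrors the hash-tagged prefix: the braces make Redis
// Cluster hash all of a queue's keys to the same slot, which multi-key
// Lua scripts require.
func queueKeyPrefix(qname string) string {
	return fmt.Sprintf("asynq:{%s}:", qname)
}

func pendingKey(qname string) string { return queueKeyPrefix(qname) + "pending" }

func taskKey(qname, id string) string { return queueKeyPrefix(qname) + "t:" + id }

func main() {
	fmt.Println(pendingKey("default"))      // asynq:{default}:pending
	fmt.Println(taskKey("default", "1234")) // asynq:{default}:t:1234
}
```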
// ServerInfoKey returns a redis key for process info.
@@ -111,32 +173,12 @@ func SchedulerHistoryKey(entryID string) string {
}
// UniqueKey returns a redis key with the given type, payload, and queue name.
-func UniqueKey(qname, tasktype string, payload map[string]interface{}) string {
-	return fmt.Sprintf("asynq:{%s}:unique:%s:%s", qname, tasktype, serializePayload(payload))
-}
-
-func serializePayload(payload map[string]interface{}) string {
-	if payload == nil {
-		return "nil"
-	}
-	type entry struct {
-		k string
-		v interface{}
-	}
-	var es []entry
-	for k, v := range payload {
-		es = append(es, entry{k, v})
-	}
-	// sort entries by key
-	sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
-	var b strings.Builder
-	for _, e := range es {
-		if b.Len() > 0 {
-			b.WriteString(",")
-		}
-		b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
-	}
-	return b.String()
-}
+func UniqueKey(qname, tasktype string, payload []byte) string {
+	if payload == nil {
+		return fmt.Sprintf("%sunique:%s:", QueueKeyPrefix(qname), tasktype)
+	}
+	checksum := md5.Sum(payload)
+	return fmt.Sprintf("%sunique:%s:%s", QueueKeyPrefix(qname), tasktype, hex.EncodeToString(checksum[:]))
+}
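The new `UniqueKey` no longer serializes map payloads field by field; it fingerprints the raw payload bytes with MD5, so any byte-identical payload maps to the same uniqueness lock. A self-contained sketch of the same idea (names here are illustrative, not the library's exported API):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// uniqueKey builds a uniqueness-lock key from queue, task type, and an
// MD5 checksum of the payload bytes; a nil payload yields an empty suffix.
func uniqueKey(qname, tasktype string, payload []byte) string {
	prefix := fmt.Sprintf("asynq:{%s}:unique:%s:", qname, tasktype)
	if payload == nil {
		return prefix
	}
	sum := md5.Sum(payload) // [16]byte checksum
	return prefix + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(uniqueKey("default", "email:send", []byte(`{"to":"a@b.c"}`)))
	fmt.Println(uniqueKey("default", "reindex", nil)) // asynq:{default}:unique:reindex:
}
```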
// TaskMessage is the internal representation of a task with additional metadata fields.
@@ -146,7 +188,7 @@ type TaskMessage struct {
	Type string

	// Payload holds data needed to process the task.
-	Payload map[string]interface{}
+	Payload []byte

	// ID is a unique identifier for each task.
	ID uuid.UUID
@@ -163,6 +205,12 @@ type TaskMessage struct {
	// ErrorMsg holds the error message from the last failure.
	ErrorMsg string
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
//
// Use zero to indicate no last failure
LastFailedAt int64
	// Timeout specifies timeout in seconds.
	// If task processing doesn't complete within the timeout, the task will be retried
	// if retry count is remaining. Otherwise it will be moved to the archive.
@@ -184,24 +232,52 @@ type TaskMessage struct {
	UniqueKey string
}
-// EncodeMessage marshals the given task message in JSON and returns an encoded string.
-func EncodeMessage(msg *TaskMessage) (string, error) {
-	b, err := json.Marshal(msg)
-	if err != nil {
-		return "", err
-	}
-	return string(b), nil
-}
+// EncodeMessage marshals the given task message and returns encoded bytes.
+func EncodeMessage(msg *TaskMessage) ([]byte, error) {
+	if msg == nil {
+		return nil, fmt.Errorf("cannot encode nil message")
+	}
+	return proto.Marshal(&pb.TaskMessage{
+		Type:         msg.Type,
+		Payload:      msg.Payload,
+		Id:           msg.ID.String(),
+		Queue:        msg.Queue,
+		Retry:        int32(msg.Retry),
+		Retried:      int32(msg.Retried),
+		ErrorMsg:     msg.ErrorMsg,
+		LastFailedAt: msg.LastFailedAt,
+		Timeout:      msg.Timeout,
+		Deadline:     msg.Deadline,
+		UniqueKey:    msg.UniqueKey,
+	})
+}
-// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
-func DecodeMessage(s string) (*TaskMessage, error) {
-	d := json.NewDecoder(strings.NewReader(s))
-	d.UseNumber()
-	var msg TaskMessage
-	if err := d.Decode(&msg); err != nil {
-		return nil, err
-	}
-	return &msg, nil
-}
+// DecodeMessage unmarshals the given bytes and returns a decoded task message.
+func DecodeMessage(data []byte) (*TaskMessage, error) {
+	var pbmsg pb.TaskMessage
+	if err := proto.Unmarshal(data, &pbmsg); err != nil {
+		return nil, err
+	}
+	return &TaskMessage{
+		Type:         pbmsg.GetType(),
+		Payload:      pbmsg.GetPayload(),
+		ID:           uuid.MustParse(pbmsg.GetId()),
+		Queue:        pbmsg.GetQueue(),
+		Retry:        int(pbmsg.GetRetry()),
+		Retried:      int(pbmsg.GetRetried()),
+		ErrorMsg:     pbmsg.GetErrorMsg(),
+		LastFailedAt: pbmsg.GetLastFailedAt(),
+		Timeout:      pbmsg.GetTimeout(),
+		Deadline:     pbmsg.GetDeadline(),
+		UniqueKey:    pbmsg.GetUniqueKey(),
+	}, nil
+}
// TaskInfo describes a task message and its metadata.
type TaskInfo struct {
Message *TaskMessage
State TaskState
NextProcessAt time.Time
}
// Z represents sorted set member. // Z represents sorted set member.
@@ -210,52 +286,55 @@ type Z struct {
	Score int64
}
-// ServerStatus represents status of a server.
-// ServerStatus methods are concurrency safe.
-type ServerStatus struct {
-	mu  sync.Mutex
-	val ServerStatusValue
-}
-
-// NewServerStatus returns a new status instance given an initial value.
-func NewServerStatus(v ServerStatusValue) *ServerStatus {
-	return &ServerStatus{val: v}
-}
-
-type ServerStatusValue int
-
-const (
-	// StatusIdle indicates the server is in idle state.
-	StatusIdle ServerStatusValue = iota
-
-	// StatusRunning indicates the server is up and active.
-	StatusRunning
-
-	// StatusQuiet indicates the server is up but not active.
-	StatusQuiet
-
-	// StatusStopped indicates the server server has been stopped.
-	StatusStopped
-)
-
-var statuses = []string{
-	"idle",
-	"running",
-	"quiet",
-	"stopped",
-}
-
-func (s *ServerStatus) String() string {
-	s.mu.Lock()
-	defer s.mu.Unlock()
-	if StatusIdle <= s.val && s.val <= StatusStopped {
-		return statuses[s.val]
-	}
-	return "unknown status"
-}
-
-// Get returns the status value.
-func (s *ServerStatus) Get() ServerStatusValue {
+// ServerState represents state of a server.
+// ServerState methods are concurrency safe.
+type ServerState struct {
+	mu  sync.Mutex
+	val ServerStateValue
+}
+
+// NewServerState returns a new state instance.
+// Initial state is set to StateNew.
+func NewServerState() *ServerState {
+	return &ServerState{val: StateNew}
+}
+
+type ServerStateValue int
+
+const (
+	// StateNew represents a new server. Server begins in this state
+	// and then transitions to StateActive when Start or Run is called.
+	StateNew ServerStateValue = iota
+
+	// StateActive indicates the server is up and active.
+	StateActive
+
+	// StateStopped indicates the server is up but no longer processing new tasks.
+	StateStopped
+
+	// StateClosed indicates the server has been shutdown.
+	StateClosed
+)
+
+var serverStates = []string{
+	"new",
+	"active",
+	"stopped",
+	"closed",
+}
+
+func (s *ServerState) String() string {
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	if StateNew <= s.val && s.val <= StateClosed {
+		return serverStates[s.val]
+	}
+	return "unknown status"
+}
+
+// Get returns the status value.
+func (s *ServerState) Get() ServerStateValue {
	s.mu.Lock()
	v := s.val
	s.mu.Unlock()
@@ -263,7 +342,7 @@ func (s *ServerStatus) Get() ServerStatusValue {
}

// Set sets the status value.
-func (s *ServerStatus) Set(v ServerStatusValue) {
+func (s *ServerState) Set(v ServerStateValue) {
	s.mu.Lock()
	s.val = v
	s.mu.Unlock()
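The `ServerState` accessors above all take the mutex before touching the enum value, which is what makes the type safe for concurrent Get/Set/String calls. A runnable sketch of the same mutex-guarded-enum pattern, with illustrative lowercase names rather than the library's types:

```go
package main

import (
	"fmt"
	"sync"
)

type stateValue int

const (
	stateNew stateValue = iota
	stateActive
	stateStopped
	stateClosed
)

var stateNames = []string{"new", "active", "stopped", "closed"}

// serverState guards the enum with a mutex so Set and String are safe
// to call from multiple goroutines, mirroring the pattern above.
type serverState struct {
	mu  sync.Mutex
	val stateValue
}

func (s *serverState) Set(v stateValue) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.val = v
}

func (s *serverState) String() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.val >= stateNew && s.val <= stateClosed {
		return stateNames[s.val]
	}
	return "unknown"
}

func main() {
	st := &serverState{val: stateNew}
	st.Set(stateActive)
	fmt.Println(st) // active
}
```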
@@ -282,6 +361,59 @@ type ServerInfo struct {
	ActiveWorkerCount int
}
// EncodeServerInfo marshals the given ServerInfo and returns the encoded bytes.
func EncodeServerInfo(info *ServerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil server info")
}
queues := make(map[string]int32)
for q, p := range info.Queues {
queues[q] = int32(p)
}
started, err := ptypes.TimestampProto(info.Started)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.ServerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
Concurrency: int32(info.Concurrency),
Queues: queues,
StrictPriority: info.StrictPriority,
Status: info.Status,
StartTime: started,
ActiveWorkerCount: int32(info.ActiveWorkerCount),
})
}
// DecodeServerInfo decodes the given bytes into ServerInfo.
func DecodeServerInfo(b []byte) (*ServerInfo, error) {
var pbmsg pb.ServerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
queues := make(map[string]int)
for q, p := range pbmsg.GetQueues() {
queues[q] = int(p)
}
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime())
if err != nil {
return nil, err
}
return &ServerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
Concurrency: int(pbmsg.GetConcurrency()),
Queues: queues,
StrictPriority: pbmsg.GetStrictPriority(),
Status: pbmsg.GetStatus(),
Started: startTime,
ActiveWorkerCount: int(pbmsg.GetActiveWorkerCount()),
}, nil
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
	Host string
@@ -289,12 +421,65 @@ type WorkerInfo struct {
	ServerID string
	ID       string
	Type     string
+	Payload  []byte
	Queue    string
-	Payload  map[string]interface{}
	Started  time.Time
	Deadline time.Time
}
// EncodeWorkerInfo marshals the given WorkerInfo and returns the encoded bytes.
func EncodeWorkerInfo(info *WorkerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil worker info")
}
startTime, err := ptypes.TimestampProto(info.Started)
if err != nil {
return nil, err
}
deadline, err := ptypes.TimestampProto(info.Deadline)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.WorkerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
TaskId: info.ID,
TaskType: info.Type,
TaskPayload: info.Payload,
Queue: info.Queue,
StartTime: startTime,
Deadline: deadline,
})
}
// DecodeWorkerInfo decodes the given bytes into WorkerInfo.
func DecodeWorkerInfo(b []byte) (*WorkerInfo, error) {
var pbmsg pb.WorkerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime())
if err != nil {
return nil, err
}
deadline, err := ptypes.Timestamp(pbmsg.GetDeadline())
if err != nil {
return nil, err
}
return &WorkerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
ID: pbmsg.GetTaskId(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Queue: pbmsg.GetQueue(),
Started: startTime,
Deadline: deadline,
}, nil
}
// SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
	// Identifier of this entry.
@@ -307,7 +492,7 @@ type SchedulerEntry struct {
	Type string

	// Payload is the payload of the periodic task.
-	Payload map[string]interface{}
+	Payload []byte

	// Opts is the options for the periodic task.
	Opts []string
@@ -320,6 +505,55 @@ type SchedulerEntry struct {
	Prev time.Time
}
// EncodeSchedulerEntry marshals the given entry and returns an encoded bytes.
func EncodeSchedulerEntry(entry *SchedulerEntry) ([]byte, error) {
if entry == nil {
return nil, fmt.Errorf("cannot encode nil scheduler entry")
}
next, err := ptypes.TimestampProto(entry.Next)
if err != nil {
return nil, err
}
prev, err := ptypes.TimestampProto(entry.Prev)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.SchedulerEntry{
Id: entry.ID,
Spec: entry.Spec,
TaskType: entry.Type,
TaskPayload: entry.Payload,
EnqueueOptions: entry.Opts,
NextEnqueueTime: next,
PrevEnqueueTime: prev,
})
}
// DecodeSchedulerEntry unmarshals the given bytes and returns a decoded SchedulerEntry.
func DecodeSchedulerEntry(b []byte) (*SchedulerEntry, error) {
var pbmsg pb.SchedulerEntry
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
next, err := ptypes.Timestamp(pbmsg.GetNextEnqueueTime())
if err != nil {
return nil, err
}
prev, err := ptypes.Timestamp(pbmsg.GetPrevEnqueueTime())
if err != nil {
return nil, err
}
return &SchedulerEntry{
ID: pbmsg.GetId(),
Spec: pbmsg.GetSpec(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Opts: pbmsg.GetEnqueueOptions(),
Next: next,
Prev: prev,
}, nil
}
// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
	// ID of the task that was enqueued.
@@ -329,6 +563,39 @@ type SchedulerEnqueueEvent struct {
	EnqueuedAt time.Time
}
// EncodeSchedulerEnqueueEvent marshals the given event
// and returns an encoded bytes.
func EncodeSchedulerEnqueueEvent(event *SchedulerEnqueueEvent) ([]byte, error) {
if event == nil {
return nil, fmt.Errorf("cannot encode nil enqueue event")
}
enqueuedAt, err := ptypes.TimestampProto(event.EnqueuedAt)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.SchedulerEnqueueEvent{
TaskId: event.TaskID,
EnqueueTime: enqueuedAt,
})
}
// DecodeSchedulerEnqueueEvent unmarshals the given bytes
// and returns a decoded SchedulerEnqueueEvent.
func DecodeSchedulerEnqueueEvent(b []byte) (*SchedulerEnqueueEvent, error) {
var pbmsg pb.SchedulerEnqueueEvent
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
enqueuedAt, err := ptypes.Timestamp(pbmsg.GetEnqueueTime())
if err != nil {
return nil, err
}
return &SchedulerEnqueueEvent{
TaskID: pbmsg.GetTaskId(),
EnqueuedAt: enqueuedAt,
}, nil
}
// Cancelations is a collection that holds cancel functions for all active tasks.
//
// Cancelations are safe for concurrent use by multiple goroutines.
@@ -380,7 +647,7 @@ type Broker interface {
	ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
	Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
	Archive(msg *TaskMessage, errMsg string) error
-	CheckAndEnqueue(qnames ...string) error
+	ForwardIfReady(qnames ...string) error
	ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*TaskMessage, error)
	WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
	ClearServerState(host string, pid int, serverID string) error

@@ -6,7 +6,10 @@ package base
import (
	"context"
+	"crypto/md5"
+	"encoding/hex"
	"encoding/json"
+	"fmt"
	"sync"
	"testing"
	"time"
@@ -15,17 +18,36 @@ import (
	"github.com/google/uuid"
)
func TestTaskKey(t *testing.T) {
id := uuid.NewString()
tests := []struct {
qname string
id string
want string
}{
{"default", id, fmt.Sprintf("asynq:{default}:t:%s", id)},
}
for _, tc := range tests {
got := TaskKey(tc.qname, tc.id)
if got != tc.want {
t.Errorf("TaskKey(%q, %s) = %q, want %q", tc.qname, tc.id, got, tc.want)
}
}
}
func TestQueueKey(t *testing.T) {
	tests := []struct {
		qname string
		want  string
	}{
-		{"default", "asynq:{default}"},
-		{"custom", "asynq:{custom}"},
+		{"default", "asynq:{default}:pending"},
+		{"custom", "asynq:{custom}:pending"},
	}

	for _, tc := range tests {
-		got := QueueKey(tc.qname)
+		got := PendingKey(tc.qname)
		if got != tc.want {
			t.Errorf("QueueKey(%q) = %q, want %q", tc.qname, got, tc.want)
		}
@@ -247,52 +269,69 @@ func TestSchedulerHistoryKey(t *testing.T) {
	}
}
func toBytes(m map[string]interface{}) []byte {
b, err := json.Marshal(m)
if err != nil {
panic(err)
}
return b
}
func TestUniqueKey(t *testing.T) {
payload1 := toBytes(map[string]interface{}{"a": 123, "b": "hello", "c": true})
payload2 := toBytes(map[string]interface{}{"b": "hello", "c": true, "a": 123})
payload3 := toBytes(map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}})
payload4 := toBytes(map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour})
checksum := func(data []byte) string {
sum := md5.Sum(data)
return hex.EncodeToString(sum[:])
}
	tests := []struct {
		desc     string
		qname    string
		tasktype string
-		payload  map[string]interface{}
+		payload  []byte
		want     string
	}{
		{
			"with primitive types",
			"default",
			"email:send",
-			map[string]interface{}{"a": 123, "b": "hello", "c": true},
-			"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
+			payload1,
+			fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload1)),
		},
		{
			"with unsorted keys",
			"default",
			"email:send",
-			map[string]interface{}{"b": "hello", "c": true, "a": 123},
-			"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
+			payload2,
+			fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload2)),
		},
		{
			"with composite types",
			"default",
			"email:send",
-			map[string]interface{}{
-				"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
-				"names":   []string{"bob", "mike", "rob"}},
-			"asynq:{default}:unique:email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]",
+			payload3,
+			fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload3)),
		},
		{
			"with complex types",
			"default",
			"email:send",
-			map[string]interface{}{
-				"time":     time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
-				"duration": time.Hour},
-			"asynq:{default}:unique:email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC",
+			payload4,
+			fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload4)),
		},
		{
			"with nil payload",
			"default",
			"reindex",
			nil,
-			"asynq:{default}:unique:reindex:nil",
+			"asynq:{default}:unique:reindex:",
		},
	}
@@ -313,7 +352,7 @@ func TestMessageEncoding(t *testing.T) {
		{
			in: &TaskMessage{
				Type:    "task1",
-				Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
+				Payload: toBytes(map[string]interface{}{"a": 1, "b": "hello!", "c": true}),
				ID:      id,
				Queue:   "default",
				Retry:   10,
@@ -323,7 +362,7 @@ func TestMessageEncoding(t *testing.T) {
			},
			out: &TaskMessage{
				Type:    "task1",
-				Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
+				Payload: toBytes(map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true}),
				ID:      id,
				Queue:   "default",
				Retry:   10,
@@ -352,10 +391,149 @@ func TestMessageEncoding(t *testing.T) {
	}
}
func TestServerInfoEncoding(t *testing.T) {
tests := []struct {
info ServerInfo
}{
{
info: ServerInfo{
Host: "127.0.0.1",
PID: 9876,
ServerID: "abc123",
Concurrency: 10,
Queues: map[string]int{"default": 1, "critical": 2},
StrictPriority: false,
Status: "active",
Started: time.Now().Add(-3 * time.Hour),
ActiveWorkerCount: 8,
},
},
}
for _, tc := range tests {
encoded, err := EncodeServerInfo(&tc.info)
if err != nil {
t.Errorf("EncodeServerInfo(info) returned error: %v", err)
continue
}
decoded, err := DecodeServerInfo(encoded)
if err != nil {
t.Errorf("DecodeServerInfo(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
t.Errorf("Decoded ServerInfo == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.info, diff)
}
}
}
func TestWorkerInfoEncoding(t *testing.T) {
tests := []struct {
info WorkerInfo
}{
{
info: WorkerInfo{
Host: "127.0.0.1",
PID: 9876,
ServerID: "abc123",
ID: uuid.NewString(),
Type: "taskA",
Payload: toBytes(map[string]interface{}{"foo": "bar"}),
Queue: "default",
Started: time.Now().Add(-3 * time.Hour),
Deadline: time.Now().Add(30 * time.Second),
},
},
}
for _, tc := range tests {
encoded, err := EncodeWorkerInfo(&tc.info)
if err != nil {
t.Errorf("EncodeWorkerInfo(info) returned error: %v", err)
continue
}
decoded, err := DecodeWorkerInfo(encoded)
if err != nil {
t.Errorf("DecodeWorkerInfo(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
t.Errorf("Decoded WorkerInfo == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.info, diff)
}
}
}
func TestSchedulerEntryEncoding(t *testing.T) {
tests := []struct {
entry SchedulerEntry
}{
{
entry: SchedulerEntry{
ID: uuid.NewString(),
Spec: "* * * * *",
Type: "task_A",
Payload: toBytes(map[string]interface{}{"foo": "bar"}),
Opts: []string{"Queue('email')"},
Next: time.Now().Add(30 * time.Second).UTC(),
Prev: time.Now().Add(-2 * time.Minute).UTC(),
},
},
}
for _, tc := range tests {
encoded, err := EncodeSchedulerEntry(&tc.entry)
if err != nil {
t.Errorf("EncodeSchedulerEntry(entry) returned error: %v", err)
continue
}
decoded, err := DecodeSchedulerEntry(encoded)
if err != nil {
t.Errorf("DecodeSchedulerEntry(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.entry, decoded); diff != "" {
t.Errorf("Decoded SchedulerEntry == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.entry, diff)
}
}
}
func TestSchedulerEnqueueEventEncoding(t *testing.T) {
tests := []struct {
event SchedulerEnqueueEvent
}{
{
event: SchedulerEnqueueEvent{
TaskID: uuid.NewString(),
EnqueuedAt: time.Now().Add(-30 * time.Second).UTC(),
},
},
}
for _, tc := range tests {
encoded, err := EncodeSchedulerEnqueueEvent(&tc.event)
if err != nil {
t.Errorf("EncodeSchedulerEnqueueEvent(event) returned error: %v", err)
continue
}
decoded, err := DecodeSchedulerEnqueueEvent(encoded)
if err != nil {
t.Errorf("DecodeSchedulerEnqueueEvent(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.event, decoded); diff != "" {
t.Errorf("Decoded SchedulerEnqueueEvent == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.event, diff)
}
}
}
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
-	status := NewServerStatus(StatusIdle)
+	status := NewServerState()

	var wg sync.WaitGroup
@@ -369,7 +547,7 @@ func TestStatusConcurrentAccess(t *testing.T) {
		wg.Add(1)
		go func() {
			defer wg.Done()
-			status.Set(StatusStopped)
+			status.Set(StateClosed)
			_ = status.String()
		}()

internal/errors/errors.go (new file)

@@ -0,0 +1,285 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package errors defines the error type and functions used by
// asynq and its internal packages.
package errors
// Note: This package is inspired by a blog post about error handling in project Upspin
// https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html.
import (
"errors"
"fmt"
"log"
"runtime"
"strings"
)
// Error is the type that implements the error interface.
// It contains a number of fields, each of different type.
// An Error value may leave some values unset.
type Error struct {
Code Code
Op Op
Err error
}
func (e *Error) DebugString() string {
var b strings.Builder
if e.Op != "" {
b.WriteString(string(e.Op))
}
if e.Code != Unspecified {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Code.String())
}
if e.Err != nil {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Err.Error())
}
return b.String()
}
func (e *Error) Error() string {
var b strings.Builder
if e.Code != Unspecified {
b.WriteString(e.Code.String())
}
if e.Err != nil {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Err.Error())
}
return b.String()
}
func (e *Error) Unwrap() error {
return e.Err
}
// Code defines the canonical error code.
type Code uint8
// List of canonical error codes.
const (
Unspecified Code = iota
NotFound
FailedPrecondition
Internal
AlreadyExists
Unknown
// Note: If you add a new value here, make sure to update String method.
)
func (c Code) String() string {
switch c {
case Unspecified:
return "ERROR_CODE_UNSPECIFIED"
case NotFound:
return "NOT_FOUND"
case FailedPrecondition:
return "FAILED_PRECONDITION"
case Internal:
return "INTERNAL_ERROR"
case AlreadyExists:
return "ALREADY_EXISTS"
case Unknown:
return "UNKNOWN"
}
panic(fmt.Sprintf("unknown error code %d", c))
}
// Op describes an operation, usually as the package and method,
// such as "rdb.Enqueue".
type Op string
// E builds an error value from its arguments.
// There must be at least one argument or E panics.
// The type of each argument determines its meaning.
// If more than one argument of a given type is presented,
// only the last one is recorded.
//
// The types are:
// errors.Op
// The operation being performed, usually the method
// being invoked (Get, Put, etc.).
// errors.Code
// The canonical error code, such as NOT_FOUND.
// string
// Treated as an error message and assigned to the
// Err field after a call to errors.New.
// error
// The underlying error that triggered this one.
//
// If the error is printed, only those items that have been
// set to non-zero values will appear in the result.
func E(args ...interface{}) error {
if len(args) == 0 {
panic("call to errors.E with no arguments")
}
e := &Error{}
for _, arg := range args {
switch arg := arg.(type) {
case Op:
e.Op = arg
case Code:
e.Code = arg
case error:
e.Err = arg
case string:
e.Err = errors.New(arg)
default:
_, file, line, _ := runtime.Caller(1)
log.Printf("errors.E: bad call from %s:%d: %v", file, line, args)
return fmt.Errorf("unknown type %T, value %v in error call", arg, arg)
}
}
return e
}
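The variadic `E` constructor above dispatches on the dynamic type of each argument to fill in the corresponding field. The same Upspin-style pattern can be sketched with a small illustrative error type (the names `op`, `code`, and `myError` are made up for this example and are not the package's API):

```go
package main

import (
	"errors"
	"fmt"
)

type op string
type code int

const (
	unspecified code = iota
	notFound
)

type myError struct {
	Op   op
	Code code
	Err  error
}

func (e *myError) Error() string { return fmt.Sprintf("%d: %v", e.Code, e.Err) }

// build assembles an error from variadic args, dispatching on each
// argument's dynamic type, mirroring the errors.E constructor above.
func build(args ...interface{}) error {
	err := &myError{}
	for _, a := range args {
		switch a := a.(type) {
		case op:
			err.Op = a
		case code:
			err.Code = a
		case error:
			err.Err = a
		case string:
			err.Err = errors.New(a)
		}
	}
	return err
}

func main() {
	err := build(op("rdb.Enqueue"), notFound, "no such task")
	fmt.Println(err.(*myError).Op) // rdb.Enqueue
}
```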
// CanonicalCode returns the canonical code of the given error if one is present.
// Otherwise it returns Unspecified.
func CanonicalCode(err error) Code {
if err == nil {
return Unspecified
}
e, ok := err.(*Error)
if !ok {
return Unspecified
}
if e.Code == Unspecified {
return CanonicalCode(e.Err)
}
return e.Code
}
/******************************************
    Domain Specific Error Types & Values
*******************************************/
var (
// ErrNoProcessableTask indicates that there are no tasks ready to be processed.
ErrNoProcessableTask = errors.New("no tasks are ready for processing")
// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
ErrDuplicateTask = errors.New("task already exists")
)
// TaskNotFoundError indicates that a task with the given ID does not exist
// in the given queue.
type TaskNotFoundError struct {
Queue string // queue name
ID string // task id
}
func (e *TaskNotFoundError) Error() string {
return fmt.Sprintf("cannot find task with id=%s in queue %q", e.ID, e.Queue)
}
// IsTaskNotFound reports whether any error in err's chain is of type TaskNotFoundError.
func IsTaskNotFound(err error) bool {
var target *TaskNotFoundError
return As(err, &target)
}
// QueueNotFoundError indicates that a queue with the given name does not exist.
type QueueNotFoundError struct {
Queue string // queue name
}
func (e *QueueNotFoundError) Error() string {
return fmt.Sprintf("queue %q does not exist", e.Queue)
}
// IsQueueNotFound reports whether any error in err's chain is of type QueueNotFoundError.
func IsQueueNotFound(err error) bool {
var target *QueueNotFoundError
return As(err, &target)
}
// QueueNotEmptyError indicates that the given queue is not empty.
type QueueNotEmptyError struct {
Queue string // queue name
}
func (e *QueueNotEmptyError) Error() string {
return fmt.Sprintf("queue %q is not empty", e.Queue)
}
// IsQueueNotEmpty reports whether any error in err's chain is of type QueueNotEmptyError.
func IsQueueNotEmpty(err error) bool {
var target *QueueNotEmptyError
return As(err, &target)
}
// TaskAlreadyArchivedError indicates that the task in question is already archived.
type TaskAlreadyArchivedError struct {
Queue string // queue name
ID string // task id
}
func (e *TaskAlreadyArchivedError) Error() string {
return fmt.Sprintf("task is already archived: id=%s, queue=%s", e.ID, e.Queue)
}
// IsTaskAlreadyArchived reports whether any error in err's chain is of type TaskAlreadyArchivedError.
func IsTaskAlreadyArchived(err error) bool {
var target *TaskAlreadyArchivedError
return As(err, &target)
}
// RedisCommandError indicates that the given redis command returned an error.
type RedisCommandError struct {
Command string // redis command (e.g. LRANGE, ZADD, etc)
Err error // underlying error
}
func (e *RedisCommandError) Error() string {
return fmt.Sprintf("redis command error: %s failed: %v", strings.ToUpper(e.Command), e.Err)
}
func (e *RedisCommandError) Unwrap() error { return e.Err }
// IsRedisCommandError reports whether any error in err's chain is of type RedisCommandError.
func IsRedisCommandError(err error) bool {
var target *RedisCommandError
return As(err, &target)
}
/*************************************************
Standard Library errors package functions
*************************************************/
// New returns an error that formats as the given text.
// Each call to New returns a distinct error value even if the text is identical.
//
// This function is the errors.New function from the standard library (https://golang.org/pkg/errors/#New).
// It is exported from this package for import convenience.
func New(text string) error { return errors.New(text) }
// Is reports whether any error in err's chain matches target.
//
// This function is the errors.Is function from the standard library (https://golang.org/pkg/errors/#Is).
// It is exported from this package for import convenience.
func Is(err, target error) bool { return errors.Is(err, target) }
// As finds the first error in err's chain that matches target, and if so, sets target to that error value and returns true.
// Otherwise, it returns false.
//
// This function is the errors.As function from the standard library (https://golang.org/pkg/errors/#As).
// It is exported from this package for import convenience.
func As(err error, target interface{}) bool { return errors.As(err, target) }
// Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error.
// Otherwise, Unwrap returns nil.
//
// This function is the errors.Unwrap function from the standard library (https://golang.org/pkg/errors/#Unwrap).
// It is exported from this package for import convenience.
func Unwrap(err error) error { return errors.Unwrap(err) }

// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package errors
import "testing"
func TestErrorDebugString(t *testing.T) {
// DebugString should include Op since it's meant to be used by
// maintainers/contributors of the asynq package.
tests := []struct {
desc string
err error
want string
}{
{
desc: "With Op, Code, and string",
err: E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
want: "rdb.DeleteTask: NOT_FOUND: cannot find task with id=123",
},
{
desc: "With Op, Code and error",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: `rdb.DeleteTask: NOT_FOUND: cannot find task with id=123 in queue "default"`,
},
}
for _, tc := range tests {
if got := tc.err.(*Error).DebugString(); got != tc.want {
t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
}
}
}
func TestErrorString(t *testing.T) {
// String method should omit Op since op is an internal detail
// and we don't want to provide it to users of the package.
tests := []struct {
desc string
err error
want string
}{
{
desc: "With Op, Code, and string",
err: E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
want: "NOT_FOUND: cannot find task with id=123",
},
{
desc: "With Op, Code and error",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: `NOT_FOUND: cannot find task with id=123 in queue "default"`,
},
}
for _, tc := range tests {
if got := tc.err.Error(); got != tc.want {
t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
}
}
}
func TestErrorIs(t *testing.T) {
var ErrCustom = New("custom sentinel error")
tests := []struct {
desc string
err error
target error
want bool
}{
{
desc: "should unwrap one level",
err: E(Op("rdb.DeleteTask"), ErrCustom),
target: ErrCustom,
want: true,
},
}
for _, tc := range tests {
if got := Is(tc.err, tc.target); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestErrorAs(t *testing.T) {
tests := []struct {
desc string
err error
target interface{}
want bool
}{
{
desc: "should unwrap one level",
err: E(Op("rdb.DeleteTask"), NotFound, &QueueNotFoundError{Queue: "email"}),
target: &QueueNotFoundError{},
want: true,
},
}
for _, tc := range tests {
if got := As(tc.err, &tc.target); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestErrorPredicates(t *testing.T) {
tests := []struct {
desc string
fn func(err error) bool
err error
want bool
}{
{
desc: "IsTaskNotFound should detect presence of TaskNotFoundError in err's chain",
fn: IsTaskNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "9876"}),
want: true,
},
{
desc: "IsTaskNotFound should detect absence of TaskNotFoundError in err's chain",
fn: IsTaskNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
want: false,
},
{
desc: "IsQueueNotFound should detect presence of QueueNotFoundError in err's chain",
fn: IsQueueNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
want: true,
},
}
for _, tc := range tests {
if got := tc.fn(tc.err); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestCanonicalCode(t *testing.T) {
tests := []struct {
desc string
err error
want Code
}{
{
desc: "without nesting",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: NotFound,
},
{
desc: "with nesting",
err: E(FailedPrecondition, E(NotFound)),
want: FailedPrecondition,
},
{
desc: "returns Unspecified if err is not *Error",
err: New("some other error"),
want: Unspecified,
},
{
desc: "returns Unspecified if err is nil",
err: nil,
want: Unspecified,
},
}
for _, tc := range tests {
if got := CanonicalCode(tc.err); got != tc.want {
t.Errorf("%s: got=%s, want=%s", tc.desc, got, tc.want)
}
}
}

internal/proto/asynq.pb.go
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.25.0
// protoc v3.14.0
// source: asynq.proto
package proto
import (
proto "github.com/golang/protobuf/proto"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// This is a compile-time assertion that a sufficiently up-to-date version
// of the legacy proto package is being used.
const _ = proto.ProtoPackageIsVersion4
// TaskMessage is the internal representation of a task with additional
// metadata fields.
type TaskMessage struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Type indicates the kind of the task to be performed.
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
// Payload holds data needed to process the task.
Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"`
// Unique identifier for the task.
Id string `protobuf:"bytes,3,opt,name=id,proto3" json:"id,omitempty"`
// Name of the queue to which this task belongs.
Queue string `protobuf:"bytes,4,opt,name=queue,proto3" json:"queue,omitempty"`
// Max number of retries for this task.
Retry int32 `protobuf:"varint,5,opt,name=retry,proto3" json:"retry,omitempty"`
// Number of times this task has been retried so far.
Retried int32 `protobuf:"varint,6,opt,name=retried,proto3" json:"retried,omitempty"`
// Error message from the last failure.
ErrorMsg string `protobuf:"bytes,7,opt,name=error_msg,json=errorMsg,proto3" json:"error_msg,omitempty"`
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no last failure.
LastFailedAt int64 `protobuf:"varint,11,opt,name=last_failed_at,json=lastFailedAt,proto3" json:"last_failed_at,omitempty"`
// Timeout specifies timeout in seconds.
// Use zero to indicate no timeout.
Timeout int64 `protobuf:"varint,8,opt,name=timeout,proto3" json:"timeout,omitempty"`
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no deadline.
Deadline int64 `protobuf:"varint,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
// UniqueKey holds the redis key used for uniqueness lock for this task.
// Empty string indicates that no uniqueness lock was used.
UniqueKey string `protobuf:"bytes,10,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
}
func (x *TaskMessage) Reset() {
*x = TaskMessage{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TaskMessage) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TaskMessage) ProtoMessage() {}
func (x *TaskMessage) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TaskMessage.ProtoReflect.Descriptor instead.
func (*TaskMessage) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{0}
}
func (x *TaskMessage) GetType() string {
if x != nil {
return x.Type
}
return ""
}
func (x *TaskMessage) GetPayload() []byte {
if x != nil {
return x.Payload
}
return nil
}
func (x *TaskMessage) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *TaskMessage) GetQueue() string {
if x != nil {
return x.Queue
}
return ""
}
func (x *TaskMessage) GetRetry() int32 {
if x != nil {
return x.Retry
}
return 0
}
func (x *TaskMessage) GetRetried() int32 {
if x != nil {
return x.Retried
}
return 0
}
func (x *TaskMessage) GetErrorMsg() string {
if x != nil {
return x.ErrorMsg
}
return ""
}
func (x *TaskMessage) GetLastFailedAt() int64 {
if x != nil {
return x.LastFailedAt
}
return 0
}
func (x *TaskMessage) GetTimeout() int64 {
if x != nil {
return x.Timeout
}
return 0
}
func (x *TaskMessage) GetDeadline() int64 {
if x != nil {
return x.Deadline
}
return 0
}
func (x *TaskMessage) GetUniqueKey() string {
if x != nil {
return x.UniqueKey
}
return ""
}
// ServerInfo holds information about a running server.
type ServerInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Host machine the server is running on.
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
// PID of the server process.
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
// Unique identifier for this server.
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
// Maximum number of concurrent workers this server will use.
Concurrency int32 `protobuf:"varint,4,opt,name=concurrency,proto3" json:"concurrency,omitempty"`
// List of queue names with their priorities.
// The server will consume tasks from the queues and prioritize
// queues with higher priority numbers.
Queues map[string]int32 `protobuf:"bytes,5,rep,name=queues,proto3" json:"queues,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
// If set, the server will always consume tasks from a queue with higher
// priority.
StrictPriority bool `protobuf:"varint,6,opt,name=strict_priority,json=strictPriority,proto3" json:"strict_priority,omitempty"`
// Status indicates the status of the server.
Status string `protobuf:"bytes,7,opt,name=status,proto3" json:"status,omitempty"`
// Time this server was started.
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
// Number of workers currently processing tasks.
ActiveWorkerCount int32 `protobuf:"varint,9,opt,name=active_worker_count,json=activeWorkerCount,proto3" json:"active_worker_count,omitempty"`
}
func (x *ServerInfo) Reset() {
*x = ServerInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ServerInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ServerInfo) ProtoMessage() {}
func (x *ServerInfo) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ServerInfo.ProtoReflect.Descriptor instead.
func (*ServerInfo) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{1}
}
func (x *ServerInfo) GetHost() string {
if x != nil {
return x.Host
}
return ""
}
func (x *ServerInfo) GetPid() int32 {
if x != nil {
return x.Pid
}
return 0
}
func (x *ServerInfo) GetServerId() string {
if x != nil {
return x.ServerId
}
return ""
}
func (x *ServerInfo) GetConcurrency() int32 {
if x != nil {
return x.Concurrency
}
return 0
}
func (x *ServerInfo) GetQueues() map[string]int32 {
if x != nil {
return x.Queues
}
return nil
}
func (x *ServerInfo) GetStrictPriority() bool {
if x != nil {
return x.StrictPriority
}
return false
}
func (x *ServerInfo) GetStatus() string {
if x != nil {
return x.Status
}
return ""
}
func (x *ServerInfo) GetStartTime() *timestamppb.Timestamp {
if x != nil {
return x.StartTime
}
return nil
}
func (x *ServerInfo) GetActiveWorkerCount() int32 {
if x != nil {
return x.ActiveWorkerCount
}
return 0
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Host machine this worker is running on.
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
// PID of the process in which this worker is running.
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
// ID of the server in which this worker is running.
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
// ID of the task this worker is processing.
TaskId string `protobuf:"bytes,4,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
// Type of the task this worker is processing.
TaskType string `protobuf:"bytes,5,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
// Payload of the task this worker is processing.
TaskPayload []byte `protobuf:"bytes,6,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
// Name of the queue to which the task the worker is processing belongs.
Queue string `protobuf:"bytes,7,opt,name=queue,proto3" json:"queue,omitempty"`
// Time this worker started processing the task.
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
// Deadline by which the worker needs to complete processing
// the task. If the worker exceeds the deadline, the task will fail.
Deadline *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
}
func (x *WorkerInfo) Reset() {
*x = WorkerInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *WorkerInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*WorkerInfo) ProtoMessage() {}
func (x *WorkerInfo) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use WorkerInfo.ProtoReflect.Descriptor instead.
func (*WorkerInfo) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{2}
}
func (x *WorkerInfo) GetHost() string {
if x != nil {
return x.Host
}
return ""
}
func (x *WorkerInfo) GetPid() int32 {
if x != nil {
return x.Pid
}
return 0
}
func (x *WorkerInfo) GetServerId() string {
if x != nil {
return x.ServerId
}
return ""
}
func (x *WorkerInfo) GetTaskId() string {
if x != nil {
return x.TaskId
}
return ""
}
func (x *WorkerInfo) GetTaskType() string {
if x != nil {
return x.TaskType
}
return ""
}
func (x *WorkerInfo) GetTaskPayload() []byte {
if x != nil {
return x.TaskPayload
}
return nil
}
func (x *WorkerInfo) GetQueue() string {
if x != nil {
return x.Queue
}
return ""
}
func (x *WorkerInfo) GetStartTime() *timestamppb.Timestamp {
if x != nil {
return x.StartTime
}
return nil
}
func (x *WorkerInfo) GetDeadline() *timestamppb.Timestamp {
if x != nil {
return x.Deadline
}
return nil
}
// SchedulerEntry holds information about a periodic task registered
// with a scheduler.
type SchedulerEntry struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Identifier of the scheduler entry.
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// Periodic schedule spec of the entry.
Spec string `protobuf:"bytes,2,opt,name=spec,proto3" json:"spec,omitempty"`
// Task type of the periodic task.
TaskType string `protobuf:"bytes,3,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
// Task payload of the periodic task.
TaskPayload []byte `protobuf:"bytes,4,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
// Options used to enqueue the periodic task.
EnqueueOptions []string `protobuf:"bytes,5,rep,name=enqueue_options,json=enqueueOptions,proto3" json:"enqueue_options,omitempty"`
// Next time the task will be enqueued.
NextEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=next_enqueue_time,json=nextEnqueueTime,proto3" json:"next_enqueue_time,omitempty"`
// Last time the task was enqueued.
// Zero time if task was never enqueued.
PrevEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=prev_enqueue_time,json=prevEnqueueTime,proto3" json:"prev_enqueue_time,omitempty"`
}
func (x *SchedulerEntry) Reset() {
*x = SchedulerEntry{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SchedulerEntry) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SchedulerEntry) ProtoMessage() {}
func (x *SchedulerEntry) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SchedulerEntry.ProtoReflect.Descriptor instead.
func (*SchedulerEntry) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{3}
}
func (x *SchedulerEntry) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *SchedulerEntry) GetSpec() string {
if x != nil {
return x.Spec
}
return ""
}
func (x *SchedulerEntry) GetTaskType() string {
if x != nil {
return x.TaskType
}
return ""
}
func (x *SchedulerEntry) GetTaskPayload() []byte {
if x != nil {
return x.TaskPayload
}
return nil
}
func (x *SchedulerEntry) GetEnqueueOptions() []string {
if x != nil {
return x.EnqueueOptions
}
return nil
}
func (x *SchedulerEntry) GetNextEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.NextEnqueueTime
}
return nil
}
func (x *SchedulerEntry) GetPrevEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.PrevEnqueueTime
}
return nil
}
// SchedulerEnqueueEvent holds information about an enqueue event
// by a scheduler.
type SchedulerEnqueueEvent struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// ID of the task that was enqueued.
TaskId string `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
// Time the task was enqueued.
EnqueueTime *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=enqueue_time,json=enqueueTime,proto3" json:"enqueue_time,omitempty"`
}
func (x *SchedulerEnqueueEvent) Reset() {
*x = SchedulerEnqueueEvent{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SchedulerEnqueueEvent) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SchedulerEnqueueEvent) ProtoMessage() {}
func (x *SchedulerEnqueueEvent) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SchedulerEnqueueEvent.ProtoReflect.Descriptor instead.
func (*SchedulerEnqueueEvent) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{4}
}
func (x *SchedulerEnqueueEvent) GetTaskId() string {
if x != nil {
return x.TaskId
}
return ""
}
func (x *SchedulerEnqueueEvent) GetEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.EnqueueTime
}
return nil
}
var File_asynq_proto protoreflect.FileDescriptor
var file_asynq_proto_rawDesc = []byte{
0x0a, 0x0b, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61,
0x73, 0x79, 0x6e, 0x71, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xa9, 0x02, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x4d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
0x02, 0x69, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01,
0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x74,
0x72, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x72, 0x65, 0x74, 0x72, 0x79, 0x12,
0x18, 0x0a, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x05,
0x52, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x65, 0x72, 0x72,
0x6f, 0x72, 0x5f, 0x6d, 0x73, 0x67, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x65, 0x72,
0x72, 0x6f, 0x72, 0x4d, 0x73, 0x67, 0x12, 0x24, 0x0a, 0x0e, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x66,
0x61, 0x69, 0x6c, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c,
0x6c, 0x61, 0x73, 0x74, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64, 0x41, 0x74, 0x12, 0x18, 0x0a, 0x07,
0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x08, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x74,
0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
0x6e, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x5f, 0x6b, 0x65, 0x79,
0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x4b, 0x65,
0x79, 0x22, 0x8f, 0x03, 0x0a, 0x0a, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f,
0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28,
0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72,
0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65,
0x72, 0x49, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e,
0x63, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72,
0x72, 0x65, 0x6e, 0x63, 0x79, 0x12, 0x35, 0x0a, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x18,
0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x53, 0x65,
0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73, 0x45,
0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x0f,
0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x5f, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18,
0x06, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0e, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x50, 0x72, 0x69,
0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x39, 0x0a,
0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73,
0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x13, 0x61, 0x63, 0x74, 0x69,
0x76, 0x65, 0x5f, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18,
0x09, 0x20, 0x01, 0x28, 0x05, 0x52, 0x11, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x57, 0x6f, 0x72,
0x6b, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x1a, 0x39, 0x0a, 0x0b, 0x51, 0x75, 0x65, 0x75,
0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c,
0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a,
0x02, 0x38, 0x01, 0x22, 0xb1, 0x02, 0x0a, 0x0a, 0x57, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x49, 0x6e,
0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20,
0x01, 0x28, 0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76,
0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72,
0x76, 0x65, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64,
0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x1b,
0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28,
0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74,
0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14,
0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x71,
0x75, 0x65, 0x75, 0x65, 0x12, 0x39, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69,
0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12,
0x36, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x64,
0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x22, 0xad, 0x02, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x70,
0x65, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x1b,
0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28,
0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74,
0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x27,
0x0a, 0x0f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0e, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65,
0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x46, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x5f,
0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f,
0x6e, 0x65, 0x78, 0x74, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x12,
0x46, 0x0a, 0x11, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f,
0x74, 0x69, 0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x70, 0x72, 0x65, 0x76, 0x45, 0x6e, 0x71, 0x75,
0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x6f, 0x0a, 0x15, 0x53, 0x63, 0x68, 0x65, 0x64,
0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x45, 0x76, 0x65, 0x6e, 0x74,
0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x3d, 0x0a, 0x0c, 0x65, 0x6e, 0x71,
0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0b, 0x65, 0x6e, 0x71,
0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x69, 0x62, 0x69, 0x6b, 0x65, 0x6e, 0x2f, 0x61,
0x73, 0x79, 0x6e, 0x71, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_asynq_proto_rawDescOnce sync.Once
file_asynq_proto_rawDescData = file_asynq_proto_rawDesc
)
func file_asynq_proto_rawDescGZIP() []byte {
file_asynq_proto_rawDescOnce.Do(func() {
file_asynq_proto_rawDescData = protoimpl.X.CompressGZIP(file_asynq_proto_rawDescData)
})
return file_asynq_proto_rawDescData
}
var file_asynq_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_asynq_proto_goTypes = []interface{}{
(*TaskMessage)(nil), // 0: asynq.TaskMessage
(*ServerInfo)(nil), // 1: asynq.ServerInfo
(*WorkerInfo)(nil), // 2: asynq.WorkerInfo
(*SchedulerEntry)(nil), // 3: asynq.SchedulerEntry
(*SchedulerEnqueueEvent)(nil), // 4: asynq.SchedulerEnqueueEvent
nil, // 5: asynq.ServerInfo.QueuesEntry
(*timestamppb.Timestamp)(nil), // 6: google.protobuf.Timestamp
}
var file_asynq_proto_depIdxs = []int32{
5, // 0: asynq.ServerInfo.queues:type_name -> asynq.ServerInfo.QueuesEntry
6, // 1: asynq.ServerInfo.start_time:type_name -> google.protobuf.Timestamp
6, // 2: asynq.WorkerInfo.start_time:type_name -> google.protobuf.Timestamp
6, // 3: asynq.WorkerInfo.deadline:type_name -> google.protobuf.Timestamp
6, // 4: asynq.SchedulerEntry.next_enqueue_time:type_name -> google.protobuf.Timestamp
6, // 5: asynq.SchedulerEntry.prev_enqueue_time:type_name -> google.protobuf.Timestamp
6, // 6: asynq.SchedulerEnqueueEvent.enqueue_time:type_name -> google.protobuf.Timestamp
7, // [7:7] is the sub-list for method output_type
7, // [7:7] is the sub-list for method input_type
7, // [7:7] is the sub-list for extension type_name
7, // [7:7] is the sub-list for extension extendee
0, // [0:7] is the sub-list for field type_name
}
func init() { file_asynq_proto_init() }
func file_asynq_proto_init() {
if File_asynq_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_asynq_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*TaskMessage); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ServerInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*WorkerInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SchedulerEntry); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SchedulerEnqueueEvent); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_asynq_proto_rawDesc,
NumEnums: 0,
NumMessages: 6,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_asynq_proto_goTypes,
DependencyIndexes: file_asynq_proto_depIdxs,
MessageInfos: file_asynq_proto_msgTypes,
}.Build()
File_asynq_proto = out.File
file_asynq_proto_rawDesc = nil
file_asynq_proto_goTypes = nil
file_asynq_proto_depIdxs = nil
}

internal/proto/asynq.proto (new file, +154 lines)

@@ -0,0 +1,154 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
syntax = "proto3";
package asynq;
import "google/protobuf/timestamp.proto";
option go_package = "github.com/hibiken/asynq/internal/proto";
// TaskMessage is the internal representation of a task with additional
// metadata fields.
message TaskMessage {
// Type indicates the kind of the task to be performed.
string type = 1;
// Payload holds data needed to process the task.
bytes payload = 2;
// Unique identifier for the task.
string id = 3;
// Name of the queue to which this task belongs.
string queue = 4;
// Max number of retries for this task.
int32 retry = 5;
// Number of times this task has been retried so far.
int32 retried = 6;
// Error message from the last failure.
string error_msg = 7;
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no last failure.
int64 last_failed_at = 11;
// Timeout specifies timeout in seconds.
// Use zero to indicate no timeout.
int64 timeout = 8;
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no deadline.
int64 deadline = 9;
// UniqueKey holds the redis key used for uniqueness lock for this task.
// Empty string indicates that no uniqueness lock was used.
string unique_key = 10;
};
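One change in this compare is deriving the unique-key checksum with md5 (commit a0df047f71). Below is a minimal sketch of how a bounded-length uniqueness key can be built from task attributes; the helper name and key layout are illustrative assumptions, not asynq's exact scheme:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// uniqueKey derives a fixed-length uniqueness-lock key from the task's
// queue, type, and payload. Hashing the payload keeps the Redis key short
// even for large payloads. (Layout is hypothetical, for illustration only.)
func uniqueKey(qname, tasktype string, payload []byte) string {
	checksum := md5.Sum(payload)
	return fmt.Sprintf("asynq:{%s}:unique:%s:%x", qname, tasktype, checksum)
}

func main() {
	fmt.Println(uniqueKey("default", "email:send", []byte(`{"user_id":42}`)))
}
```

Because `%x` renders the 16-byte md5 digest as 32 hex characters, every key has the same length regardless of payload size.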
// ServerInfo holds information about a running server.
message ServerInfo {
// Host machine the server is running on.
string host = 1;
// PID of the server process.
int32 pid = 2;
// Unique identifier for this server.
string server_id = 3;
// Maximum number of tasks this server will process concurrently.
int32 concurrency = 4;
// List of queue names with their priorities.
// The server will consume tasks from the queues and prioritize
// queues with higher priority numbers.
map<string, int32> queues = 5;
// If set, the server will always consume tasks from a queue with higher
// priority.
bool strict_priority = 6;
// Status indicates the status of the server.
string status = 7;
// Time this server was started.
google.protobuf.Timestamp start_time = 8;
// Number of workers currently processing tasks.
int32 active_worker_count = 9;
};
// WorkerInfo holds information about a running worker.
message WorkerInfo {
// Host machine this worker is running on.
string host = 1;
// PID of the process in which this worker is running.
int32 pid = 2;
// ID of the server in which this worker is running.
string server_id = 3;
// ID of the task this worker is processing.
string task_id = 4;
// Type of the task this worker is processing.
string task_type = 5;
// Payload of the task this worker is processing.
bytes task_payload = 6;
// Name of the queue to which the task this worker is processing belongs.
string queue = 7;
// Time this worker started processing the task.
google.protobuf.Timestamp start_time = 8;
// Deadline by which the worker needs to complete processing
// the task. If the worker exceeds the deadline, the task will fail.
google.protobuf.Timestamp deadline = 9;
};
// SchedulerEntry holds information about a periodic task registered
// with a scheduler.
message SchedulerEntry {
// Identifier of the scheduler entry.
string id = 1;
// Periodic schedule spec of the entry.
string spec = 2;
// Task type of the periodic task.
string task_type = 3;
// Task payload of the periodic task.
bytes task_payload = 4;
// Options used to enqueue the periodic task.
repeated string enqueue_options = 5;
// Next time the task will be enqueued.
google.protobuf.Timestamp next_enqueue_time = 6;
// Last time the task was enqueued.
// Zero time if the task was never enqueued.
google.protobuf.Timestamp prev_enqueue_time = 7;
};
// SchedulerEnqueueEvent holds information about an enqueue event
// by a scheduler.
message SchedulerEnqueueEvent {
// ID of the task that was enqueued.
string task_id = 1;
// Time the task was enqueued.
google.protobuf.Timestamp enqueue_time = 2;
};


@@ -259,8 +259,8 @@ func BenchmarkCheckAndEnqueue(b *testing.B) {
 		asynqtest.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName)
 		b.StartTimer()
-		if err := r.CheckAndEnqueue(base.DefaultQueueName); err != nil {
-			b.Fatalf("CheckAndEnqueue failed: %v", err)
+		if err := r.ForwardIfReady(base.DefaultQueueName); err != nil {
+			b.Fatalf("ForwardIfReady failed: %v", err)
 		}
 	}
 }

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -126,13 +126,13 @@ func (tb *TestBroker) Archive(msg *base.TaskMessage, errMsg string) error {
 	return tb.real.Archive(msg, errMsg)
 }

-func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
+func (tb *TestBroker) ForwardIfReady(qnames ...string) error {
 	tb.mu.Lock()
 	defer tb.mu.Unlock()
 	if tb.sleeping {
 		return errRedisDown
 	}
-	return tb.real.CheckAndEnqueue(qnames...)
+	return tb.real.ForwardIfReady(qnames...)
 }

 func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {


@@ -1,230 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"encoding/json"
"fmt"
"time"
"github.com/spf13/cast"
)
// Payload holds arbitrary data needed for task execution.
type Payload struct {
data map[string]interface{}
}
type errKeyNotFound struct {
key string
}
func (e *errKeyNotFound) Error() string {
return fmt.Sprintf("key %q does not exist", e.key)
}
// Has reports whether key exists.
func (p Payload) Has(key string) bool {
_, ok := p.data[key]
return ok
}
func toInt(v interface{}) (int, error) {
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return int(val), nil
default:
return cast.ToIntE(v)
}
}
// String returns a string representation of payload data.
func (p Payload) String() string {
return fmt.Sprint(p.data)
}
// MarshalJSON returns the JSON encoding of payload data.
func (p Payload) MarshalJSON() ([]byte, error) {
return json.Marshal(p.data)
}
// GetString returns a string value if a string type is associated with
// the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) {
v, ok := p.data[key]
if !ok {
return "", &errKeyNotFound{key}
}
return cast.ToStringE(v)
}
// GetInt returns an int value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetInt(key string) (int, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
return toInt(v)
}
// GetFloat64 returns a float64 value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetFloat64(key string) (float64, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
return v.Float64()
default:
return cast.ToFloat64E(v)
}
}
// GetBool returns a boolean value if a boolean type is associated with
// the key, otherwise reports an error.
func (p Payload) GetBool(key string) (bool, error) {
v, ok := p.data[key]
if !ok {
return false, &errKeyNotFound{key}
}
return cast.ToBoolE(v)
}
// GetStringSlice returns a slice of strings if a string slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetStringSlice(key string) ([]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringSliceE(v)
}
// GetIntSlice returns a slice of ints if an int slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetIntSlice(key string) ([]int, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case []interface{}:
var res []int
for _, elem := range v {
val, err := toInt(elem)
if err != nil {
return nil, err
}
res = append(res, int(val))
}
return res, nil
default:
return cast.ToIntSliceE(v)
}
}
// GetStringMap returns a map of string to empty interface
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMap(key string) (map[string]interface{}, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapE(v)
}
// GetStringMapString returns a map of string to string
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapString(key string) (map[string]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapStringE(v)
}
// GetStringMapStringSlice returns a map of string to string slice
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapStringSlice(key string) (map[string][]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapStringSliceE(v)
}
// GetStringMapInt returns a map of string to int
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case map[string]interface{}:
res := make(map[string]int)
for key, val := range v {
ival, err := toInt(val)
if err != nil {
return nil, err
}
res[key] = ival
}
return res, nil
default:
return cast.ToStringMapIntE(v)
}
}
// GetStringMapBool returns a map of string to boolean
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapBool(key string) (map[string]bool, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapBoolE(v)
}
// GetTime returns a time value if a correct type is associated with the key,
// otherwise reports an error.
func (p Payload) GetTime(key string) (time.Time, error) {
v, ok := p.data[key]
if !ok {
return time.Time{}, &errKeyNotFound{key}
}
return cast.ToTimeE(v)
}
// GetDuration returns a duration value if a correct type is associated with the key,
// otherwise reports an error.
func (p Payload) GetDuration(key string) (time.Duration, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return time.Duration(val), nil
default:
return cast.ToDurationE(v)
}
}


@@ -1,675 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"encoding/json"
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
type payloadTest struct {
data map[string]interface{}
key string
nonkey string
}
func TestPayloadString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"name": "gopher"},
key: "name",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetString(tc.nonkey)
if err == nil || got != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
tc.key, got, err)
}
}
}
func TestPayloadInt(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user_id": 42},
key: "user_id",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetInt(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadFloat64(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"pi": 3.14},
key: "pi",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetFloat64(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadBool(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"enabled": true},
key: "enabled",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetBool(tc.nonkey)
if err == nil || got != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
tc.key, got, err)
}
}
}
func TestPayloadStringSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
key: "names",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadIntSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"nums": []int{9, 8, 7}},
key: "nums",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetIntSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetIntSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetIntSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMap(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
key: "user",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMap(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key)
ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
switch val.(type) {
case json.Number:
return true
default:
return false
}
})
diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
tc.key, got, err, tc.data[tc.key], diff)
}
// access non-existent key.
got, err = payload.GetStringMap(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
key: "address",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapString(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapString(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapString(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapStringSlice(t *testing.T) {
favs := map[string][]string{
"movies": {"forrest gump", "star wars"},
"tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
}
tests := []payloadTest{
{
data: map[string]interface{}{"favorites": favs},
key: "favorites",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapInt(t *testing.T) {
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
tests := []payloadTest{
{
data: map[string]interface{}{"counts": counter},
key: "counts",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapInt(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapInt(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapInt(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapBool(t *testing.T) {
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
tests := []payloadTest{
{
data: map[string]interface{}{"features": features},
key: "features",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapBool(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapBool(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapBool(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadTime(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"current": time.Now()},
key: "current",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetTime(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetTime(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetTime(tc.nonkey)
if err == nil || !got.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Time{})
}
}
}
func TestPayloadDuration(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"duration": 15 * time.Minute},
key: "duration",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetDuration(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetDuration(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetDuration(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Duration(0))
}
}
}
func TestPayloadHas(t *testing.T) {
payload := Payload{map[string]interface{}{
"user_id": 123,
}}
if !payload.Has("user_id") {
t.Errorf("Payload.Has(%q) = false, want true", "user_id")
}
if payload.Has("name") {
t.Errorf("Payload.Has(%q) = true, want false", "name")
}
}
func TestPayloadDebuggingStrings(t *testing.T) {
data := map[string]interface{}{
"foo": 123,
"bar": "hello",
"baz": false,
}
payload := Payload{data: data}
if payload.String() != fmt.Sprint(data) {
t.Errorf("Payload.String() = %q, want %q",
payload.String(), fmt.Sprint(data))
}
got, err := payload.MarshalJSON()
if err != nil {
t.Fatal(err)
}
want, err := json.Marshal(data)
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, want); diff != "" {
t.Errorf("Payload.MarshalJSON() = %s, want %s; (-want,+got)\n%s",
got, want, diff)
}
}


@@ -6,7 +6,6 @@ package asynq
 import (
 	"context"
-	"errors"
 	"fmt"
 	"math/rand"
 	"runtime"
@@ -17,8 +16,8 @@ import (
 	"time"

 	"github.com/hibiken/asynq/internal/base"
+	"github.com/hibiken/asynq/internal/errors"
 	"github.com/hibiken/asynq/internal/log"
-	"github.com/hibiken/asynq/internal/rdb"
 	"golang.org/x/time/rate"
 )
@@ -123,8 +122,8 @@ func (p *processor) stop() {
 	})
 }

-// NOTE: once terminated, processor cannot be re-started.
-func (p *processor) terminate() {
+// NOTE: once shutdown, processor cannot be re-started.
+func (p *processor) shutdown() {
 	p.stop()

 	time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
@@ -163,7 +162,7 @@ func (p *processor) exec() {
 		qnames := p.queues()
 		msg, deadline, err := p.broker.Dequeue(qnames...)
 		switch {
-		case err == rdb.ErrNoProcessableTask:
+		case errors.Is(err, errors.ErrNoProcessableTask):
 			p.logger.Debug("All queues are empty")
 			// Queues are empty, this is a normal behavior.
 			// Sleep to avoid slamming redis and let scheduler move tasks into queues.


@@ -6,6 +6,7 @@ package asynq
 import (
 	"context"
+	"encoding/json"
 	"fmt"
 	"sort"
 	"sync"
@@ -13,7 +14,6 @@ import (
 	"time"

 	"github.com/google/go-cmp/cmp"
-	"github.com/google/go-cmp/cmp/cmpopts"
 	h "github.com/hibiken/asynq/internal/asynqtest"
 	"github.com/hibiken/asynq/internal/base"
 	"github.com/hibiken/asynq/internal/rdb"
@@ -44,6 +44,7 @@ func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {
 func TestProcessorSuccessWithSingleQueue(t *testing.T) {
 	r := setup(t)
+	defer r.Close()
 	rdbClient := rdb.NewRDB(r)

 	m1 := h.NewTaskMessage("task1", nil)
@@ -113,7 +114,7 @@ func TestProcessorSuccessWithSingleQueue(t *testing.T) {
 		for _, msg := range tc.incoming {
 			err := rdbClient.Enqueue(msg)
 			if err != nil {
-				p.terminate()
+				p.shutdown()
 				t.Fatal(err)
 			}
 		}
@@ -121,10 +122,10 @@ func TestProcessorSuccessWithSingleQueue(t *testing.T) {
 		if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
 			t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
 		}
-		p.terminate()
+		p.shutdown()

 		mu.Lock()
-		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
+		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
 			t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
 		}
 		mu.Unlock()
@@ -146,6 +147,7 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
 		t3 = NewTask(m3.Type, m3.Payload)
 		t4 = NewTask(m4.Type, m4.Payload)
 	)
+	defer r.Close()

 	tests := []struct {
 		pending map[string][]*base.TaskMessage
@@ -213,10 +215,10 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
 				t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
 			}
 		}
-		p.terminate()
+		p.shutdown()

 		mu.Lock()
-		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
+		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
 			t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
 		}
 		mu.Unlock()
@@ -226,9 +228,10 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
 // https://github.com/hibiken/asynq/issues/166
 func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
 	r := setup(t)
+	defer r.Close()
 	rdbClient := rdb.NewRDB(r)

-	m1 := h.NewTaskMessage("large_number", map[string]interface{}{"data": 111111111111111111})
+	m1 := h.NewTaskMessage("large_number", h.JSON(map[string]interface{}{"data": 111111111111111111}))
 	t1 := NewTask(m1.Type, m1.Payload)

 	tests := []struct {
@@ -250,10 +253,14 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
 	handler := func(ctx context.Context, task *Task) error {
 		mu.Lock()
 		defer mu.Unlock()
-		if data, err := task.Payload.GetInt("data"); err != nil {
-			t.Errorf("could not get data from payload: %v", err)
-		} else {
+		var payload map[string]int
+		if err := json.Unmarshal(task.Payload(), &payload); err != nil {
+			t.Errorf("could not decode payload: %v", err)
+		}
+		if data, ok := payload["data"]; ok {
 			t.Logf("data == %d", data)
+		} else {
+			t.Errorf("could not get data from payload")
 		}
 		processed = append(processed, task)
 		return nil
@@ -286,10 +293,10 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
 		if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
 			t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
 		}
-		p.terminate()
+		p.shutdown()

 		mu.Lock()
-		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmpopts.IgnoreUnexported(Payload{})); diff != "" {
+		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
 			t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
 		}
 		mu.Unlock()
@@ -298,6 +305,7 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
 func TestProcessorRetry(t *testing.T) {
 	r := setup(t)
+	defer r.Close()
 	rdbClient := rdb.NewRDB(r)

 	m1 := h.NewTaskMessage("send_email", nil)
@@ -308,66 +316,55 @@ func TestProcessorRetry(t *testing.T) {
 	errMsg := "something went wrong"
 	wrappedSkipRetry := fmt.Errorf("%s:%w", errMsg, SkipRetry)
-	now := time.Now()

 	tests := []struct {
 		desc         string              // test description
 		pending      []*base.TaskMessage // initial default queue state
-		incoming     []*base.TaskMessage // tasks to be enqueued during run
 		delay        time.Duration       // retry delay duration
 		handler      Handler             // task handler
 		wait         time.Duration       // wait duration between starting and stopping processor for this test case
-		wantRetry    []base.Z            // tasks in retry queue at the end
+		wantErrMsg   string              // error message the task should record
+		wantRetry    []*base.TaskMessage // tasks in retry queue at the end
 		wantArchived []*base.TaskMessage // tasks in archived queue at the end
 		wantErrCount int                 // number of times error handler should be called
 	}{
 		{
 			desc:     "Should automatically retry errored tasks",
-			pending:  []*base.TaskMessage{m1, m2},
-			incoming: []*base.TaskMessage{m3, m4},
+			pending:  []*base.TaskMessage{m1, m2, m3, m4},
 			delay:    time.Minute,
 			handler: HandlerFunc(func(ctx context.Context, task *Task) error {
 				return fmt.Errorf(errMsg)
 			}),
 			wait: 2 * time.Second,
-			wantRetry: []base.Z{
-				{Message: h.TaskMessageAfterRetry(*m2, errMsg), Score: now.Add(time.Minute).Unix()},
-				{Message: h.TaskMessageAfterRetry(*m3, errMsg), Score: now.Add(time.Minute).Unix()},
-				{Message: h.TaskMessageAfterRetry(*m4, errMsg), Score: now.Add(time.Minute).Unix()},
-			},
-			wantArchived: []*base.TaskMessage{h.TaskMessageWithError(*m1, errMsg)},
+			wantErrMsg:   errMsg,
+			wantRetry:    []*base.TaskMessage{m2, m3, m4},
+			wantArchived: []*base.TaskMessage{m1},
 			wantErrCount: 4,
 		},
 		{
 			desc:     "Should skip retry errored tasks",
 			pending:  []*base.TaskMessage{m1, m2},
-			incoming: []*base.TaskMessage{},
 			delay:    time.Minute,
 			handler: HandlerFunc(func(ctx context.Context, task *Task) error {
 				return SkipRetry // return SkipRetry without wrapping
 			}),
 			wait: 2 * time.Second,
-			wantRetry: []base.Z{},
-			wantArchived: []*base.TaskMessage{
-				h.TaskMessageWithError(*m1, SkipRetry.Error()),
-				h.TaskMessageWithError(*m2, SkipRetry.Error()),
-			},
+			wantErrMsg:   SkipRetry.Error(),
+			wantRetry:    []*base.TaskMessage{},
+			wantArchived: []*base.TaskMessage{m1, m2},
 			wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
 		},
 		{
 			desc:     "Should skip retry errored tasks (with error wrapping)",
 			pending:  []*base.TaskMessage{m1, m2},
-			incoming: []*base.TaskMessage{},
 			delay:    time.Minute,
 			handler: HandlerFunc(func(ctx context.Context, task *Task) error {
 				return wrappedSkipRetry
 			}),
 			wait: 2 * time.Second,
-			wantRetry: []base.Z{},
-			wantArchived: []*base.TaskMessage{
-				h.TaskMessageWithError(*m1, wrappedSkipRetry.Error()),
-				h.TaskMessageWithError(*m2, wrappedSkipRetry.Error()),
-			},
+			wantErrMsg:   wrappedSkipRetry.Error(),
+			wantRetry:    []*base.TaskMessage{},
+			wantArchived: []*base.TaskMessage{m1, m2},
 			wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
 		},
 	}
@@ -411,24 +408,34 @@ func TestProcessorRetry(t *testing.T) {
 		p.handler = tc.handler
 		p.start(&sync.WaitGroup{})
-		for _, msg := range tc.incoming {
-			err := rdbClient.Enqueue(msg)
-			if err != nil {
-				p.terminate()
-				t.Fatal(err)
-			}
-		}
+		runTime := time.Now() // time when processor is running
 		time.Sleep(tc.wait)   // FIXME: This makes test flaky.
-		p.terminate()
+		p.shutdown()

-		cmpOpt := h.EquateInt64Approx(1) // allow up to a second difference in zset score
+		cmpOpt := h.EquateInt64Approx(int64(tc.wait.Seconds())) // allow up to a wait-second difference in zset score
 		gotRetry := h.GetRetryEntries(t, r, base.DefaultQueueName)
-		if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
+		var wantRetry []base.Z // Note: construct wantRetry here since `LastFailedAt` and ZSCORE is relative to each test run.
+		for _, msg := range tc.wantRetry {
+			wantRetry = append(wantRetry,
+				base.Z{
+					Message: h.TaskMessageAfterRetry(*msg, tc.wantErrMsg, runTime),
+					Score:   runTime.Add(tc.delay).Unix(),
+				})
+		}
+		if diff := cmp.Diff(wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
 			t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.RetryKey(base.DefaultQueueName), diff)
 		}

-		gotDead := h.GetArchivedMessages(t, r, base.DefaultQueueName)
-		if diff := cmp.Diff(tc.wantArchived, gotDead, h.SortMsgOpt); diff != "" {
+		gotArchived := h.GetArchivedEntries(t, r, base.DefaultQueueName)
+		var wantArchived []base.Z // Note: construct wantArchived here since `LastFailedAt` and ZSCORE is relative to each test run.
+		for _, msg := range tc.wantArchived {
+			wantArchived = append(wantArchived,
+				base.Z{
+					Message: h.TaskMessageWithError(*msg, tc.wantErrMsg, runTime),
+					Score:   runTime.Unix(),
+				})
+		}
+		if diff := cmp.Diff(wantArchived, gotArchived, h.SortZSetEntryOpt, cmpOpt); diff != "" {
 			t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.ArchivedKey(base.DefaultQueueName), diff)
 		}
@@ -590,9 +597,9 @@ func TestProcessorWithStrictPriority(t *testing.T) {
 				t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
 			}
 		}
-		p.terminate()
+		p.shutdown()

-		if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" {
+		if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
 			t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
 		}
@@ -611,7 +618,7 @@ func TestProcessorPerform(t *testing.T) {
 			handler: func(ctx context.Context, t *Task) error {
 				return nil
 			},
-			task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+			task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
 			wantErr: false,
 		},
 		{
@@ -619,7 +626,7 @@ func TestProcessorPerform(t *testing.T) {
 			handler: func(ctx context.Context, t *Task) error {
 				return fmt.Errorf("something went wrong")
 			},
-			task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+			task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
 			wantErr: true,
 		},
 		{
@@ -627,7 +634,7 @@ func TestProcessorPerform(t *testing.T) {
 			handler: func(ctx context.Context, t *Task) error {
 				panic("something went terribly wrong")
 			},
-			task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+			task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
 			wantErr: true,
 		},
 	}


@@ -47,7 +47,7 @@ func newRecoverer(params recovererParams) *recoverer {
 	}
 }

-func (r *recoverer) terminate() {
+func (r *recoverer) shutdown() {
 	r.logger.Debug("Recoverer shutting down...")
 	// Signal the recoverer goroutine to stop polling.
 	r.done <- struct{}{}
@@ -57,6 +57,7 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
 	wg.Add(1)
 	go func() {
 		defer wg.Done()
+		r.recover()
 		timer := time.NewTimer(r.interval)
 		for {
 			select {
@@ -65,27 +66,31 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
 				timer.Stop()
 				return
 			case <-timer.C:
-				// Get all tasks which have expired 30 seconds ago or earlier.
-				deadline := time.Now().Add(-30 * time.Second)
-				msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
-				if err != nil {
-					r.logger.Warn("recoverer: could not list deadline exceeded tasks")
-					continue
-				}
-				const errMsg = "deadline exceeded" // TODO: better error message
-				for _, msg := range msgs {
-					if msg.Retried >= msg.Retry {
-						r.archive(msg, errMsg)
-					} else {
-						r.retry(msg, errMsg)
-					}
-				}
+				r.recover()
+				timer.Reset(r.interval)
 			}
 		}
 	}()
 }

+func (r *recoverer) recover() {
+	// Get all tasks which have expired 30 seconds ago or earlier.
+	deadline := time.Now().Add(-30 * time.Second)
+	msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
+	if err != nil {
+		r.logger.Warn("recoverer: could not list deadline exceeded tasks")
+		return
+	}
+	const errMsg = "deadline exceeded"
+	for _, msg := range msgs {
+		if msg.Retried >= msg.Retry {
+			r.archive(msg, errMsg)
+		} else {
+			r.retry(msg, errMsg)
+		}
+	}
+}
+
 func (r *recoverer) retry(msg *base.TaskMessage, errMsg string) {
 	delay := r.retryDelayFunc(msg.Retried, fmt.Errorf(errMsg), NewTask(msg.Type, msg.Payload))
 	retryAt := time.Now().Add(delay)

@@ -64,7 +64,7 @@ func TestRecoverer(t *testing.T) {
 				"default": {},
 			},
 			wantRetry: map[string][]*base.TaskMessage{
-				"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
+				"default": {t1},
 			},
 			wantArchived: map[string][]*base.TaskMessage{
 				"default": {},
@@ -101,7 +101,7 @@ func TestRecoverer(t *testing.T) {
 				"critical": {},
 			},
 			wantArchived: map[string][]*base.TaskMessage{
-				"default":  {h.TaskMessageWithError(*t4, "deadline exceeded")},
+				"default":  {t4},
 				"critical": {},
 			},
 		},
@@ -137,7 +137,7 @@ func TestRecoverer(t *testing.T) {
 				"critical": {{Message: t3, Score: oneHourFromNow.Unix()}},
 			},
 			wantRetry: map[string][]*base.TaskMessage{
-				"default":  {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
+				"default":  {t1},
 				"critical": {},
 			},
 			wantArchived: map[string][]*base.TaskMessage{
@@ -176,8 +176,8 @@ func TestRecoverer(t *testing.T) {
 				"default": {{Message: t2, Score: oneHourFromNow.Unix()}},
 			},
 			wantRetry: map[string][]*base.TaskMessage{
-				"default":  {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
-				"critical": {h.TaskMessageAfterRetry(*t3, "deadline exceeded")},
+				"default":  {t1},
+				"critical": {t3},
 			},
 			wantArchived: map[string][]*base.TaskMessage{
 				"default": {},
@@ -238,8 +238,9 @@ func TestRecoverer(t *testing.T) {
 		var wg sync.WaitGroup
 		recoverer.start(&wg)
+		runTime := time.Now() // time when recoverer is running
 		time.Sleep(2 * time.Second)
-		recoverer.terminate()
+		recoverer.shutdown()

 		for qname, want := range tc.wantActive {
 			gotActive := h.GetActiveMessages(t, r, qname)
@@ -253,15 +254,24 @@ func TestRecoverer(t *testing.T) {
 				t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.DeadlinesKey(qname), diff)
 			}
 		}
-		for qname, want := range tc.wantRetry {
+		cmpOpt := h.EquateInt64Approx(2) // allow up to two-second difference in `LastFailedAt`
+		for qname, msgs := range tc.wantRetry {
 			gotRetry := h.GetRetryMessages(t, r, qname)
-			if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
+			var wantRetry []*base.TaskMessage // Note: construct message here since `LastFailedAt` is relative to each test run
+			for _, msg := range msgs {
+				wantRetry = append(wantRetry, h.TaskMessageAfterRetry(*msg, "deadline exceeded", runTime))
+			}
+			if diff := cmp.Diff(wantRetry, gotRetry, h.SortMsgOpt, cmpOpt); diff != "" {
 				t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
 			}
 		}
-		for qname, want := range tc.wantArchived {
-			gotDead := h.GetArchivedMessages(t, r, qname)
-			if diff := cmp.Diff(want, gotDead, h.SortMsgOpt); diff != "" {
+		for qname, msgs := range tc.wantArchived {
+			gotArchived := h.GetArchivedMessages(t, r, qname)
+			var wantArchived []*base.TaskMessage
+			for _, msg := range msgs {
+				wantArchived = append(wantArchived, h.TaskMessageWithError(*msg, "deadline exceeded", runTime))
+			}
+			if diff := cmp.Diff(wantArchived, gotArchived, h.SortMsgOpt, cmpOpt); diff != "" {
 				t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.ArchivedKey(qname), diff)
 			}
 		}


@@ -21,7 +21,7 @@ import (
 // A Scheduler kicks off tasks at regular intervals based on the user defined schedule.
 type Scheduler struct {
 	id     string
-	status *base.ServerStatus
+	state  *base.ServerState
 	logger *log.Logger
 	client *Client
 	rdb    *rdb.RDB
@@ -61,7 +61,7 @@ func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
 	return &Scheduler{
 		id:     generateSchedulerID(),
-		status: base.NewServerStatus(base.StatusIdle),
+		state:  base.NewServerState(),
 		logger: logger,
 		client: NewClient(r),
 		rdb:    rdb.NewRDB(c),
@@ -117,7 +117,7 @@ type enqueueJob struct {
 }

 func (j *enqueueJob) Run() {
-	res, err := j.client.Enqueue(j.task, j.opts...)
+	info, err := j.client.Enqueue(j.task, j.opts...)
 	if err != nil {
 		j.logger.Errorf("scheduler could not enqueue a task %+v: %v", j.task, err)
 		if j.errHandler != nil {
@@ -125,10 +125,10 @@ func (j *enqueueJob) Run() {
 		}
 		return
 	}
-	j.logger.Debugf("scheduler enqueued a task: %+v", res)
+	j.logger.Debugf("scheduler enqueued a task: %+v", info)
 	event := &base.SchedulerEnqueueEvent{
-		TaskID:     res.ID,
-		EnqueuedAt: res.EnqueuedAt.In(j.location),
+		TaskID:     info.ID,
+		EnqueuedAt: time.Now().In(j.location),
 	}
 	err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
 	if err != nil {
@@ -170,22 +170,23 @@ func (s *Scheduler) Unregister(entryID string) error {
 }

 // Run starts the scheduler until an os signal to exit the program is received.
-// It returns an error if scheduler is already running or has been stopped.
+// It returns an error if scheduler is already running or has been shutdown.
 func (s *Scheduler) Run() error {
 	if err := s.Start(); err != nil {
 		return err
 	}
 	s.waitForSignals()
-	return s.Stop()
+	s.Shutdown()
+	return nil
 }

 // Start starts the scheduler.
-// It returns an error if the scheduler is already running or has been stopped.
+// It returns an error if the scheduler is already running or has been shutdown.
 func (s *Scheduler) Start() error {
-	switch s.status.Get() {
-	case base.StatusRunning:
+	switch s.state.Get() {
+	case base.StateActive:
 		return fmt.Errorf("asynq: the scheduler is already running")
-	case base.StatusStopped:
+	case base.StateClosed:
 		return fmt.Errorf("asynq: the scheduler has already been stopped")
 	}
 	s.logger.Info("Scheduler starting")
@@ -193,16 +194,12 @@ func (s *Scheduler) Start() error {
 	s.cron.Start()
 	s.wg.Add(1)
 	go s.runHeartbeater()
-	s.status.Set(base.StatusRunning)
+	s.state.Set(base.StateActive)
 	return nil
 }

-// Stop stops the scheduler.
-// It returns an error if the scheduler is not currently running.
-func (s *Scheduler) Stop() error {
-	if s.status.Get() != base.StatusRunning {
-		return fmt.Errorf("asynq: the scheduler is not running")
-	}
+// Shutdown stops and shuts down the scheduler.
+func (s *Scheduler) Shutdown() {
 	s.logger.Info("Scheduler shutting down")
 	close(s.done) // signal heartbeater to stop
 	ctx := s.cron.Stop()
@@ -212,9 +209,8 @@
 	s.clearHistory()
 	s.client.Close()
 	s.rdb.Close()
-	s.status.Set(base.StatusStopped)
+	s.state.Set(base.StateClosed)
 	s.logger.Info("Scheduler stopped")
-	return nil
 }

 func (s *Scheduler) runHeartbeater() {
@@ -240,8 +236,8 @@ func (s *Scheduler) beat() {
 		e := &base.SchedulerEntry{
 			ID:      job.id.String(),
 			Spec:    job.cronspec,
-			Type:    job.task.Type,
-			Payload: job.task.Payload.data,
+			Type:    job.task.Type(),
+			Payload: job.task.Payload(),
 			Opts:    stringifyOptions(job.opts),
 			Next:    entry.Next,
 			Prev:    entry.Prev,

@@ -67,9 +67,7 @@ func TestSchedulerRegister(t *testing.T) {
 			t.Fatal(err)
 		}
 		time.Sleep(tc.wait)
-		if err := scheduler.Stop(); err != nil {
-			t.Fatal(err)
-		}
+		scheduler.Shutdown()

 		got := asynqtest.GetPendingMessages(t, r, tc.queue)
 		if diff := cmp.Diff(tc.want, got, asynqtest.IgnoreIDOpt); diff != "" {
@@ -106,9 +104,7 @@ func TestSchedulerWhenRedisDown(t *testing.T) {
 	}
 	// Scheduler should attempt to enqueue the task three times (every 3s).
 	time.Sleep(10 * time.Second)
-	if err := scheduler.Stop(); err != nil {
-		t.Fatal(err)
-	}
+	scheduler.Shutdown()

 	mu.Lock()
 	if counter != 3 {
@@ -150,9 +146,7 @@ func TestSchedulerUnregister(t *testing.T) {
 			t.Fatal(err)
 		}
 		time.Sleep(tc.wait)
-		if err := scheduler.Stop(); err != nil {
-			t.Fatal(err)
-		}
+		scheduler.Shutdown()

 		got := asynqtest.GetPendingMessages(t, r, tc.queue)
 		if len(got) != 0 {


@@ -62,7 +62,7 @@ func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
 	mux.mu.RLock()
 	defer mux.mu.RUnlock()

-	h, pattern = mux.match(t.Type)
+	h, pattern = mux.match(t.Type())
 	if h == nil {
 		h, pattern = NotFoundHandler(), ""
 	}
@@ -151,7 +151,7 @@ func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
 // NotFound returns an error indicating that the handler was not found for the given task.
 func NotFound(ctx context.Context, task *Task) error {
-	return fmt.Errorf("handler not found for task %q", task.Type)
+	return fmt.Errorf("handler not found for task %q", task.Type())
 }

 // NotFoundHandler returns a simple task handler that returns a ``not found`` error.


@@ -68,7 +68,7 @@ func TestServeMux(t *testing.T) {
 		}

 		if called != tc.want {
-			t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
+			t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
 		}
 	}
 }
@@ -124,7 +124,7 @@ func TestServeMuxNotFound(t *testing.T) {
 		task := NewTask(tc.typename, nil)
 		err := mux.ProcessTask(context.Background(), task)
 		if err == nil {
-			t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type)
+			t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type())
 		}
 	}
 }
@@ -164,7 +164,7 @@ func TestServeMuxMiddlewares(t *testing.T) {
 		}

 		if called != tc.want {
-			t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
+			t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
 		}
 	}
 }

server.go

@@ -21,23 +21,24 @@ import (
 	"github.com/hibiken/asynq/internal/rdb"
 )

-// Server is responsible for managing the task processing.
+// Server is responsible for task processing and task lifecycle management.
 //
 // Server pulls tasks off queues and processes them.
 // If the processing of a task is unsuccessful, server will schedule it for a retry.
+//
 // A task will be retried until either the task gets processed successfully
 // or until it reaches its max retry count.
 //
 // If a task exhausts its retries, it will be moved to the archive and
-// will be kept in the archive for some time until a certain condition is met
-// (e.g., archive size reaches a certain limit, or the task has been in the
-// archive for a certain amount of time).
+// will be kept in the archive set.
+// Note that the archive size is finite and once it reaches its max size,
+// oldest tasks in the archive will be deleted.
 type Server struct {
 	logger *log.Logger

 	broker base.Broker

-	status *base.ServerStatus
+	state *base.ServerState

 	// wait group to wait for all goroutines to finish.
 	wg sync.WaitGroup
@@ -278,7 +279,7 @@ const (
 )

 // NewServer returns a new Server given a redis connection option
-// and background processing configuration.
+// and server configuration.
 func NewServer(r RedisConnOpt, cfg Config) *Server {
 	c, ok := r.MakeRedisClient().(redis.UniversalClient)
 	if !ok {
@@ -294,6 +295,9 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
 	}
 	queues := make(map[string]int)
 	for qname, p := range cfg.Queues {
+		if err := base.ValidateQueueName(qname); err != nil {
+			continue // ignore invalid queue names
+		}
 		if p > 0 {
 			queues[qname] = p
 		}
@@ -324,7 +328,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
 	starting := make(chan *workerInfo)
 	finished := make(chan *base.TaskMessage)
 	syncCh := make(chan *syncRequest)
-	status := base.NewServerStatus(base.StatusIdle)
+	state := base.NewServerState()
 	cancels := base.NewCancelations()

 	syncer := newSyncer(syncerParams{
@@ -339,7 +343,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
 		concurrency:    n,
 		queues:         queues,
 		strictPriority: cfg.StrictPriority,
-		status:         status,
+		state:          state,
 		starting:       starting,
 		finished:       finished,
 	})
@@ -384,7 +388,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
 	return &Server{
 		logger:    logger,
 		broker:    rdb,
-		status:    status,
+		state:     state,
 		forwarder: forwarder,
 		processor: processor,
 		syncer:    syncer,
@@ -400,11 +404,13 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
 // ProcessTask should return nil if the processing of a task
 // is successful.
 //
-// If ProcessTask return a non-nil error or panics, the task
-// will be retried after delay.
-// One exception to this rule is when ProcessTask returns SkipRetry error.
-// If the returned error is SkipRetry or the error wraps SkipRetry, retry is
-// skipped and task will be archived instead.
+// If ProcessTask returns a non-nil error or panics, the task
+// will be retried after delay if retry-count is remaining,
+// otherwise the task will be archived.
+//
+// One exception to this rule is when ProcessTask returns a SkipRetry error.
+// If the returned error is SkipRetry or an error wraps SkipRetry, retry is
+// skipped and the task will be immediately archived instead.
 type Handler interface {
 	ProcessTask(context.Context, *Task) error
 }
@@ -420,43 +426,46 @@ func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task) return fn(ctx, task)
} }
// ErrServerStopped indicates that the operation is now illegal because of the server being stopped. // ErrServerClosed indicates that the operation is now illegal because the server has been shut down.
var ErrServerStopped = errors.New("asynq: the server has been stopped") var ErrServerClosed = errors.New("asynq: Server closed")
// Run starts the background-task processing and blocks until // Run starts the task processing and blocks until
// an os signal to exit the program is received. Once it receives // an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other // a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks. // goroutines to process the tasks.
// //
// Run returns any error encountered during server startup time. // Run returns any error encountered at server startup time.
// If the server has already been stopped, ErrServerStopped is returned. // If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Run(handler Handler) error { func (srv *Server) Run(handler Handler) error {
if err := srv.Start(handler); err != nil { if err := srv.Start(handler); err != nil {
return err return err
} }
srv.waitForSignals() srv.waitForSignals()
srv.Stop() srv.Shutdown()
return nil return nil
} }
// Start starts the worker server. Once the server has started, // Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task. // it pulls tasks off queues and starts a worker goroutine for each task
// Tasks are processed concurrently by the workers up to the number of // and then calls the Handler to process it.
// concurrency specified at the initialization time. // Tasks are processed concurrently by the workers up to the number of
// concurrency specified in Config.Concurrency.
// //
// Start returns any error encountered during server startup time. // Start returns any error encountered at server startup time.
// If the server has already been stopped, ErrServerStopped is returned. // If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Start(handler Handler) error { func (srv *Server) Start(handler Handler) error {
if handler == nil { if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler") return fmt.Errorf("asynq: server cannot run with nil handler")
} }
switch srv.status.Get() { switch srv.state.Get() {
case base.StatusRunning: case base.StateActive:
return fmt.Errorf("asynq: the server is already running") return fmt.Errorf("asynq: the server is already running")
case base.StatusStopped: case base.StateStopped:
return ErrServerStopped return fmt.Errorf("asynq: the server is in the stopped state. Waiting for shutdown.")
case base.StateClosed:
return ErrServerClosed
} }
srv.status.Set(base.StatusRunning) srv.state.Set(base.StateActive)
srv.processor.handler = handler srv.processor.handler = handler
srv.logger.Info("Starting processing") srv.logger.Info("Starting processing")
@@ -471,43 +480,46 @@ func (srv *Server) Start(handler Handler) error {
return nil return nil
} }
// Stop stops the worker server. // Shutdown gracefully shuts down the server.
// It gracefully closes all active workers. The server will wait for // It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for duration specified in Config.ShutdownTimeout. // active workers to finish processing tasks for duration specified in Config.ShutdownTimeout.
// If a worker didn't finish processing a task within the timeout, the task will be pushed back to Redis. // If a worker didn't finish processing a task within the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() { func (srv *Server) Shutdown() {
switch srv.status.Get() { switch srv.state.Get() {
case base.StatusIdle, base.StatusStopped: case base.StateNew, base.StateClosed:
// server is not running, do nothing and return. // server is not running, do nothing and return.
return return
} }
srv.logger.Info("Starting graceful shutdown") srv.logger.Info("Starting graceful shutdown")
// Note: The order of termination is important. // Note: The order of shutdown is important.
// Sender goroutines should be terminated before the receiver goroutines. // Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh) // processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels) // processor -> heartbeater (via starting, finished channels)
srv.forwarder.terminate() srv.forwarder.shutdown()
srv.processor.terminate() srv.processor.shutdown()
srv.recoverer.terminate() srv.recoverer.shutdown()
srv.syncer.terminate() srv.syncer.shutdown()
srv.subscriber.terminate() srv.subscriber.shutdown()
srv.healthchecker.terminate() srv.healthchecker.shutdown()
srv.heartbeater.terminate() srv.heartbeater.shutdown()
srv.wg.Wait() srv.wg.Wait()
srv.broker.Close() srv.broker.Close()
srv.status.Set(base.StatusStopped) srv.state.Set(base.StateClosed)
srv.logger.Info("Exiting") srv.logger.Info("Exiting")
} }
// Quiet signals the server to stop pulling new tasks off queues. // Stop signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server. // Stop can be used before shutting down the server to ensure that all
func (srv *Server) Quiet() { // currently active tasks are processed before server shutdown.
//
// Stop does not shut down the server; make sure to call Shutdown before exit.
func (srv *Server) Stop() {
srv.logger.Info("Stopping processor") srv.logger.Info("Stopping processor")
srv.processor.stop() srv.processor.stop()
srv.status.Set(base.StatusQuiet) srv.state.Set(base.StateStopped)
srv.logger.Info("Processor stopped") srv.logger.Info("Processor stopped")
} }
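The renames above amount to a small lifecycle: a server moves new → active (Start) → stopped (Stop) → closed (Shutdown), and Start rejects anything but the new state. A minimal sketch of that state machine, assuming the states and error messages shown in the diff:

```go
package main

import (
	"errors"
	"fmt"
)

// serverState mirrors the lifecycle used above.
type serverState int

const (
	stateNew serverState = iota
	stateActive
	stateStopped
	stateClosed
)

var errServerClosed = errors.New("asynq: Server closed")

// start models (*Server).Start's state checks: only a brand-new
// server may begin processing.
func start(s serverState) (serverState, error) {
	switch s {
	case stateActive:
		return s, errors.New("already running")
	case stateStopped:
		return s, errors.New("stopped; waiting for shutdown")
	case stateClosed:
		return s, errServerClosed
	}
	return stateActive, nil
}

func main() {
	s, _ := start(stateNew) // new -> active
	s = stateStopped        // Stop(): active -> stopped
	s = stateClosed         // Shutdown(): -> closed
	_, err := start(s)      // restarting a closed server fails
	fmt.Println(err)
}
```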


@@ -11,6 +11,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker" "github.com/hibiken/asynq/internal/testbroker"
"go.uber.org/goleak" "go.uber.org/goleak"
@@ -39,17 +40,17 @@ func TestServer(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123})) _, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 123})))
if err != nil { if err != nil {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
_, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 456}), ProcessIn(1*time.Hour)) _, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 456})), ProcessIn(1*time.Hour))
if err != nil { if err != nil {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
srv.Stop() srv.Shutdown()
} }
func TestServerRun(t *testing.T) { func TestServerRun(t *testing.T) {
@@ -81,16 +82,16 @@ func TestServerRun(t *testing.T) {
} }
} }
func TestServerErrServerStopped(t *testing.T) { func TestServerErrServerClosed(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
handler := NewServeMux() handler := NewServeMux()
if err := srv.Start(handler); err != nil { if err := srv.Start(handler); err != nil {
t.Fatal(err) t.Fatal(err)
} }
srv.Stop() srv.Shutdown()
err := srv.Start(handler) err := srv.Start(handler)
if err != ErrServerStopped { if err != ErrServerClosed {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err) t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerClosed error", err)
} }
} }
@@ -99,7 +100,7 @@ func TestServerErrNilHandler(t *testing.T) {
err := srv.Start(nil) err := srv.Start(nil)
if err == nil { if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error") t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop() srv.Shutdown()
} }
} }
@@ -113,7 +114,7 @@ func TestServerErrServerRunning(t *testing.T) {
if err == nil { if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error") t.Error("Calling (*Server).Start(handler) on already running server did not return error")
} }
srv.Stop() srv.Shutdown()
} }
func TestServerWithRedisDown(t *testing.T) { func TestServerWithRedisDown(t *testing.T) {
@@ -145,7 +146,7 @@ func TestServerWithRedisDown(t *testing.T) {
time.Sleep(3 * time.Second) time.Sleep(3 * time.Second)
srv.Stop() srv.Shutdown()
} }
func TestServerWithFlakyBroker(t *testing.T) { func TestServerWithFlakyBroker(t *testing.T) {
@@ -169,8 +170,8 @@ func TestServerWithFlakyBroker(t *testing.T) {
h := func(ctx context.Context, task *Task) error { h := func(ctx context.Context, task *Task) error {
// force task retry. // force task retry.
if task.Type == "bad_task" { if task.Type() == "bad_task" {
return fmt.Errorf("could not process %q", task.Type) return fmt.Errorf("could not process %q", task.Type())
} }
time.Sleep(2 * time.Second) time.Sleep(2 * time.Second)
return nil return nil
@@ -206,7 +207,7 @@ func TestServerWithFlakyBroker(t *testing.T) {
time.Sleep(3 * time.Second) time.Sleep(3 * time.Second)
srv.Stop() srv.Shutdown()
} }
func TestLogLevel(t *testing.T) { func TestLogLevel(t *testing.T) {


@@ -22,7 +22,7 @@ func (srv *Server) waitForSignals() {
for { for {
sig := <-sigs sig := <-sigs
if sig == unix.SIGTSTP { if sig == unix.SIGTSTP {
srv.Quiet() srv.Stop()
continue continue
} }
break break


@@ -43,7 +43,7 @@ func newSubscriber(params subscriberParams) *subscriber {
} }
} }
func (s *subscriber) terminate() { func (s *subscriber) shutdown() {
s.logger.Debug("Subscriber shutting down...") s.logger.Debug("Subscriber shutting down...")
// Signal the subscriber goroutine to stop. // Signal the subscriber goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}


@@ -46,7 +46,7 @@ func TestSubscriber(t *testing.T) {
}) })
var wg sync.WaitGroup var wg sync.WaitGroup
subscriber.start(&wg) subscriber.start(&wg)
defer subscriber.terminate() defer subscriber.shutdown()
// wait for subscriber to establish connection to pubsub channel // wait for subscriber to establish connection to pubsub channel
time.Sleep(time.Second) time.Sleep(time.Second)
@@ -91,7 +91,7 @@ func TestSubscriberWithRedisDown(t *testing.T) {
testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis. testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
var wg sync.WaitGroup var wg sync.WaitGroup
subscriber.start(&wg) subscriber.start(&wg)
defer subscriber.terminate() defer subscriber.shutdown()
time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis. time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.


@@ -46,7 +46,7 @@ func newSyncer(params syncerParams) *syncer {
} }
} }
func (s *syncer) terminate() { func (s *syncer) shutdown() {
s.logger.Debug("Syncer shutting down...") s.logger.Debug("Syncer shutting down...")
// Signal the syncer goroutine to stop. // Signal the syncer goroutine to stop.
s.done <- struct{}{} s.done <- struct{}{}


@@ -35,7 +35,7 @@ func TestSyncer(t *testing.T) {
}) })
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
defer syncer.terminate() defer syncer.shutdown()
for _, msg := range inProgress { for _, msg := range inProgress {
m := msg m := msg
@@ -66,7 +66,7 @@ func TestSyncerRetry(t *testing.T) {
var wg sync.WaitGroup var wg sync.WaitGroup
syncer.start(&wg) syncer.start(&wg)
defer syncer.terminate() defer syncer.shutdown()
var ( var (
mu sync.Mutex mu sync.Mutex
@@ -131,7 +131,7 @@ func TestSyncerDropsStaleRequests(t *testing.T) {
} }
time.Sleep(2 * interval) // ensure that syncer runs at least once time.Sleep(2 * interval) // ensure that syncer runs at least once
syncer.terminate() syncer.shutdown()
mu.Lock() mu.Lock()
if n != 0 { if n != 0 {


@@ -11,7 +11,7 @@ import (
"sort" "sort"
"time" "time"
"github.com/hibiken/asynq/inspeq" "github.com/hibiken/asynq"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@@ -63,7 +63,7 @@ func cronList(cmd *cobra.Command, args []string) {
cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"} cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"}
printRows := func(w io.Writer, tmpl string) { printRows := func(w io.Writer, tmpl string) {
for _, e := range entries { for _, e := range entries {
fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type, e.Task.Payload, e.Opts, fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type(), formatPayload(e.Task.Payload()), e.Opts,
nextEnqueue(e.Next), prevEnqueue(e.Prev)) nextEnqueue(e.Next), prevEnqueue(e.Prev))
} }
} }
@@ -108,7 +108,7 @@ func cronHistory(cmd *cobra.Command, args []string) {
fmt.Printf("Entry: %s\n\n", entryID) fmt.Printf("Entry: %s\n\n", entryID)
events, err := inspector.ListSchedulerEnqueueEvents( events, err := inspector.ListSchedulerEnqueueEvents(
entryID, inspeq.PageSize(pageSize), inspeq.Page(pageNum)) entryID, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil { if err != nil {
fmt.Printf("error: %v\n", err) fmt.Printf("error: %v\n", err)
continue continue


@@ -14,376 +14,391 @@ import (
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/spf13/cast" "github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper"
) )
// migrateCmd represents the migrate command.
var migrateCmd = &cobra.Command{ var migrateCmd = &cobra.Command{
Use: "migrate", Use: "migrate",
Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq v%s", base.Version), Short: fmt.Sprintf("Migrate existing tasks and queues to be asynq%s compatible", base.Version),
Args: cobra.NoArgs, Long: `Migrate (asynq migrate) will migrate existing tasks and queues in redis to be compatible with the latest version of asynq.
Run: migrate, `,
Args: cobra.NoArgs,
Run: migrate,
} }
func init() { func init() {
rootCmd.AddCommand(migrateCmd) rootCmd.AddCommand(migrateCmd)
} }
func migrate(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := createRDB()
/*** Migrate from 0.9 to 0.10, 0.11 compatible ***/
lists := []string{"asynq:in_progress"}
allQueues, err := c.SMembers(base.AllQueues).Result()
if err != nil {
printError(fmt.Errorf("could not read all queues: %v", err))
os.Exit(1)
}
lists = append(lists, allQueues...)
for _, key := range lists {
if err := migrateList(c, key); err != nil {
printError(err)
os.Exit(1)
}
}
zsets := []string{"asynq:scheduled", "asynq:retry", "asynq:dead"}
for _, key := range zsets {
if err := migrateZSet(c, key); err != nil {
printError(err)
os.Exit(1)
}
}
/*** Migrate from 0.11 to 0.12 compatible ***/
if err := createBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := c.SAdd(base.AllQueues, qname).Err(); err != nil {
err = fmt.Errorf("could not add queue name %q to %q set: %v\n",
qname, base.AllQueues, err)
printError(err)
os.Exit(1)
}
}
if err := deleteBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if exists := c.Exists(qkey).Val(); exists == 1 {
if err := c.Rename(qkey, base.QueueKey(qname)).Err(); err != nil {
printError(fmt.Errorf("could not rename key %q: %v\n", qkey, err))
os.Exit(1)
}
}
}
if err := partitionZSetMembersByQueue(c, "asynq:scheduled", base.ScheduledKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:retry", base.RetryKey); err != nil {
printError(err)
os.Exit(1)
}
// Note: base.DeadKey function was renamed in v0.14. We define the legacy function here since we need it for this migration script.
deadKeyFunc := func(qname string) string { return fmt.Sprintf("asynq:{%s}:dead", qname) }
if err := partitionZSetMembersByQueue(c, "asynq:dead", deadKeyFunc); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:deadlines", base.DeadlinesKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionListMembersByQueue(c, "asynq:in_progress", base.ActiveKey); err != nil {
printError(err)
os.Exit(1)
}
paused, err := c.SMembers("asynq:paused").Result()
if err != nil {
printError(fmt.Errorf("command SMEMBERS asynq:paused failed: %v", err))
os.Exit(1)
}
for _, qkey := range paused {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := r.Pause(qname); err != nil {
printError(err)
os.Exit(1)
}
}
if err := deleteKey(c, "asynq:paused"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:servers"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:workers"); err != nil {
printError(err)
os.Exit(1)
}
/*** Migrate from 0.13 to 0.14 compatible ***/
// Move all dead tasks to archived ZSET.
for _, qname := range allQueues {
zs, err := c.ZRangeWithScores(deadKeyFunc(qname), 0, -1).Result()
if err != nil {
printError(err)
os.Exit(1)
}
for _, z := range zs {
if err := c.ZAdd(base.ArchivedKey(qname), &z).Err(); err != nil {
printError(err)
os.Exit(1)
}
}
if err := deleteKey(c, deadKeyFunc(qname)); err != nil {
printError(err)
os.Exit(1)
}
}
}
func backupKey(key string) string { func backupKey(key string) string {
return fmt.Sprintf("%s:backup", key) return fmt.Sprintf("%s:backup", key)
} }
func createBackup(c *redis.Client, key string) error { func renameKeyAsBackup(c redis.UniversalClient, key string) error {
err := c.Rename(key, backupKey(key)).Err() if c.Exists(key).Val() == 0 {
return nil // key doesn't exist; no-op
}
return c.Rename(key, backupKey(key)).Err()
}
func failIfError(err error, msg string) {
if err != nil { if err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err) fmt.Printf("error: %s: %v\n", msg, err)
fmt.Println("*** Please report this issue at https://github.com/hibiken/asynq/issues ***")
os.Exit(1)
} }
return nil
} }
func deleteBackup(c *redis.Client, key string) error { func logIfError(err error, msg string) {
return deleteKey(c, backupKey(key))
}
func deleteKey(c *redis.Client, key string) error {
exists := c.Exists(key).Val()
if exists == 0 {
// key does not exist
return nil
}
err := c.Del(key).Err()
if err != nil { if err != nil {
return fmt.Errorf("could not delete key %q: %v", key, err) fmt.Printf("warning: %s: %v\n", msg, err)
} }
return nil
} }
func printError(err error) { func migrate(cmd *cobra.Command, args []string) {
fmt.Println(err) r := createRDB()
fmt.Println() queues, err := r.AllQueues()
fmt.Println("Migrate command error") failIfError(err, "Failed to get queue names")
fmt.Println("Please file an issue on Github at https://github.com/hibiken/asynq/issues/new/choose")
}
func partitionZSetMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error { // ---------------------------------------------
zs, err := c.ZRangeWithScores(key, 0, -1).Result() // Pre-check: Ensure no active servers, tasks.
if err != nil { // ---------------------------------------------
return fmt.Errorf("command ZRANGE %s 0 -1 WITHSCORES failed: %v", key, err) srvs, err := r.ListServers()
failIfError(err, "Failed to get server infos")
if len(srvs) > 0 {
fmt.Println("(error): Server(s) still running. Please ensure that no asynq servers are running when running the migrate command.")
os.Exit(1)
} }
for _, z := range zs { for _, qname := range queues {
s := cast.ToString(z.Member) stats, err := r.CurrentStats(qname)
msg, err := base.DecodeMessage(s) failIfError(err, "Failed to get stats")
if err != nil { if stats.Active > 0 {
return fmt.Errorf("could not decode message from %q: %v", key, err) fmt.Printf("(error): %d active tasks found. Please ensure that no active tasks exist when running migrate command.\n", stats.Active)
} os.Exit(1)
if err := c.ZAdd(newKeyFunc(msg.Queue), &z).Err(); err != nil {
return fmt.Errorf("could not add %v to %q: %v", z, newKeyFunc(msg.Queue), err)
} }
} }
if err := deleteKey(c, key); err != nil {
return err
}
return nil
}
func partitionListMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error { // ---------------------------------------------
data, err := c.LRange(key, 0, -1).Result() // Rename pending key
if err != nil { // ---------------------------------------------
return fmt.Errorf("command LRANGE %s 0 -1 failed: %v", key, err) fmt.Print("Renaming pending keys...")
} for _, qname := range queues {
for _, s := range data { oldKey := fmt.Sprintf("asynq:{%s}", qname)
msg, err := base.DecodeMessage(s) if r.Client().Exists(oldKey).Val() == 0 {
if err != nil { continue
return fmt.Errorf("could not decode message from %q: %v", key, err)
} }
if err := c.LPush(newKeyFunc(msg.Queue), s).Err(); err != nil { newKey := base.PendingKey(qname)
return fmt.Errorf("could not add %v to %q: %v", s, newKeyFunc(msg.Queue)) err := r.Client().Rename(oldKey, newKey).Err()
failIfError(err, "Failed to rename key")
}
fmt.Print("Done\n")
// ---------------------------------------------
// Rename keys as backup
// ---------------------------------------------
fmt.Print("Renaming keys for backup...")
for _, qname := range queues {
keys := []string{
base.ActiveKey(qname),
base.PendingKey(qname),
base.ScheduledKey(qname),
base.RetryKey(qname),
base.ArchivedKey(qname),
}
for _, key := range keys {
err := renameKeyAsBackup(r.Client(), key)
failIfError(err, fmt.Sprintf("Failed to rename key %q for backup", key))
} }
} }
if err := deleteKey(c, key); err != nil { fmt.Print("Done\n")
return err
// ---------------------------------------------
// Update to new schema
// ---------------------------------------------
fmt.Print("Updating to new schema...")
for _, qname := range queues {
updatePendingMessages(r, qname)
updateZSetMessages(r.Client(), base.ScheduledKey(qname), "scheduled")
updateZSetMessages(r.Client(), base.RetryKey(qname), "retry")
updateZSetMessages(r.Client(), base.ArchivedKey(qname), "archived")
} }
return nil fmt.Print("Done\n")
// ---------------------------------------------
// Delete backup keys
// ---------------------------------------------
fmt.Print("Deleting backup keys...")
for _, qname := range queues {
keys := []string{
backupKey(base.ActiveKey(qname)),
backupKey(base.PendingKey(qname)),
backupKey(base.ScheduledKey(qname)),
backupKey(base.RetryKey(qname)),
backupKey(base.ArchivedKey(qname)),
}
for _, key := range keys {
err := r.Client().Del(key).Err()
failIfError(err, "Failed to delete backup key")
}
}
fmt.Print("Done\n")
} }
type oldTaskMessage struct { func UnmarshalOldMessage(encoded string) (*base.TaskMessage, error) {
// Unchanged oldMsg, err := DecodeMessage(encoded)
Type string
Payload map[string]interface{}
ID uuid.UUID
Queue string
Retry int
Retried int
ErrorMsg string
UniqueKey string
// Following fields have changed.
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Timeout string
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
}
var defaultTimeout = 30 * time.Minute
func convertMessage(old *oldTaskMessage) (*base.TaskMessage, error) {
timeout, err := time.ParseDuration(old.Timeout)
if err != nil { if err != nil {
return nil, fmt.Errorf("could not parse Timeout field of %+v", old) return nil, err
} }
deadline, err := time.Parse(time.RFC3339, old.Deadline) payload, err := json.Marshal(oldMsg.Payload)
if err != nil { if err != nil {
return nil, fmt.Errorf("could not parse Deadline field of %+v", old) return nil, fmt.Errorf("could not marshal payload: %v", err)
}
if timeout == 0 && deadline.IsZero() {
timeout = defaultTimeout
}
if deadline.IsZero() {
// Zero value used to be time.Time{},
// in the new schema zero value is represented by
// zero in Unix time.
deadline = time.Unix(0, 0)
} }
return &base.TaskMessage{ return &base.TaskMessage{
Type: old.Type, Type: oldMsg.Type,
Payload: old.Payload, Payload: payload,
ID: uuid.New(), ID: oldMsg.ID,
Queue: old.Queue, Queue: oldMsg.Queue,
Retry: old.Retry, Retry: oldMsg.Retry,
Retried: old.Retried, Retried: oldMsg.Retried,
ErrorMsg: old.ErrorMsg, ErrorMsg: oldMsg.ErrorMsg,
UniqueKey: old.UniqueKey, LastFailedAt: 0,
Timeout: int64(timeout.Seconds()), Timeout: oldMsg.Timeout,
Deadline: deadline.Unix(), Deadline: oldMsg.Deadline,
UniqueKey: oldMsg.UniqueKey,
}, nil }, nil
} }
func deserialize(s string) (*base.TaskMessage, error) { // TaskMessage from v0.17
// Try deserializing as old message. type OldTaskMessage struct {
// Type indicates the kind of the task to be performed.
Type string
// Payload holds data needed to process the task.
Payload map[string]interface{}
// ID is a unique identifier for each task.
ID uuid.UUID
// Queue is a name this message should be enqueued to.
Queue string
// Retry is the max number of retry for this task.
Retry int
// Retried is the number of times we've retried this task so far.
Retried int
// ErrorMsg holds the error message from the last failure.
ErrorMsg string
// Timeout specifies timeout in seconds.
// If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
//
// Use zero to indicate no timeout.
Timeout int64
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// If task processing doesn't complete before the deadline, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
//
// Use zero to indicate no deadline.
Deadline int64
// UniqueKey holds the redis key used for uniqueness lock for this task.
//
// Empty string indicates that no uniqueness lock was used.
UniqueKey string
}
// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
// Code from v0.17.
func DecodeMessage(s string) (*OldTaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s)) d := json.NewDecoder(strings.NewReader(s))
d.UseNumber() d.UseNumber()
var old *oldTaskMessage var msg OldTaskMessage
if err := d.Decode(&old); err != nil { if err := d.Decode(&msg); err != nil {
// Try deserializing as new message. return nil, err
d = json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg *base.TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, fmt.Errorf("could not deserialize %s into task message: %v", s, err)
}
return msg, nil
} }
return convertMessage(old) return &msg, nil
} }
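The core of UnmarshalOldMessage above is the payload conversion: the v0.17 `map[string]interface{}` payload becomes raw JSON bytes in the new schema. A minimal sketch of just that step:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// convertPayload turns a v0.17-style map payload into the JSON
// bytes stored by the new schema, as UnmarshalOldMessage does above.
func convertPayload(old map[string]interface{}) ([]byte, error) {
	return json.Marshal(old)
}

func main() {
	b, err := convertPayload(map[string]interface{}{"recipient_id": 123})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"recipient_id":123}
}
```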
func migrateZSet(c *redis.Client, key string) error { func updatePendingMessages(r *rdb.RDB, qname string) {
if c.Exists(key).Val() == 0 { data, err := r.Client().LRange(backupKey(base.PendingKey(qname)), 0, -1).Result()
// skip if key doesn't exist. failIfError(err, "Failed to read backup pending key")
return nil
for _, s := range data {
msg, err := UnmarshalOldMessage(s)
failIfError(err, "Failed to unmarshal message")
if msg.UniqueKey != "" {
ttl, err := r.Client().TTL(msg.UniqueKey).Result()
failIfError(err, "Failed to get ttl")
if ttl > 0 {
err = r.Client().Del(msg.UniqueKey).Err()
logIfError(err, "Failed to delete unique key")
}
// Regenerate unique key.
msg.UniqueKey = base.UniqueKey(msg.Queue, msg.Type, msg.Payload)
if ttl > 0 {
err = r.EnqueueUnique(msg, ttl)
} else {
err = r.Enqueue(msg)
}
failIfError(err, "Failed to enqueue message")
} else {
err := r.Enqueue(msg)
failIfError(err, "Failed to enqueue message")
}
} }
res, err := c.ZRangeWithScores(key, 0, -1).Result() }
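The migration above regenerates each uniqueness key with base.UniqueKey; per the changelog, the new scheme uses an md5 checksum of the payload so the key stays fixed-length. A sketch of that idea (the key layout here is illustrative, not asynq's canonical format):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// uniqueKey derives a uniqueness-lock key from queue, task type,
// and an md5 checksum of the payload, so identical tasks map to
// the same key regardless of payload size.
func uniqueKey(qname, tasktype string, payload []byte) string {
	checksum := md5.Sum(payload)
	return fmt.Sprintf("asynq:{%s}:unique:%s:%x", qname, tasktype, checksum)
}

func main() {
	k1 := uniqueKey("default", "send_email", []byte(`{"recipient_id":123}`))
	k2 := uniqueKey("default", "send_email", []byte(`{"recipient_id":123}`))
	fmt.Println(k1 == k2) // true: same payload, same key
}
```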
// KEYS[1] -> asynq:{<qname>}:t:<task_id>
// KEYS[2] -> asynq:{<qname>}:scheduled
// ARGV[1] -> task message data
// ARGV[2] -> zset score
// ARGV[3] -> task ID
// ARGV[4] -> task timeout in seconds (0 if no timeout)
// ARGV[5] -> task deadline in unix time (0 if no deadline)
// ARGV[6] -> task state (e.g. "retry", "archived")
var taskZAddCmd = redis.NewScript(`
redis.call("HSET", KEYS[1],
"msg", ARGV[1],
"state", ARGV[6],
"timeout", ARGV[4],
"deadline", ARGV[5])
redis.call("ZADD", KEYS[2], ARGV[2], ARGV[3])
return 1
`)
// ZAddTask adds task to zset.
func ZAddTask(c redis.UniversalClient, key string, msg *base.TaskMessage, score float64, state string) error {
// Special case; LastFailedAt field is new so assign a value inferred from zscore.
if state == "archived" {
msg.LastFailedAt = int64(score)
}
encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
var msgs []*redis.Z if err := c.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
for _, z := range res { return err
s, err := cast.ToStringE(z.Member)
if err != nil {
return fmt.Errorf("could not cast to string: %v", err)
}
msg, err := deserialize(s)
if err != nil {
return err
}
encoded, err := base.EncodeMessage(msg)
if err != nil {
return fmt.Errorf("could not encode message from %q: %v", key, err)
}
msgs = append(msgs, &redis.Z{Score: z.Score, Member: encoded})
} }
if err := c.Rename(key, key+":backup").Err(); err != nil { keys := []string{
return fmt.Errorf("could not rename key %q: %v", key, err) base.TaskKey(msg.Queue, msg.ID.String()),
key,
} }
if err := c.ZAdd(key, msgs...).Err(); err != nil { argv := []interface{}{
return fmt.Errorf("could not write new messages to %q: %v", key, err) encoded,
score,
msg.ID.String(),
msg.Timeout,
msg.Deadline,
state,
} }
if err := c.Del(key + ":backup").Err(); err != nil { return taskZAddCmd.Run(c, keys, argv...).Err()
return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err) }
// KEYS[1] -> unique key
// KEYS[2] -> asynq:{<qname>}:t:<task_id>
// KEYS[3] -> zset key (e.g. asynq:{<qname>}:scheduled)
// --
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message
// ARGV[5] -> task timeout in seconds (0 if no timeout)
// ARGV[6] -> task deadline in unix time (0 if no deadline)
// ARGV[7] -> task state (oneof "scheduled", "retry", "archived")
var taskZAddUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
return 0
end
redis.call("HSET", KEYS[2],
"msg", ARGV[4],
"state", ARGV[7],
"timeout", ARGV[5],
"deadline", ARGV[6],
"unique_key", KEYS[1])
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[1])
return 1
`)
// ZAddTaskUnique adds the task to the zset to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func ZAddTaskUnique(c redis.UniversalClient, key string, msg *base.TaskMessage, score float64, state string, ttl time.Duration) error {
encoded, err := base.EncodeMessage(msg)
if err != nil {
return err
}
if err := c.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
keys := []string{
msg.UniqueKey,
base.TaskKey(msg.Queue, msg.ID.String()),
key,
}
argv := []interface{}{
msg.ID.String(),
int(ttl.Seconds()),
score,
encoded,
msg.Timeout,
msg.Deadline,
state,
}
res, err := taskZAddUniqueCmd.Run(c, keys, argv...).Result()
if err != nil {
return err
}
n, ok := res.(int64)
if !ok {
return errors.E(errors.Internal, fmt.Sprintf("cast error: unexpected return value from Lua script: %v", res))
}
if n == 0 {
return errors.E(errors.AlreadyExists, errors.ErrDuplicateTask)
	}
	return nil
}

func updateZSetMessages(c redis.UniversalClient, key, state string) {
	zs, err := c.ZRangeWithScores(backupKey(key), 0, -1).Result()
	failIfError(err, "Failed to read")

	for _, z := range zs {
		msg, err := UnmarshalOldMessage(z.Member.(string))
		failIfError(err, "Failed to unmarshal message")

		if msg.UniqueKey != "" {
			ttl, err := c.TTL(msg.UniqueKey).Result()
			failIfError(err, "Failed to get ttl")

			if ttl > 0 {
				err = c.Del(msg.UniqueKey).Err()
				logIfError(err, "Failed to delete unique key")
			}

			// Regenerate unique key.
			msg.UniqueKey = base.UniqueKey(msg.Queue, msg.Type, msg.Payload)
			if ttl > 0 {
				err = ZAddTaskUnique(c, key, msg, z.Score, state, ttl)
			} else {
				err = ZAddTask(c, key, msg, z.Score, state)
			}
			failIfError(err, "Failed to zadd message")
		} else {
			err := ZAddTask(c, key, msg, z.Score, state)
			failIfError(err, "Failed to enqueue scheduled message")
		}
	}
}

View File

@@ -10,8 +10,8 @@ import (
	"os"

	"github.com/fatih/color"
-	"github.com/hibiken/asynq/inspeq"
-	"github.com/hibiken/asynq/internal/rdb"
+	"github.com/hibiken/asynq"
+	"github.com/hibiken/asynq/internal/errors"
	"github.com/spf13/cobra"
)
@@ -82,7 +82,7 @@ func queueList(cmd *cobra.Command, args []string) {
type queueInfo struct {
	name    string
	keyslot int64
-	nodes   []inspeq.ClusterNode
+	nodes   []*asynq.ClusterNode
}
inspector := createInspector()
queues, err := inspector.Queues()
@@ -90,7 +90,7 @@ func queueList(cmd *cobra.Command, args []string) {
	fmt.Printf("error: Could not fetch list of queues: %v\n", err)
	os.Exit(1)
}
-var qs []queueInfo
+var qs []*queueInfo
for _, qname := range queues {
	q := queueInfo{name: qname}
	if useRedisCluster {
@@ -107,7 +107,7 @@ func queueList(cmd *cobra.Command, args []string) {
	}
	q.nodes = nodes
}
-qs = append(qs, q)
+qs = append(qs, &q)
}
if useRedisCluster {
	printTable(
@@ -129,43 +129,42 @@ func queueInspect(cmd *cobra.Command, args []string) {
inspector := createInspector()
for i, qname := range args {
	if i > 0 {
-		fmt.Printf("\n%s\n", separator)
+		fmt.Printf("\n%s\n\n", separator)
	}
-	fmt.Println()
-	stats, err := inspector.CurrentStats(qname)
+	info, err := inspector.GetQueueInfo(qname)
	if err != nil {
		fmt.Printf("error: %v\n", err)
		continue
	}
-	printQueueStats(stats)
+	printQueueInfo(info)
}
}

-func printQueueStats(s *inspeq.QueueStats) {
+func printQueueInfo(info *asynq.QueueInfo) {
	bold := color.New(color.Bold)
	bold.Println("Queue Info")
-	fmt.Printf("Name:   %s\n", s.Queue)
-	fmt.Printf("Size:   %d\n", s.Size)
-	fmt.Printf("Paused: %t\n\n", s.Paused)
+	fmt.Printf("Name:   %s\n", info.Queue)
+	fmt.Printf("Size:   %d\n", info.Size)
+	fmt.Printf("Paused: %t\n\n", info.Paused)
	bold.Println("Task Count by State")
	printTable(
		[]string{"active", "pending", "scheduled", "retry", "archived"},
		func(w io.Writer, tmpl string) {
-			fmt.Fprintf(w, tmpl, s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived)
+			fmt.Fprintf(w, tmpl, info.Active, info.Pending, info.Scheduled, info.Retry, info.Archived)
		},
	)
	fmt.Println()
-	bold.Printf("Daily Stats %s UTC\n", s.Timestamp.UTC().Format("2006-01-02"))
+	bold.Printf("Daily Stats %s UTC\n", info.Timestamp.UTC().Format("2006-01-02"))
	printTable(
		[]string{"processed", "failed", "error rate"},
		func(w io.Writer, tmpl string) {
			var errRate string
-			if s.Processed == 0 {
+			if info.Processed == 0 {
				errRate = "N/A"
			} else {
-				errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
+				errRate = fmt.Sprintf("%.2f%%", float64(info.Failed)/float64(info.Processed)*100)
			}
-			fmt.Fprintf(w, tmpl, s.Processed, s.Failed, errRate)
+			fmt.Fprintf(w, tmpl, info.Processed, info.Failed, errRate)
		},
	)
}
@@ -179,9 +178,9 @@ func queueHistory(cmd *cobra.Command, args []string) {
inspector := createInspector()
for i, qname := range args {
	if i > 0 {
-		fmt.Printf("\n%s\n", separator)
+		fmt.Printf("\n%s\n\n", separator)
	}
-	fmt.Printf("\nQueue: %s\n\n", qname)
+	fmt.Printf("Queue: %s\n\n", qname)
	stats, err := inspector.History(qname, days)
	if err != nil {
		fmt.Printf("error: %v\n", err)
@@ -191,7 +190,7 @@ func queueHistory(cmd *cobra.Command, args []string) {
	}
}

-func printDailyStats(stats []*inspeq.DailyStats) {
+func printDailyStats(stats []*asynq.DailyStats) {
	printTable(
		[]string{"date (UTC)", "processed", "failed", "error rate"},
		func(w io.Writer, tmpl string) {
@@ -244,7 +243,7 @@ func queueRemove(cmd *cobra.Command, args []string) {
for _, qname := range args {
	err = r.RemoveQueue(qname, force)
	if err != nil {
-		if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
+		if errors.IsQueueNotEmpty(err) {
			fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq queue rm --force %s'\n", err, qname)
			continue
		}
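The error-rate display in `printQueueInfo` above guards against dividing by a zero `Processed` count. Factored out on its own, the guard looks like this (`errorRate` is a hypothetical helper name, not part of the asynq CLI):

```go
package main

import "fmt"

// errorRate renders failed/processed as a percentage, returning "N/A"
// when nothing has been processed yet to avoid division by zero.
func errorRate(processed, failed int) string {
	if processed == 0 {
		return "N/A"
	}
	return fmt.Sprintf("%.2f%%", float64(failed)/float64(processed)*100)
}

func main() {
	fmt.Println(errorRate(0, 0))   // N/A
	fmt.Println(errorRate(200, 5)) // 2.50%
}
```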

View File

@@ -11,10 +11,11 @@ import (
	"os"
	"strings"
	"text/tabwriter"
+	"unicode"
+	"unicode/utf8"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq"
-	"github.com/hibiken/asynq/inspeq"
	"github.com/hibiken/asynq/internal/base"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/spf13/cobra"
@@ -136,24 +137,25 @@ func createRDB() *rdb.RDB {
}

// createInspector creates an Inspector instance using flag values and returns it.
-func createInspector() *inspeq.Inspector {
-	var connOpt asynq.RedisConnOpt
+func createInspector() *asynq.Inspector {
+	return asynq.NewInspector(getRedisConnOpt())
+}
+
+func getRedisConnOpt() asynq.RedisConnOpt {
	if useRedisCluster {
		addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
-		connOpt = asynq.RedisClusterClientOpt{
+		return asynq.RedisClusterClientOpt{
			Addrs:     addrs,
			Password:  viper.GetString("password"),
			TLSConfig: getTLSConfig(),
		}
-	} else {
-		connOpt = asynq.RedisClientOpt{
-			Addr:      viper.GetString("uri"),
-			DB:        viper.GetInt("db"),
-			Password:  viper.GetString("password"),
-			TLSConfig: getTLSConfig(),
-		}
	}
-	return inspeq.New(connOpt)
+	return asynq.RedisClientOpt{
+		Addr:      viper.GetString("uri"),
+		DB:        viper.GetInt("db"),
+		Password:  viper.GetString("password"),
+		TLSConfig: getTLSConfig(),
+	}
}

func getTLSConfig() *tls.Config {
@@ -196,3 +198,28 @@ func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
	printRows(tw, format)
	tw.Flush()
}
// formatPayload returns a string representation of the payload if it is printable.
// Otherwise it returns a placeholder noting that the payload is not printable.
func formatPayload(payload []byte) string {
if !isPrintable(payload) {
return "non-printable bytes"
}
return string(payload)
}
func isPrintable(data []byte) bool {
if !utf8.Valid(data) {
return false
}
isAllSpace := true
for _, r := range string(data) {
if !unicode.IsPrint(r) {
return false
}
if !unicode.IsSpace(r) {
isAllSpace = false
}
}
return !isAllSpace
}
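`formatPayload` and `isPrintable` above gate what the CLI echoes: payload bytes must be valid UTF-8, every rune must be printable, and at least one rune must be non-whitespace. A self-contained copy of the same checks for experimentation (`printable` is a renamed stand-in for `isPrintable`):

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// printable reports whether data is valid UTF-8, contains only printable
// runes, and has at least one non-whitespace rune.
func printable(data []byte) bool {
	if !utf8.Valid(data) {
		return false
	}
	allSpace := true
	for _, r := range string(data) {
		if !unicode.IsPrint(r) {
			return false
		}
		if !unicode.IsSpace(r) {
			allSpace = false
		}
	}
	return !allSpace
}

func main() {
	fmt.Println(printable([]byte(`{"user_id": 42}`))) // true
	fmt.Println(printable([]byte{0xff, 0xfe}))        // false: not valid UTF-8
	fmt.Println(printable([]byte("   ")))             // false: all whitespace
}
```

Note that a payload of only whitespace is treated as non-printable so the table cell never looks empty.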

View File

@@ -35,11 +35,11 @@ The command shows the following for each server:
* Host and PID of the process in which the server is running
* Number of active workers out of worker pool
* Queue configuration
-* State of the worker server ("running" | "quiet")
+* State of the worker server ("active" | "stopped")
* Time the server was started

-A "running" server is pulling tasks from queues and processing them.
-A "quiet" server is no longer pulling new tasks from queues`,
+An "active" server is pulling tasks from queues and processing them.
+A "stopped" server is no longer pulling new tasks from queues`,
	Run: serverList,
}

View File

@@ -22,7 +22,7 @@ import (
var statsCmd = &cobra.Command{
	Use:   "stats",
	Short: "Shows current state of the tasks and queues",
-	Long: `Stats (aysnqmon stats) will show the overview of tasks and queues at that instant.
+	Long: `Stats (asynq stats) will show the overview of tasks and queues at that instant.

Specifically, the command shows the following:
* Number of tasks in each state

View File

@@ -10,7 +10,8 @@ import (
	"os"
	"time"

-	"github.com/hibiken/asynq/inspeq"
+	"github.com/fatih/color"
+	"github.com/hibiken/asynq"
	"github.com/spf13/cobra"
)
@@ -26,23 +27,29 @@ func init() {
taskCmd.AddCommand(taskCancelCmd)

+taskCmd.AddCommand(taskInspectCmd)
+taskInspectCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
+taskInspectCmd.Flags().StringP("id", "i", "", "id of the task")
+taskInspectCmd.MarkFlagRequired("queue")
+taskInspectCmd.MarkFlagRequired("id")
+
taskCmd.AddCommand(taskArchiveCmd)
taskArchiveCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
-taskArchiveCmd.Flags().StringP("key", "k", "", "key of the task")
+taskArchiveCmd.Flags().StringP("id", "i", "", "id of the task")
taskArchiveCmd.MarkFlagRequired("queue")
-taskArchiveCmd.MarkFlagRequired("key")
+taskArchiveCmd.MarkFlagRequired("id")

taskCmd.AddCommand(taskDeleteCmd)
taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
-taskDeleteCmd.Flags().StringP("key", "k", "", "key of the task")
+taskDeleteCmd.Flags().StringP("id", "i", "", "id of the task")
taskDeleteCmd.MarkFlagRequired("queue")
-taskDeleteCmd.MarkFlagRequired("key")
+taskDeleteCmd.MarkFlagRequired("id")

taskCmd.AddCommand(taskRunCmd)
taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
-taskRunCmd.Flags().StringP("key", "k", "", "key of the task")
+taskRunCmd.Flags().StringP("id", "i", "", "id of the task")
taskRunCmd.MarkFlagRequired("queue")
-taskRunCmd.MarkFlagRequired("key")
+taskRunCmd.MarkFlagRequired("id")

taskCmd.AddCommand(taskArchiveAllCmd)
taskArchiveAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
@@ -93,6 +100,13 @@ To list the tasks from the second page, run
	Run: taskList,
}

+var taskInspectCmd = &cobra.Command{
+	Use:   "inspect --queue=QUEUE --id=TASK_ID",
+	Short: "Display detailed information on the specified task",
+	Args:  cobra.NoArgs,
+	Run:   taskInspect,
+}
+
var taskCancelCmd = &cobra.Command{
	Use:   "cancel TASK_ID [TASK_ID...]",
	Short: "Cancel one or more active tasks",
@@ -101,42 +115,42 @@ var taskCancelCmd = &cobra.Command{
}

var taskArchiveCmd = &cobra.Command{
-	Use:   "archive --queue=QUEUE --key=KEY",
-	Short: "Archive a task with the given key",
+	Use:   "archive --queue=QUEUE --id=TASK_ID",
+	Short: "Archive a task with the given id",
	Args:  cobra.NoArgs,
	Run:   taskArchive,
}

var taskDeleteCmd = &cobra.Command{
-	Use:   "delete --queue=QUEUE --key=KEY",
-	Short: "Delete a task with the given key",
+	Use:   "delete --queue=QUEUE --id=TASK_ID",
+	Short: "Delete a task with the given id",
	Args:  cobra.NoArgs,
	Run:   taskDelete,
}

var taskRunCmd = &cobra.Command{
-	Use:   "run --queue=QUEUE --key=KEY",
-	Short: "Run a task with the given key",
+	Use:   "run --queue=QUEUE --id=TASK_ID",
+	Short: "Run a task with the given id",
	Args:  cobra.NoArgs,
	Run:   taskRun,
}

var taskArchiveAllCmd = &cobra.Command{
-	Use:   "archive-all --queue=QUEUE --state=STATE",
+	Use:   "archiveall --queue=QUEUE --state=STATE",
	Short: "Archive all tasks in the given state",
	Args:  cobra.NoArgs,
	Run:   taskArchiveAll,
}

var taskDeleteAllCmd = &cobra.Command{
-	Use:   "delete-all --queue=QUEUE --key=KEY",
+	Use:   "deleteall --queue=QUEUE --state=STATE",
	Short: "Delete all tasks in the given state",
	Args:  cobra.NoArgs,
	Run:   taskDeleteAll,
}

var taskRunAllCmd = &cobra.Command{
-	Use:   "run-all --queue=QUEUE --key=KEY",
+	Use:   "runall --queue=QUEUE --state=STATE",
	Short: "Run all tasks in the given state",
	Args:  cobra.NoArgs,
	Run:   taskRunAll,
@@ -183,7 +197,7 @@ func taskList(cmd *cobra.Command, args []string) {
func listActiveTasks(qname string, pageNum, pageSize int) {
	i := createInspector()
-	tasks, err := i.ListActiveTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
+	tasks, err := i.ListActiveTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
@@ -196,7 +210,7 @@ func listActiveTasks(qname string, pageNum, pageSize int) {
		[]string{"ID", "Type", "Payload"},
		func(w io.Writer, tmpl string) {
			for _, t := range tasks {
-				fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
+				fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload))
			}
		},
	)
@@ -204,7 +218,7 @@ func listActiveTasks(qname string, pageNum, pageSize int) {
func listPendingTasks(qname string, pageNum, pageSize int) {
	i := createInspector()
-	tasks, err := i.ListPendingTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
+	tasks, err := i.ListPendingTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
@@ -214,10 +228,10 @@ func listPendingTasks(qname string, pageNum, pageSize int) {
		return
	}
	printTable(
-		[]string{"Key", "Type", "Payload"},
+		[]string{"ID", "Type", "Payload"},
		func(w io.Writer, tmpl string) {
			for _, t := range tasks {
-				fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload)
+				fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload))
			}
		},
	)
@@ -225,7 +239,7 @@ func listPendingTasks(qname string, pageNum, pageSize int) {
func listScheduledTasks(qname string, pageNum, pageSize int) {
	i := createInspector()
-	tasks, err := i.ListScheduledTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
+	tasks, err := i.ListScheduledTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
@@ -235,20 +249,29 @@ func listScheduledTasks(qname string, pageNum, pageSize int) {
		return
	}
	printTable(
-		[]string{"Key", "Type", "Payload", "Process In"},
+		[]string{"ID", "Type", "Payload", "Process In"},
		func(w io.Writer, tmpl string) {
			for _, t := range tasks {
-				processIn := fmt.Sprintf("%.0f seconds",
-					t.NextProcessAt.Sub(time.Now()).Seconds())
-				fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, processIn)
+				fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatProcessAt(t.NextProcessAt))
			}
		},
	)
}
// formatProcessAt formats next process at time to human friendly string.
// If processAt time is in the past, returns "right now".
// If processAt time is in the future, returns "in xxx" where xxx is the duration from now.
func formatProcessAt(processAt time.Time) string {
d := processAt.Sub(time.Now())
if d < 0 {
return "right now"
}
return fmt.Sprintf("in %v", d.Round(time.Second))
}
func listRetryTasks(qname string, pageNum, pageSize int) {
	i := createInspector()
-	tasks, err := i.ListRetryTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
+	tasks, err := i.ListRetryTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
@@ -258,16 +281,11 @@ func listRetryTasks(qname string, pageNum, pageSize int) {
		return
	}
	printTable(
-		[]string{"Key", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry"},
+		[]string{"ID", "Type", "Payload", "Next Retry", "Last Error", "Last Failed", "Retried", "Max Retry"},
		func(w io.Writer, tmpl string) {
			for _, t := range tasks {
-				var nextRetry string
-				if d := t.NextProcessAt.Sub(time.Now()); d > 0 {
-					nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
-				} else {
-					nextRetry = "right now"
-				}
-				fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, nextRetry, t.LastError, t.Retried, t.MaxRetry)
+				fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatProcessAt(t.NextProcessAt),
+					t.LastErr, formatLastFailedAt(t.LastFailedAt), t.Retried, t.MaxRetry)
			}
		},
	)
@@ -275,7 +293,7 @@ func listRetryTasks(qname string, pageNum, pageSize int) {
func listArchivedTasks(qname string, pageNum, pageSize int) {
	i := createInspector()
-	tasks, err := i.ListArchivedTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
+	tasks, err := i.ListArchivedTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
@@ -285,19 +303,18 @@ func listArchivedTasks(qname string, pageNum, pageSize int) {
		return
	}
	printTable(
-		[]string{"Key", "Type", "Payload", "Last Failed", "Last Error"},
+		[]string{"ID", "Type", "Payload", "Last Failed", "Last Error"},
		func(w io.Writer, tmpl string) {
			for _, t := range tasks {
-				fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, t.LastFailedAt, t.LastError)
+				fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatLastFailedAt(t.LastFailedAt), t.LastErr)
			}
		})
}

func taskCancel(cmd *cobra.Command, args []string) {
-	r := createRDB()
+	i := createInspector()
	for _, id := range args {
-		err := r.PublishCancelation(id)
-		if err != nil {
+		if err := i.CancelProcessing(id); err != nil {
			fmt.Printf("error: could not send cancelation signal: %v\n", err)
			continue
		}
@@ -305,20 +322,76 @@ func taskCancel(cmd *cobra.Command, args []string) {
	}
}
func taskInspect(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
id, err := cmd.Flags().GetString("id")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
info, err := i.GetTaskInfo(qname, id)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
printTaskInfo(info)
}
func printTaskInfo(info *asynq.TaskInfo) {
bold := color.New(color.Bold)
bold.Println("Task Info")
fmt.Printf("Queue: %s\n", info.Queue)
fmt.Printf("ID: %s\n", info.ID)
fmt.Printf("Type: %s\n", info.Type)
fmt.Printf("State: %v\n", info.State)
fmt.Printf("Retried: %d/%d\n", info.Retried, info.MaxRetry)
fmt.Println()
fmt.Printf("Next process time: %s\n", formatNextProcessAt(info.NextProcessAt))
if len(info.LastErr) != 0 {
fmt.Println()
bold.Println("Last Failure")
fmt.Printf("Failed at: %s\n", formatLastFailedAt(info.LastFailedAt))
fmt.Printf("Error message: %s\n", info.LastErr)
}
}
func formatNextProcessAt(processAt time.Time) string {
if processAt.IsZero() || processAt.Unix() == 0 {
return "n/a"
}
if processAt.Before(time.Now()) {
return "now"
}
return fmt.Sprintf("%s (in %v)", processAt.Format(time.UnixDate), processAt.Sub(time.Now()).Round(time.Second))
}
func formatLastFailedAt(lastFailedAt time.Time) string {
if lastFailedAt.IsZero() || lastFailedAt.Unix() == 0 {
return ""
}
return lastFailedAt.Format(time.UnixDate)
}
func taskArchive(cmd *cobra.Command, args []string) {
	qname, err := cmd.Flags().GetString("queue")
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
-	key, err := cmd.Flags().GetString("key")
+	id, err := cmd.Flags().GetString("id")
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
	i := createInspector()
-	err = i.ArchiveTaskByKey(qname, key)
+	err = i.ArchiveTask(qname, id)
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
@@ -332,14 +405,14 @@ func taskDelete(cmd *cobra.Command, args []string) {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
-	key, err := cmd.Flags().GetString("key")
+	id, err := cmd.Flags().GetString("id")
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
	i := createInspector()
-	err = i.DeleteTaskByKey(qname, key)
+	err = i.DeleteTask(qname, id)
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
@@ -353,14 +426,14 @@ func taskRun(cmd *cobra.Command, args []string) {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
-	key, err := cmd.Flags().GetString("key")
+	id, err := cmd.Flags().GetString("id")
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
	i := createInspector()
-	err = i.RunTaskByKey(qname, key)
+	err = i.RunTask(qname, id)
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)

View File

@@ -8,8 +8,9 @@ require (
	github.com/cpuguy83/go-md2man v1.0.10 // indirect
	github.com/fatih/color v1.9.0
	github.com/go-redis/redis/v7 v7.4.0
-	github.com/google/uuid v1.1.1
-	github.com/hibiken/asynq v0.14.0
+	github.com/golang/protobuf v1.4.1 // indirect
+	github.com/google/uuid v1.2.0
+	github.com/hibiken/asynq v0.17.1
	github.com/mitchellh/go-homedir v1.1.0
	github.com/spf13/cast v1.3.1
	github.com/spf13/cobra v1.1.1

View File

@@ -25,6 +25,7 @@ github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
@@ -42,6 +43,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
@@ -68,18 +71,29 @@ github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5y
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1 h1:ZFgWrT+bLgsYPirOnRfKLYJLvssAegOj/hgyMFdJZe0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4= github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
@@ -172,6 +186,7 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -322,6 +337,7 @@ golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
@@ -350,10 +366,22 @@ google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -378,5 +406,6 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=