Mirror of https://github.com/hibiken/asynq.git (synced 2025-09-17 12:20:07 +08:00)

Compare commits (97 commits)
Commits (SHA1):

d612a8a9e4, b3ef9e91a9, 05534c6f24, f0db219f6a, 3ae0e7f528, 421dc584ff, cfd1a1dfe8, c197902dc0, e6355bf3f5, 95c90a5cb8, 6817af366a, 4bce28d677, 73f930313c, bff2a05d59, 684a7e0c98, 46b23d6495, c0ae62499f, 7744ade362, f532c95394, ff6768f9bb, d5e9f3b1bd, d02b722d8a, 99c7ebeef2, bf54621196, 27baf6de0d, 1bd0bee1e5, a9feec5967, e01c6379c8, a0df047f71, 68dd6d9a9d, 6cce31a134, f9d7af3def, b0321fb465, 7776c7ae53, 709ca79a2b, 08d8f0b37c, 385323b679, 77604af265, 4765742e8a, 68839dc9d3, 8922d2423a, b358de907e, 8ee1825e67, c8bda26bed, 8aeeb61c9d, 96c51fdc23, ea9086fd8b, e63d51da0c, cd351d49b9, 87264b66f3, 62168b8d0d, 840f7245b1, 12f4c7cf6e, 0ec3b55e6b, 4bcc5ab6aa, 456edb6b71, b835090ad8, 09cbea66f6, b9c2572203, 0bf767cf21, 1812d05d21, 4af65d5fa5, a19ad19382, 8117ce8972, d98ecdebb4, ffe9aa74b3, d2d4029aba, 76bd865ebc, 136d1c9ea9, 52e04355d3, cde3e57c6c, dd66acef1b, 30a3d9641a, 961582cba6, 430dbb298e, 675826be5f, 62f4e46b73, a500f8a534, bcfeff38ed, 12a90f6a8d, 807624e7dd, 4d65024bd7, 76486b5cb4, 1db516c53c, cb5bdf245c, 267493ccef, 5d7f1b6a80, 77ded502ab, f2284be43d, 3cadab55cb, 298a420f9f, b1d717c842, 56e5762eea, 5ec41e388b, 9c95c41651, 476812475e, 7af3981929
.gitignore (vendored): 5 changes

@@ -18,4 +18,7 @@
 /tools/asynq/asynq
 
 # Ignore asynq config file
-.asynq.*
+.asynq.*
+
+# Ignore editor config files
+.vscode
CHANGELOG.md: 62 changes

@@ -7,6 +7,68 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.18.6] - 2021-10-03
+
+### Changed
+
+- Updated `github.com/go-redis/redis` package to v8
+
+## [0.18.5] - 2021-09-01
+
+### Added
+
+- `IsFailure` config option is added to determine whether an error returned from Handler counts as a failure.
+
+## [0.18.4] - 2021-08-17
+
+### Fixed
+
+- Scheduler methods are now thread-safe. It's now safe to call `Register` and `Unregister` concurrently.
+
+## [0.18.3] - 2021-08-09
+
+### Changed
+
+- `Client.Enqueue` no longer enqueues tasks with an empty typename; an error is returned instead.
+
+## [0.18.2] - 2021-07-15
+
+### Changed
+
+- Changed the `Queue` function to no longer convert the provided queue name to lowercase. Queue names are now case-sensitive.
+- `QueueInfo.MemoryUsage` is now an approximate usage value.
+
+### Fixed
+
+- Fixed a latency issue around memory usage (see https://github.com/hibiken/asynq/issues/309).
+
+## [0.18.1] - 2021-07-04
+
+### Changed
+
+- Changed to execute task-recovering logic when the server starts up; previously it took up to a minute for the task-recovering logic to execute.
+
+### Fixed
+
+- Fixed task-recovering logic to execute every minute.
+
+## [0.18.0] - 2021-06-29
+
+### Changed
+
+- The NewTask function now takes an array of bytes as the payload.
+- Task `Type` and `Payload` should be accessed by a method call.
+- The `Server` API has changed. Renamed `Quiet` to `Stop` and `Stop` to `Shutdown`. _Note:_ As a result of this renaming, the behavior of `Stop` has changed. Please update existing code to call `Shutdown` where it used to call `Stop`.
+- The `Scheduler` API has changed. Renamed `Stop` to `Shutdown`.
+- Requires Redis v4.0+ for multiple field/value pair support.
+- `Client.Enqueue` now returns `TaskInfo`.
+- `Inspector.RunTaskByKey` is replaced with `Inspector.RunTask`.
+- `Inspector.DeleteTaskByKey` is replaced with `Inspector.DeleteTask`.
+- `Inspector.ArchiveTaskByKey` is replaced with `Inspector.ArchiveTask`.
+- The `inspeq` package is removed. All types and functions from the package are moved to the `asynq` package.
+- `WorkerInfo` field names have changed.
+- `Inspector.CancelActiveTask` is renamed to `Inspector.CancelProcessing`.
+
 ## [0.17.2] - 2021-06-06
 
 ### Fixed
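The 0.18.0 entries above change how task payloads are constructed and read. A minimal before/after sketch of that migration (the task typename and payload fields here are illustrative, not taken from this changelog):

```go
package tasks

import (
	"encoding/json"

	"github.com/hibiken/asynq"
)

// Old (pre-0.18.0) style, for contrast:
//   t := asynq.NewTask("email:welcome", map[string]interface{}{"user_id": 42})
//   userID, err := t.Payload.GetInt("user_id")

// New (0.18.0+) style: payloads are raw bytes, and Type/Payload are methods.
func NewWelcomeEmailTask(userID int) (*asynq.Task, error) {
	payload, err := json.Marshal(map[string]interface{}{"user_id": userID})
	if err != nil {
		return nil, err
	}
	// t.Type() and t.Payload() now return the typename and the raw bytes.
	return asynq.NewTask("email:welcome", payload), nil
}
```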
Makefile (new file): 7 lines

@@ -0,0 +1,7 @@
+ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
+
+proto: internal/proto/asynq.proto
+	protoc -I=$(ROOT_DIR)/internal/proto \
+		--go_out=$(ROOT_DIR)/internal/proto \
+		--go_opt=module=github.com/hibiken/asynq/internal/proto \
+		$(ROOT_DIR)/internal/proto/asynq.proto
README.md: 143 changes

@@ -12,8 +12,8 @@ Asynq is a Go library for queueing tasks and processing them asynchronously with
 
 High-level overview of how Asynq works:
 
-- Client puts task on a queue
-- Server pulls task off queues and starts a worker goroutine for each task
+- Client puts tasks on a queue
+- Server pulls tasks off queues and starts a worker goroutine for each task
 - Tasks are processed concurrently by multiple workers
 
 Task queues are used as a mechanism to distribute work across multiple machines. A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
@@ -26,11 +26,10 @@ Task queues are used as a mechanism to distribute work across multiple machines.
 
 - Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
 - Scheduling of tasks
 - Durability since tasks are written to Redis
 - [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
 - Automatic recovery of tasks in the event of a worker crash
-- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
-- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
+- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Queue-Priority#weighted-priority)
+- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Queue-Priority#strict-priority)
 - Low latency to add a task since writes are fast in Redis
 - De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
 - Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
@@ -50,7 +49,7 @@ Task queues are used as a mechanism to distribute work across multiple machines.
 
 ## Quickstart
 
-Make sure you have Go installed ([download](https://golang.org/dl/)). Version `1.13` or higher is required.
+Make sure you have Go installed ([download](https://golang.org/dl/)). Version `1.13` or higher is required.
 
 Initialize your project by creating a folder and then running `go mod init github.com/your/repo` ([learn more](https://blog.golang.org/using-go-modules)) inside the folder. Then install Asynq library with the [`go get`](https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them) command:
 
@@ -58,7 +57,7 @@ Initialize your project by creating a folder and then running `go mod init githu
 go get -u github.com/hibiken/asynq
 ```
 
-Make sure you're running a Redis server locally or from a [Docker](https://hub.docker.com/_/redis) container. Version `3.0` or higher is required.
+Make sure you're running a Redis server locally or from a [Docker](https://hub.docker.com/_/redis) container. Version `4.0` or higher is required.
 
 Next, write a package that encapsulates task creation and task handling.
 
@@ -77,19 +76,34 @@ const (
 	TypeImageResize = "image:resize"
 )
 
+type EmailDeliveryPayload struct {
+	UserID     int
+	TemplateID string
+}
+
+type ImageResizePayload struct {
+	SourceURL string
+}
+
 //----------------------------------------------
 // Write a function NewXXXTask to create a task.
 // A task consists of a type and a payload.
 //----------------------------------------------
 
-func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
-	payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
-	return asynq.NewTask(TypeEmailDelivery, payload)
+func NewEmailDeliveryTask(userID int, tmplID string) (*asynq.Task, error) {
+	payload, err := json.Marshal(EmailDeliveryPayload{UserID: userID, TemplateID: tmplID})
+	if err != nil {
+		return nil, err
+	}
+	return asynq.NewTask(TypeEmailDelivery, payload), nil
 }
 
-func NewImageResizeTask(src string) *asynq.Task {
-	payload := map[string]interface{}{"src": src}
-	return asynq.NewTask(TypeImageResize, payload)
+func NewImageResizeTask(src string) (*asynq.Task, error) {
+	payload, err := json.Marshal(ImageResizePayload{SourceURL: src})
+	if err != nil {
+		return nil, err
+	}
+	return asynq.NewTask(TypeImageResize, payload), nil
 }
 
 //---------------------------------------------------------------
@@ -101,15 +115,11 @@ func NewImageResizeTask(src string) *asynq.Task {
 //---------------------------------------------------------------
 
 func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
-	userID, err := t.Payload.GetInt("user_id")
-	if err != nil {
-		return err
+	var p EmailDeliveryPayload
+	if err := json.Unmarshal(t.Payload(), &p); err != nil {
+		return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
 	}
-	tmplID, err := t.Payload.GetString("template_id")
-	if err != nil {
-		return err
-	}
-	fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
+	log.Printf("Sending Email to User: user_id=%d, template_id=%s", p.UserID, p.TemplateID)
 	// Email delivery code ...
 	return nil
 }
@@ -119,28 +129,27 @@ type ImageProcessor struct {
 	// ... fields for struct
 }
 
-func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
-	src, err := t.Payload.GetString("src")
-	if err != nil {
-		return err
+func (processor *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
+	var p ImageResizePayload
+	if err := json.Unmarshal(t.Payload(), &p); err != nil {
+		return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
 	}
-	fmt.Printf("Resize image: src = %s\n", src)
+	log.Printf("Resizing image: src=%s", p.SourceURL)
 	// Image resizing code ...
 	return nil
 }
 
 func NewImageProcessor() *ImageProcessor {
 	// ... return an instance
 	return &ImageProcessor{}
 }
 ```
 
-In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
+In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on queues.
 
 ```go
 package main
 
 import (
-	"fmt"
 	"log"
 	"time"
 
@@ -151,21 +160,23 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-	r := asynq.RedisClientOpt{Addr: redisAddr}
-	c := asynq.NewClient(r)
-	defer c.Close()
+	client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
+	defer client.Close()
 
 	// ------------------------------------------------------
 	// Example 1: Enqueue task to be processed immediately.
 	//            Use (*Client).Enqueue method.
 	// ------------------------------------------------------
 
-	t := tasks.NewEmailDeliveryTask(42, "some:template:id")
-	res, err := c.Enqueue(t)
+	task, err := tasks.NewEmailDeliveryTask(42, "some:template:id")
 	if err != nil {
-		log.Fatal("could not enqueue task: %v", err)
+		log.Fatalf("could not create task: %v", err)
 	}
-	fmt.Printf("Enqueued Result: %+v\n", res)
+	info, err := client.Enqueue(task)
+	if err != nil {
+		log.Fatalf("could not enqueue task: %v", err)
+	}
+	log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
 
 	// ------------------------------------------------------------
@@ -173,12 +184,11 @@ func main() {
 	//             Use ProcessIn or ProcessAt option.
 	// ------------------------------------------------------------
 
-	t = tasks.NewEmailDeliveryTask(42, "other:template:id")
-	res, err = c.Enqueue(t, asynq.ProcessIn(24*time.Hour))
+	info, err = client.Enqueue(task, asynq.ProcessIn(24*time.Hour))
 	if err != nil {
-		log.Fatal("could not schedule task: %v", err)
+		log.Fatalf("could not schedule task: %v", err)
 	}
-	fmt.Printf("Enqueued Result: %+v\n", res)
+	log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
 
 	// ----------------------------------------------------------------------------
@@ -186,26 +196,28 @@ func main() {
 	//             Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
 	// ----------------------------------------------------------------------------
 
-	c.SetDefaultOptions(tasks.TypeImageResize, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))
+	client.SetDefaultOptions(tasks.TypeImageResize, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))
 
-	t = tasks.NewImageResizeTask("some/blobstore/path")
-	res, err = c.Enqueue(t)
+	task, err = tasks.NewImageResizeTask("https://example.com/myassets/image.jpg")
 	if err != nil {
-		log.Fatal("could not enqueue task: %v", err)
+		log.Fatalf("could not create task: %v", err)
 	}
-	fmt.Printf("Enqueued Result: %+v\n", res)
+	info, err = client.Enqueue(task)
+	if err != nil {
+		log.Fatalf("could not enqueue task: %v", err)
+	}
+	log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 
 	// ---------------------------------------------------------------------------
 	// Example 4: Pass options to tune task processing behavior at enqueue time.
-	//            Options passed at enqueue time override default ones, if any.
+	//            Options passed at enqueue time override default ones.
 	// ---------------------------------------------------------------------------
 
-	t = tasks.NewImageResizeTask("some/blobstore/path")
-	res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
+	info, err = client.Enqueue(task, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
 	if err != nil {
-		log.Fatal("could not enqueue task: %v", err)
+		log.Fatalf("could not enqueue task: %v", err)
 	}
-	fmt.Printf("Enqueued Result: %+v\n", res)
+	log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
 }
 ```
 
@@ -226,19 +238,20 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-	r := asynq.RedisClientOpt{Addr: redisAddr}
-
-	srv := asynq.NewServer(r, asynq.Config{
-		// Specify how many concurrent workers to use
-		Concurrency: 10,
-		// Optionally specify multiple queues with different priority.
-		Queues: map[string]int{
-			"critical": 6,
-			"default":  3,
-			"low":      1,
-		},
-		// See the godoc for other configuration options
-	})
+	srv := asynq.NewServer(
+		asynq.RedisClientOpt{Addr: redisAddr},
+		asynq.Config{
+			// Specify how many concurrent workers to use
+			Concurrency: 10,
+			// Optionally specify multiple queues with different priority.
+			Queues: map[string]int{
+				"critical": 6,
+				"default":  3,
+				"low":      1,
+			},
+			// See the godoc for other configuration options
+		},
+	)
 
 	// mux maps a type to a handler
 	mux := asynq.NewServeMux()
@@ -262,11 +275,11 @@ To learn more about `asynq` features and APIs, see the package [godoc](https://g
 
 Here are a few screenshots of the Web UI:
 
-**Queues view**
+**Queues view**
 
 
 
-**Tasks view**
+**Tasks view**
 
 
 
@@ -274,7 +287,7 @@ Here are a few screenshots of the Web UI:
 
 
 
-For details on how to use the tool, refer to the tool's [README](https://github.com/hibiken/asynqmon#readme).
+For details on how to use the tool, refer to the tool's [README](https://github.com/hibiken/asynqmon#readme).
 
 ## Command Line Tool
 
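One change noted in the changelog but not shown in the README diff is the `IsFailure` config option added in 0.18.5, which determines whether an error returned from a Handler counts as a failure. A short sketch of how it could be wired into the server setup above, assuming it is a `func(error) bool` on `Config`; the sentinel error is hypothetical:

```go
package main

import (
	"errors"
	"log"

	"github.com/hibiken/asynq"
)

// errTransient marks errors that should be retried without counting as a
// failure in queue stats (hypothetical sentinel for this sketch).
var errTransient = errors.New("transient error")

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "127.0.0.1:6379"},
		asynq.Config{
			Concurrency: 10,
			// IsFailure (added in 0.18.5) decides whether an error returned
			// from a Handler counts as a failure.
			IsFailure: func(err error) bool { return !errors.Is(err, errTransient) },
		},
	)
	mux := asynq.NewServeMux()
	// ... register handlers on mux, as in the README example above.
	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```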
asynq.go: 141 changes

@@ -12,28 +12,149 @@ import (
 	"strings"
 	"time"
 
-	"github.com/go-redis/redis/v7"
+	"github.com/go-redis/redis/v8"
 	"github.com/hibiken/asynq/internal/base"
 )
 
 // Task represents a unit of work to be performed.
 type Task struct {
-	// Type indicates the type of task to be performed.
-	Type string
+	// typename indicates the type of task to be performed.
+	typename string
 
-	// Payload holds data needed to perform the task.
-	Payload Payload
+	// payload holds data needed to perform the task.
+	payload []byte
 }
 
+func (t *Task) Type() string    { return t.typename }
+func (t *Task) Payload() []byte { return t.payload }
+
 // NewTask returns a new Task given a type name and payload data.
-//
-// The payload values must be serializable.
-func NewTask(typename string, payload map[string]interface{}) *Task {
+func NewTask(typename string, payload []byte) *Task {
 	return &Task{
-		Type:    typename,
-		Payload: Payload{payload},
+		typename: typename,
+		payload:  payload,
 	}
 }
 
+// A TaskInfo describes a task and its metadata.
+type TaskInfo struct {
+	// ID is the identifier of the task.
+	ID string
+
+	// Queue is the name of the queue in which the task belongs.
+	Queue string
+
+	// Type is the type name of the task.
+	Type string
+
+	// Payload is the payload data of the task.
+	Payload []byte
+
+	// State indicates the task state.
+	State TaskState
+
+	// MaxRetry is the maximum number of times the task can be retried.
+	MaxRetry int
+
+	// Retried is the number of times the task has been retried so far.
+	Retried int
+
+	// LastErr is the error message from the last failure.
+	LastErr string
+
+	// LastFailedAt is the time of the last failure, if any.
+	// If the task has no failures, LastFailedAt is zero time (i.e. time.Time{}).
+	LastFailedAt time.Time
+
+	// Timeout is the duration the task can be processed by Handler before being retried,
+	// zero if not specified.
+	Timeout time.Duration
+
+	// Deadline is the deadline for the task, zero value if not specified.
+	Deadline time.Time
+
+	// NextProcessAt is the time the task is scheduled to be processed,
+	// zero if not applicable.
+	NextProcessAt time.Time
+}
+
+func newTaskInfo(msg *base.TaskMessage, state base.TaskState, nextProcessAt time.Time) *TaskInfo {
+	info := TaskInfo{
+		ID:            msg.ID.String(),
+		Queue:         msg.Queue,
+		Type:          msg.Type,
+		Payload:       msg.Payload, // Do we need to make a copy?
+		MaxRetry:      msg.Retry,
+		Retried:       msg.Retried,
+		LastErr:       msg.ErrorMsg,
+		Timeout:       time.Duration(msg.Timeout) * time.Second,
+		NextProcessAt: nextProcessAt,
+	}
+	if msg.LastFailedAt == 0 {
+		info.LastFailedAt = time.Time{}
+	} else {
+		info.LastFailedAt = time.Unix(msg.LastFailedAt, 0)
+	}
+
+	if msg.Deadline == 0 {
+		info.Deadline = time.Time{}
+	} else {
+		info.Deadline = time.Unix(msg.Deadline, 0)
+	}
+
+	switch state {
+	case base.TaskStateActive:
+		info.State = TaskStateActive
+	case base.TaskStatePending:
+		info.State = TaskStatePending
+	case base.TaskStateScheduled:
+		info.State = TaskStateScheduled
+	case base.TaskStateRetry:
+		info.State = TaskStateRetry
+	case base.TaskStateArchived:
+		info.State = TaskStateArchived
+	default:
+		panic(fmt.Sprintf("internal error: unknown state: %d", state))
+	}
+	return &info
+}
+
+// TaskState denotes the state of a task.
+type TaskState int
+
+const (
+	// Indicates that the task is currently being processed by Handler.
+	TaskStateActive TaskState = iota + 1
+
+	// Indicates that the task is ready to be processed by Handler.
+	TaskStatePending
+
+	// Indicates that the task is scheduled to be processed some time in the future.
+	TaskStateScheduled
+
+	// Indicates that the task has previously failed and is scheduled to be processed some time in the future.
+	TaskStateRetry
+
+	// Indicates that the task is archived and stored for inspection purposes.
+	TaskStateArchived
+)
+
+func (s TaskState) String() string {
+	switch s {
+	case TaskStateActive:
+		return "active"
+	case TaskStatePending:
+		return "pending"
+	case TaskStateScheduled:
+		return "scheduled"
+	case TaskStateRetry:
+		return "retry"
+	case TaskStateArchived:
+		return "archived"
+	}
+	panic("asynq: unknown task state")
+}
+
 // RedisConnOpt is a discriminated union of types that represent Redis connection configuration option.
 //
 // RedisConnOpt represents a sum of following types:
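Since `Client.Enqueue` now returns the `TaskInfo` defined above, callers can inspect the enqueued task's state and scheduling metadata directly. A minimal sketch (the typename and payload are placeholders):

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	info, err := client.Enqueue(asynq.NewTask("example:task", []byte(`{"n": 1}`)))
	if err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	// State is TaskStatePending here; with ProcessAt/ProcessIn it would be
	// TaskStateScheduled, and NextProcessAt reports when the task runs next.
	log.Printf("id=%s state=%s next=%v retried=%d/%d",
		info.ID, info.State, info.NextProcessAt, info.Retried, info.MaxRetry)
}
```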
@@ -10,7 +10,7 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/go-redis/redis/v7"
+	"github.com/go-redis/redis/v8"
 	"github.com/google/go-cmp/cmp"
 	h "github.com/hibiken/asynq/internal/asynqtest"
 	"github.com/hibiken/asynq/internal/log"
@@ -85,7 +85,7 @@ func getRedisConnOpt(tb testing.TB) RedisConnOpt {
 var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
 	out := append([]*Task(nil), in...) // Copy input to avoid mutating it
 	sort.Slice(out, func(i, j int) bool {
-		return out[i].Type < out[j].Type
+		return out[i].Type() < out[j].Type()
 	})
 	return out
 })
@@ -6,12 +6,24 @@ package asynq
 
 import (
 	"context"
+	"encoding/json"
 	"fmt"
 	"sync"
 	"testing"
 	"time"
+
+	h "github.com/hibiken/asynq/internal/asynqtest"
 )
 
+// Creates a new task of type "task<n>" with payload {"data": n}.
+func makeTask(n int) *Task {
+	b, err := json.Marshal(map[string]int{"data": n})
+	if err != nil {
+		panic(err)
+	}
+	return NewTask(fmt.Sprintf("task%d", n), b)
+}
+
 // Simple E2E Benchmark testing with no scheduled tasks and retries.
 func BenchmarkEndToEndSimple(b *testing.B) {
 	const count = 100000
@@ -29,8 +41,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
 	})
 	// Create a bunch of tasks
 	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
@@ -70,14 +81,12 @@ func BenchmarkEndToEnd(b *testing.B) {
 	})
 	// Create a bunch of tasks
 	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
 	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
+		if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
@@ -86,13 +95,18 @@ func BenchmarkEndToEnd(b *testing.B) {
 	var wg sync.WaitGroup
 	wg.Add(count * 2)
 	handler := func(ctx context.Context, t *Task) error {
-		n, err := t.Payload.GetInt("data")
-		if err != nil {
+		var p map[string]int
+		if err := json.Unmarshal(t.Payload(), &p); err != nil {
 			b.Logf("internal error: %v", err)
 		}
+		n, ok := p["data"]
+		if !ok {
+			n = 1
+			b.Logf("internal error: could not get data from payload")
+		}
 		retried, ok := GetRetryCount(ctx)
 		if !ok {
-			b.Logf("internal error: %v", err)
+			b.Logf("internal error: could not get retry count from context")
 		}
 		// Fail 1% of tasks for the first attempt.
 		if retried == 0 && n%100 == 0 {
@@ -136,20 +150,17 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
 	})
 	// Create a bunch of tasks
 	for i := 0; i < highCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, Queue("high")); err != nil {
+		if _, err := client.Enqueue(makeTask(i), Queue("high")); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
 	for i := 0; i < defaultCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
 	for i := 0; i < lowCount; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, Queue("low")); err != nil {
+		if _, err := client.Enqueue(makeTask(i), Queue("low")); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
@@ -190,15 +201,13 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
 	})
 	// Enqueue 10,000 tasks.
 	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t); err != nil {
+		if _, err := client.Enqueue(makeTask(i)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
 	// Schedule 10,000 tasks.
 	for i := 0; i < count; i++ {
-		t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
-		if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
+		if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
 			b.Fatalf("could not enqueue a task: %v", err)
 		}
 	}
@@ -213,7 +222,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
 	b.Log("Starting enqueueing")
 	enqueued := 0
 	for enqueued < 100000 {
-		t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
+		t := NewTask(fmt.Sprintf("enqueued%d", enqueued), h.JSON(map[string]interface{}{"data": enqueued}))
 		if _, err := client.Enqueue(t); err != nil {
 			b.Logf("could not enqueue task %d: %v", enqueued, err)
 			continue
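The benchmarks above funnel payload construction through `makeTask` and the `h.JSON` helper from `internal/asynqtest`. A marshal-or-panic helper in the same spirit (a sketch of the pattern, not the actual internal implementation) looks like this:

```go
package asynqtest

import "encoding/json"

// JSON marshals data and panics on error, keeping benchmark and test
// setup code to a single expression.
func JSON(data map[string]interface{}) []byte {
	b, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	return b
}
```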
client.go: 83 changes

@@ -5,15 +5,15 @@
 package asynq
 
 import (
-	"errors"
 	"fmt"
 	"strings"
 	"sync"
 	"time"
 
-	"github.com/go-redis/redis/v7"
+	"github.com/go-redis/redis/v8"
 	"github.com/google/uuid"
 	"github.com/hibiken/asynq/internal/base"
+	"github.com/hibiken/asynq/internal/errors"
 	"github.com/hibiken/asynq/internal/rdb"
 )
 
@@ -93,10 +93,8 @@ func (n retryOption) Type() OptionType { return MaxRetryOpt }
 func (n retryOption) Value() interface{} { return int(n) }
 
 // Queue returns an option to specify the queue to enqueue the task into.
-//
-// Queue name is case-insensitive and the lowercased version is used.
 func Queue(qname string) Option {
-	return queueOption(strings.ToLower(qname))
+	return queueOption(qname)
 }
 
 func (qname queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(qname)) }
@@ -176,7 +174,6 @@ func (d processInOption) String() string { return fmt.Sprintf("ProcessIn(%v)
 func (d processInOption) Type() OptionType { return ProcessInOpt }
 func (d processInOption) Value() interface{} { return time.Duration(d) }
 
-
 // ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
 //
 // ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
@@ -208,11 +205,11 @@ func composeOptions(opts ...Option) (option, error) {
 		case retryOption:
 			res.retry = int(opt)
 		case queueOption:
-			trimmed := strings.TrimSpace(string(opt))
-			if err := base.ValidateQueueName(trimmed); err != nil {
+			qname := string(opt)
+			if err := base.ValidateQueueName(qname); err != nil {
 				return option{}, err
 			}
-			res.queue = trimmed
+			res.queue = qname
 		case timeoutOption:
 			res.timeout = time.Duration(opt)
 		case deadlineOption:
@@ -255,41 +252,6 @@ func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
 	c.opts[taskType] = opts
 }
 
-// A Result holds enqueued task's metadata.
-type Result struct {
-	// ID is a unique identifier for the task.
-	ID string
-
-	// EnqueuedAt is the time the task was enqueued in UTC.
-	EnqueuedAt time.Time
-
-	// ProcessAt indicates when the task should be processed.
-	ProcessAt time.Time
-
-	// Retry is the maximum number of retry for the task.
-	Retry int
-
-	// Queue is a name of the queue the task is enqueued to.
-	Queue string
-
-	// Timeout is the timeout value for the task.
-	// Counting for timeout starts when a worker starts processing the task.
-	// If task processing doesn't complete within the timeout, the task will be retried.
-	// The value zero means no timeout.
-	//
-	// If deadline is set, min(now+timeout, deadline) is used, where the now is the time when
-	// a worker starts processing the task.
-	Timeout time.Duration
-
-	// Deadline is the deadline value for the task.
-	// If task processing doesn't complete before the deadline, the task will be retried.
-	// The value time.Unix(0, 0) means no deadline.
-	//
-	// If timeout is set, min(now+timeout, deadline) is used, where the now is the time when
-	// a worker starts processing the task.
-	Deadline time.Time
-}
-
 // Close closes the connection with redis.
 func (c *Client) Close() error {
 	return c.rdb.Close()
@@ -297,15 +259,19 @@ func (c *Client) Close() error {
 
 // Enqueue enqueues the given task to be processed asynchronously.
 //
-// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
+// Enqueue returns TaskInfo and a nil error if the task is enqueued successfully, otherwise returns a non-nil error.
 //
 // The argument opts specifies the behavior of task processing.
 // If there are conflicting Option values the last one overrides others.
 // By default, max retry is set to 25 and timeout is set to 30 minutes.
-// If no ProcessAt or ProcessIn options are passed, the task will be processed immediately.
-func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
+//
+// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
+func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
+	if strings.TrimSpace(task.Type()) == "" {
+		return nil, fmt.Errorf("task typename cannot be empty")
+	}
 	c.mu.Lock()
-	if defaults, ok := c.opts[task.Type]; ok {
+	if defaults, ok := c.opts[task.Type()]; ok {
 		opts = append(defaults, opts...)
 	}
 	c.mu.Unlock()
@@ -327,12 +293,12 @@ func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
 	}
 	var uniqueKey string
 	if opt.uniqueTTL > 0 {
-		uniqueKey = base.UniqueKey(opt.queue, task.Type, task.Payload.data)
+		uniqueKey = base.UniqueKey(opt.queue, task.Type(), task.Payload())
 	}
 	msg := &base.TaskMessage{
 		ID:        uuid.New(),
-		Type:      task.Type,
-		Payload:   task.Payload.data,
+		Type:      task.Type(),
+		Payload:   task.Payload(),
 		Queue:     opt.queue,
 		Retry:     opt.retry,
 		Deadline:  deadline.Unix(),
@@ -340,27 +306,22 @@ func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
 		UniqueKey: uniqueKey,
 	}
 	now := time.Now()
+	var state base.TaskState
 	if opt.processAt.Before(now) || opt.processAt.Equal(now) {
 		opt.processAt = now
 		err = c.enqueue(msg, opt.uniqueTTL)
+		state = base.TaskStatePending
 	} else {
 		err = c.schedule(msg, opt.processAt, opt.uniqueTTL)
+		state = base.TaskStateScheduled
 	}
 	switch {
-	case err == rdb.ErrDuplicateTask:
+	case errors.Is(err, errors.ErrDuplicateTask):
 		return nil, fmt.Errorf("%w", ErrDuplicateTask)
 	case err != nil:
 		return nil, err
 	}
-	return &Result{
-		ID:         msg.ID.String(),
-		EnqueuedAt: time.Now().UTC(),
-		ProcessAt:  opt.processAt,
-		Queue:      msg.Queue,
-		Retry:      msg.Retry,
-		Timeout:    timeout,
-		Deadline:   deadline,
-	}, nil
+	return newTaskInfo(msg, state, opt.processAt), nil
 }
 
 func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
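The `ErrDuplicateTask` comment above notes that the error only applies to tasks enqueued with a Unique option, and `Enqueue` wraps it with `%w`, so `errors.Is` can detect it. A small sketch of that path (task contents are placeholders):

```go
package main

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	task := asynq.NewTask("email:digest", []byte(`{"user_id": 123}`))

	// First enqueue succeeds and holds a uniqueness lock for an hour.
	if _, err := client.Enqueue(task, asynq.Unique(time.Hour)); err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	// A second task with the same type, payload, and queue within the TTL
	// is rejected with a wrapped ErrDuplicateTask.
	_, err := client.Enqueue(task, asynq.Unique(time.Hour))
	if errors.Is(err, asynq.ErrDuplicateTask) {
		log.Print("skipped duplicate task")
	}
}
```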
client_test.go: 403 changes

@@ -5,6 +5,7 @@
 package asynq
 
 import (
+	"context"
 	"errors"
 	"testing"
 	"time"
@@ -20,7 +21,7 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 	client := NewClient(getRedisConnOpt(t))
 	defer client.Close()
 
-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
 
 	var (
 		now = time.Now()
@@ -32,7 +33,7 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 		task          *Task
 		processAt     time.Time // value for ProcessAt option
 		opts          []Option  // other options
-		wantRes       *Result
+		wantInfo      *TaskInfo
 		wantPending   map[string][]*base.TaskMessage
 		wantScheduled map[string][]base.Z
 	}{
@@ -41,19 +42,24 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 			task:      task,
 			processAt: now,
 			opts:      []Option{},
-			wantRes: &Result{
-				EnqueuedAt: now.UTC(),
-				ProcessAt:  now,
-				Queue:      "default",
-				Retry:      defaultMaxRetry,
-				Timeout:    defaultTimeout,
-				Deadline:   noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "default",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -70,13 +76,18 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 			task:      task,
 			processAt: oneHourLater,
 			opts:      []Option{},
-			wantRes: &Result{
-				EnqueuedAt: now.UTC(),
-				ProcessAt:  oneHourLater,
-				Queue:      "default",
-				Retry:      defaultMaxRetry,
-				Timeout:    defaultTimeout,
-				Deadline:   noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStateScheduled,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: oneHourLater,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {},
@@ -85,8 +96,8 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 				"default": {
 					{
 						Message: &base.TaskMessage{
-							Type:     task.Type,
-							Payload:  task.Payload.data,
+							Type:     task.Type(),
+							Payload:  task.Payload(),
 							Retry:    defaultMaxRetry,
 							Queue:    "default",
 							Timeout:  int64(defaultTimeout.Seconds()),
@@ -103,24 +114,24 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
 		h.FlushDB(t, r) // clean up db before each test case.
 
 		opts := append(tc.opts, ProcessAt(tc.processAt))
-		gotRes, err := client.Enqueue(tc.task, opts...)
+		gotInfo, err := client.Enqueue(tc.task, opts...)
 		if err != nil {
 			t.Error(err)
 			continue
 		}
 		cmpOptions := []cmp.Option{
-			cmpopts.IgnoreFields(Result{}, "ID"),
+			cmpopts.IgnoreFields(TaskInfo{}, "ID"),
 			cmpopts.EquateApproxTime(500 * time.Millisecond),
 		}
-		if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
+		if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
 			t.Errorf("%s;\nEnqueue(task, ProcessAt(%v)) returned %v, want %v; (-want,+got)\n%s",
-				tc.desc, tc.processAt, gotRes, tc.wantRes, diff)
+				tc.desc, tc.processAt, gotInfo, tc.wantInfo, diff)
 		}
 
 		for qname, want := range tc.wantPending {
 			gotPending := h.GetPendingMessages(t, r, qname)
 			if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
-				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
+				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
			}
 		}
 		for qname, want := range tc.wantScheduled {
@@ -137,14 +148,14 @@ func TestClientEnqueue(t *testing.T) {
 	client := NewClient(getRedisConnOpt(t))
 	defer client.Close()
 
-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
 	now := time.Now()
 
 	tests := []struct {
 		desc        string
 		task        *Task
 		opts        []Option
-		wantRes     *Result
+		wantInfo    *TaskInfo
 		wantPending map[string][]*base.TaskMessage
 	}{
 		{
@@ -153,18 +164,24 @@ func TestClientEnqueue(t *testing.T) {
 			opts: []Option{
 				MaxRetry(3),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     3,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      3,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    3,
 						Queue:    "default",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -179,18 +196,24 @@ func TestClientEnqueue(t *testing.T) {
 			opts: []Option{
 				MaxRetry(-2),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     0,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      0, // Retry count should be set to zero
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    0, // Retry count should be set to zero
 						Queue:    "default",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -206,18 +229,24 @@ func TestClientEnqueue(t *testing.T) {
 				MaxRetry(2),
 				MaxRetry(10),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     10,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      10, // Last option takes precedence
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    10, // Last option takes precedence
 						Queue:    "default",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -232,18 +261,24 @@ func TestClientEnqueue(t *testing.T) {
 			opts: []Option{
 				Queue("custom"),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "custom",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "custom",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"custom": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "custom",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -253,25 +288,31 @@ func TestClientEnqueue(t *testing.T) {
 			},
 		},
 		{
-			desc: "Queue option should be case-insensitive",
+			desc: "Queue option should be case sensitive",
 			task: task,
 			opts: []Option{
-				Queue("HIGH"),
+				Queue("MyQueue"),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "high",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "MyQueue",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
-				"high": {
+				"MyQueue": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
-						Queue:    "high",
+						Queue:    "MyQueue",
 						Timeout:  int64(defaultTimeout.Seconds()),
 						Deadline: noDeadline.Unix(),
 					},
@@ -284,18 +325,24 @@ func TestClientEnqueue(t *testing.T) {
 			opts: []Option{
 				Timeout(20 * time.Second),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   20 * time.Second,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       20 * time.Second,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "default",
 						Timeout:  20,
@@ -310,18 +357,24 @@ func TestClientEnqueue(t *testing.T) {
 			opts: []Option{
 				Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   noTimeout,
-				Deadline:  time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       noTimeout,
+				Deadline:      time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "default",
 						Timeout:  int64(noTimeout.Seconds()),
@@ -337,18 +390,24 @@ func TestClientEnqueue(t *testing.T) {
 				Timeout(20 * time.Second),
 				Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
 			},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   20 * time.Second,
-				Deadline:  time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       20 * time.Second,
+				Deadline:      time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "default",
 						Timeout:  20,
@@ -362,24 +421,24 @@ func TestClientEnqueue(t *testing.T) {
 	for _, tc := range tests {
 		h.FlushDB(t, r) // clean up db before each test case.
 
-		gotRes, err := client.Enqueue(tc.task, tc.opts...)
+		gotInfo, err := client.Enqueue(tc.task, tc.opts...)
 		if err != nil {
 			t.Error(err)
 			continue
 		}
 		cmpOptions := []cmp.Option{
-			cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
+			cmpopts.IgnoreFields(TaskInfo{}, "ID"),
 			cmpopts.EquateApproxTime(500 * time.Millisecond),
 		}
-		if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
+		if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
 			t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
-				tc.desc, gotRes, tc.wantRes, diff)
+				tc.desc, gotInfo, tc.wantInfo, diff)
 		}
 
 		for qname, want := range tc.wantPending {
 			got := h.GetPendingMessages(t, r, qname)
 			if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
-				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
+				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
			}
 		}
 	}
@@ -390,7 +449,7 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 	client := NewClient(getRedisConnOpt(t))
 	defer client.Close()
 
-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
 	now := time.Now()
 
 	tests := []struct {
@@ -398,7 +457,7 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 		task          *Task
 		delay         time.Duration // value for ProcessIn option
 		opts          []Option      // other options
-		wantRes       *Result
+		wantInfo      *TaskInfo
 		wantPending   map[string][]*base.TaskMessage
 		wantScheduled map[string][]base.Z
 	}{
@@ -407,12 +466,18 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 			task:  task,
 			delay: 1 * time.Hour,
 			opts:  []Option{},
-			wantRes: &Result{
-				ProcessAt: now.Add(1 * time.Hour),
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStateScheduled,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: time.Now().Add(1 * time.Hour),
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {},
@@ -421,8 +486,8 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 				"default": {
 					{
 						Message: &base.TaskMessage{
-							Type:     task.Type,
-							Payload:  task.Payload.data,
+							Type:     task.Type(),
+							Payload:  task.Payload(),
 							Retry:    defaultMaxRetry,
 							Queue:    "default",
 							Timeout:  int64(defaultTimeout.Seconds()),
@@ -438,18 +503,24 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 			task:  task,
 			delay: 0,
 			opts:  []Option{},
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "default",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "default",
+				Type:          task.Type(),
+				Payload:       task.Payload(),
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			wantPending: map[string][]*base.TaskMessage{
 				"default": {
 					{
-						Type:     task.Type,
-						Payload:  task.Payload.data,
+						Type:     task.Type(),
+						Payload:  task.Payload(),
 						Retry:    defaultMaxRetry,
 						Queue:    "default",
 						Timeout:  int64(defaultTimeout.Seconds()),
@@ -467,24 +538,24 @@ func TestClientEnqueueWithProcessInOption(t *testing.T) {
 		h.FlushDB(t, r) // clean up db before each test case.
 
 		opts := append(tc.opts, ProcessIn(tc.delay))
-		gotRes, err := client.Enqueue(tc.task, opts...)
+		gotInfo, err := client.Enqueue(tc.task, opts...)
 		if err != nil {
 			t.Error(err)
 			continue
 		}
 		cmpOptions := []cmp.Option{
-			cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
+			cmpopts.IgnoreFields(TaskInfo{}, "ID"),
 			cmpopts.EquateApproxTime(500 * time.Millisecond),
 		}
-		if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
+		if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
 			t.Errorf("%s;\nEnqueue(task, ProcessIn(%v)) returned %v, want %v; (-want,+got)\n%s",
-				tc.desc, tc.delay, gotRes, tc.wantRes, diff)
+				tc.desc, tc.delay, gotInfo, tc.wantInfo, diff)
 		}
 
 		for qname, want := range tc.wantPending {
 			gotPending := h.GetPendingMessages(t, r, qname)
 			if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
-				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
+				t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
			}
 		}
 		for qname, want := range tc.wantScheduled {
@@ -501,7 +572,7 @@ func TestClientEnqueueError(t *testing.T) {
 	client := NewClient(getRedisConnOpt(t))
 	defer client.Close()
 
-	task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
+	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
 
 	tests := []struct {
 		desc string
@@ -515,6 +586,16 @@ func TestClientEnqueueError(t *testing.T) {
 				Queue(""),
 			},
 		},
+		{
+			desc: "With empty task typename",
+			task: NewTask("", h.JSON(map[string]interface{}{})),
+			opts: []Option{},
+		},
+		{
+			desc: "With blank task typename",
+			task: NewTask(" ", h.JSON(map[string]interface{}{})),
+			opts: []Option{},
+		},
 	}
 
 	for _, tc := range tests {
@@ -537,7 +618,7 @@ func TestClientDefaultOptions(t *testing.T) {
 		defaultOpts []Option // options set at the client level.
 		opts        []Option // options used at enqueue time.
 		task        *Task
-		wantRes     *Result
+		wantInfo    *TaskInfo
 		queue       string // queue that the message should go into.
 		want        *base.TaskMessage
 	}{
@@ -546,12 +627,18 @@ func TestClientDefaultOptions(t *testing.T) {
 			defaultOpts: []Option{Queue("feed")},
 			opts:        []Option{},
 			task:        NewTask("feed:import", nil),
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "feed",
-				Retry:     defaultMaxRetry,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "feed",
+				Type:          "feed:import",
+				Payload:       nil,
+				State:         TaskStatePending,
+				MaxRetry:      defaultMaxRetry,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			queue: "feed",
 			want: &base.TaskMessage{
@@ -568,12 +655,18 @@ func TestClientDefaultOptions(t *testing.T) {
 			defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
 			opts:        []Option{},
 			task:        NewTask("feed:import", nil),
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "feed",
-				Retry:     5,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "feed",
+				Type:          "feed:import",
+				Payload:       nil,
+				State:         TaskStatePending,
+				MaxRetry:      5,
+				Retried:       0,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			queue: "feed",
 			want: &base.TaskMessage{
@@ -590,12 +683,17 @@ func TestClientDefaultOptions(t *testing.T) {
 			defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
 			opts:        []Option{Queue("critical")},
 			task:        NewTask("feed:import", nil),
-			wantRes: &Result{
-				ProcessAt: now,
-				Queue:     "critical",
-				Retry:     5,
-				Timeout:   defaultTimeout,
-				Deadline:  noDeadline,
+			wantInfo: &TaskInfo{
+				Queue:         "critical",
+				Type:          "feed:import",
+				Payload:       nil,
+				State:         TaskStatePending,
+				MaxRetry:      5,
+				LastErr:       "",
+				LastFailedAt:  time.Time{},
+				Timeout:       defaultTimeout,
+				Deadline:      time.Time{},
+				NextProcessAt: now,
 			},
 			queue: "critical",
 			want: &base.TaskMessage{
@@ -613,18 +711,18 @@ func TestClientDefaultOptions(t *testing.T) {
 		h.FlushDB(t, r)
 		c := NewClient(getRedisConnOpt(t))
 		defer c.Close()
-		c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
-		gotRes, err := c.Enqueue(tc.task, tc.opts...)
+		c.SetDefaultOptions(tc.task.Type(), tc.defaultOpts...)
+		gotInfo, err := c.Enqueue(tc.task, tc.opts...)
 		if err != nil {
 			t.Fatal(err)
 		}
 		cmpOptions := []cmp.Option{
-			cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
+			cmpopts.IgnoreFields(TaskInfo{}, "ID"),
 			cmpopts.EquateApproxTime(500 * time.Millisecond),
 		}
-		if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
+		if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
 			t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
-				tc.desc, gotRes, tc.wantRes, diff)
+				tc.desc, gotInfo, tc.wantInfo, diff)
 		}
 		pending := h.GetPendingMessages(t, r, tc.queue)
 		if len(pending) != 1 {
@@ -650,7 +748,7 @@ func TestClientEnqueueUnique(t *testing.T) {
 		ttl  time.Duration
 	}{
 		{
-			NewTask("email", map[string]interface{}{"user_id": 123}),
+			NewTask("email", h.JSON(map[string]interface{}{"user_id": 123})),
 			time.Hour,
 		},
 	}
@@ -664,7 +762,7 @@ func TestClientEnqueueUnique(t *testing.T) {
 			t.Fatal(err)
 		}
 
-		gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
+		gotTTL := r.TTL(context.Background(), base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
 		if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
 			t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
 			continue
@@ -709,7 +807,7 @@ func TestClientEnqueueUniqueWithProcessInOption(t *testing.T) {
 			t.Fatal(err)
 		}
 
-		gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
+		gotTTL := r.TTL(context.Background(), base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
 		wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
|
||||
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
|
||||
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
|
||||
@@ -755,7 +853,7 @@ func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
|
||||
gotTTL := r.TTL(context.Background(), base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
|
||||
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
|
||||
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
|
||||
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
|
||||
@@ -774,4 +872,3 @@ func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
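Taken together, the client_test.go changes above track the new enqueue API: payloads are raw bytes and Enqueue returns a *TaskInfo instead of a *Result. A minimal sketch of calling that API outside the test suite (the h.JSON helper is test-internal; plain json.Marshal stands in for it here, and the Redis address is illustrative):

	// Sketch only: assumes the post-change API (NewTask with a []byte payload,
	// Enqueue returning *TaskInfo) and a Redis server on localhost.
	payload, err := json.Marshal(map[string]interface{}{"user_id": 123})
	if err != nil {
		log.Fatal(err)
	}
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()
	info, err := client.Enqueue(asynq.NewTask("email", payload), asynq.ProcessIn(5*time.Minute))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("enqueued: id=%s queue=%s state=%v", info.ID, info.Queue, info.State)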
doc.go (27 changes)
@@ -11,7 +11,7 @@ specify the connection using one of RedisConnOpt types.
redisConnOpt = asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
Password: "xxxxx",
DB: 3,
DB: 2,
}

The Client is used to enqueue a task.

@@ -20,15 +20,19 @@ The Client is used to enqueue a task.
client := asynq.NewClient(redisConnOpt)

// Task is created with two parameters: its type and payload.
t := asynq.NewTask(
"send_email",
map[string]interface{}{"user_id": 42})
// Payload data is simply an array of bytes. It can be encoded in JSON, Protocol Buffer, Gob, etc.
b, err := json.Marshal(ExamplePayload{UserID: 42})
if err != nil {
log.Fatal(err)
}

task := asynq.NewTask("example", b)

// Enqueue the task to be processed immediately.
res, err := client.Enqueue(t)
info, err := client.Enqueue(task)

// Schedule the task to be processed after one minute.
res, err = client.Enqueue(t, asynq.ProcessIn(1*time.Minute))
info, err = client.Enqueue(t, asynq.ProcessIn(1*time.Minute))

The Server is used to run the task processing workers with a given
handler.

@@ -52,10 +56,13 @@ Example of a type that implements the Handler interface.

func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error {
switch task.Type {
case "send_email":
id, err := task.Payload.GetInt("user_id")
// send email
//...
case "example":
var data ExamplePayload
if err := json.Unmarshal(task.Payload(), &data); err != nil {
return err
}
// perform task with the data

default:
return fmt.Errorf("unexpected task type %q", task.Type)
}
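The rewritten doc comment references an ExamplePayload type that is not defined in this hunk. A minimal definition consistent with the snippet would look like the following (hypothetical; any JSON-serializable struct works, since payloads are now plain bytes):

	// Hypothetical type backing the doc.go snippet above.
	type ExamplePayload struct {
		UserID int `json:"user_id"`
	}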
@@ -30,7 +30,7 @@ func ExampleServer_Run() {
}
}

func ExampleServer_Stop() {
func ExampleServer_Shutdown() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},

@@ -47,10 +47,10 @@ func ExampleServer_Stop() {
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal

srv.Stop()
srv.Shutdown()
}

func ExampleServer_Quiet() {
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},

@@ -70,13 +70,13 @@ func ExampleServer_Quiet() {
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks
srv.Stop() // stop processing new tasks
continue
}
break
break // received SIGTERM or SIGINT signal
}

srv.Stop()
srv.Shutdown()
}

func ExampleScheduler() {
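These example changes encode the lifecycle rename: what used to be Quiet/Stop is now Stop/Shutdown. A condensed sketch of the new call sequence, assuming a server wired up as in the examples above:

	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: ":6379"},
		asynq.Config{Concurrency: 20},
	)
	h := asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error { return nil })
	if err := srv.Start(h); err != nil {
		log.Fatal(err)
	}
	srv.Stop()     // formerly Quiet(): stop pulling new tasks
	srv.Shutdown() // formerly Stop(): full shutdown, waits for in-flight tasks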
@@ -45,7 +45,7 @@ func newForwarder(params forwarderParams) *forwarder {
}
}

func (f *forwarder) terminate() {
func (f *forwarder) shutdown() {
f.logger.Debug("Forwarder shutting down...")
// Signal the forwarder goroutine to stop polling.
f.done <- struct{}{}

@@ -69,7 +69,7 @@ func (f *forwarder) start(wg *sync.WaitGroup) {
}

func (f *forwarder) exec() {
if err := f.broker.CheckAndEnqueue(f.queues...); err != nil {
if err := f.broker.ForwardIfReady(f.queues...); err != nil {
f.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
}
}
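The start method that drives exec is only referenced, not shown, in this hunk. Its general shape, as a rough sketch inferred from the done channel and logger above (the interval field name is assumed), is a poll loop that forwards ready scheduled tasks until signaled:

	func (f *forwarder) start(wg *sync.WaitGroup) {
		wg.Add(1)
		go func() {
			defer wg.Done()
			timer := time.NewTimer(f.avgInterval) // interval field assumed
			for {
				select {
				case <-f.done:
					f.logger.Debug("Forwarder done")
					return
				case <-timer.C:
					f.exec()
					timer.Reset(f.avgInterval)
				}
			}
		}()
	}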
@@ -111,7 +111,7 @@ func TestForwarder(t *testing.T) {
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.terminate()
s.shutdown()

for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledMessages(t, r, qname)

@@ -130,7 +130,7 @@ func TestForwarder(t *testing.T) {
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.QueueKey(qname), diff)
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.PendingKey(qname), diff)
}
}
}
go.mod (14 changes)
@@ -3,13 +3,17 @@ module github.com/hibiken/asynq
go 1.13

require (
github.com/go-redis/redis/v7 v7.4.0
github.com/google/go-cmp v0.4.0
github.com/google/uuid v1.1.1
github.com/go-redis/redis/v8 v8.11.2
github.com/golang/protobuf v1.4.2
github.com/google/go-cmp v0.5.6
github.com/google/uuid v1.2.0
github.com/kr/pretty v0.1.0 // indirect
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cast v1.3.1
github.com/stretchr/testify v1.6.1 // indirect
go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/sys v0.0.0-20210112080510-489259a85091
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect
google.golang.org/protobuf v1.25.0
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 // indirect
)
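The go-redis v7 to v8 bump is what forces the context.Context argument seen in the updated test calls earlier; in v8, every Redis command takes a ctx first. A small before/after sketch (the key string is illustrative):

	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	// v7: rdb.TTL(key).Val()
	// v8: rdb.TTL(ctx, key).Val()
	ttl, err := rdb.TTL(ctx, "asynq:{default}:unique:example").Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("remaining TTL:", ttl)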
go.sum (144 changes)
@@ -1,70 +1,152 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/go-redis/redis/v8 v8.11.2 h1:WqlSpAwz8mxDSMCvbyz1Mkiqe0LE5OY4j3lgkvu1Ts0=
github.com/go-redis/redis/v8 v8.11.2/go.mod h1:DLomh7y2e3ggQXQLd1YgmvIfecPJoFl7WU5SOQ/r06M=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.15.0 h1:1V1NfVQR87RtWAgp1lv9JZJ5Jap+XFGKPi00andXGi4=
github.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.5 h1:7n6FEkpFmfCoo2t+YYqXH0evK+a9ICQz0xcAy9dYcaQ=
github.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e h1:o3PsSEY8E4eXWkXrIP9YJALUkVZqzHJT5DOasTyn8Vs=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091 h1:DMyOG0U+gKfu8JZzg2UQe9MeaC1X+xQWlAKcRnjxjCw=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -45,7 +45,7 @@ func newHealthChecker(params healthcheckerParams) *healthchecker {
}
}

func (hc *healthchecker) terminate() {
func (hc *healthchecker) shutdown() {
if hc.healthcheckFunc == nil {
return
}
@@ -51,7 +51,7 @@ func TestHealthChecker(t *testing.T) {
}
mu.Unlock()

hc.terminate()
hc.shutdown()
}

func TestHealthCheckerWhenRedisDown(t *testing.T) {

@@ -99,5 +99,5 @@ func TestHealthCheckerWhenRedisDown(t *testing.T) {
}
mu.Unlock()

hc.terminate()
hc.shutdown()
}
heartbeat.go (12 changes)
@@ -40,8 +40,8 @@ type heartbeater struct {
started time.Time
workers map[string]*workerInfo

// status is shared with other goroutine but is concurrency safe.
status *base.ServerStatus
// state is shared with other goroutine but is concurrency safe.
state *base.ServerState

// channels to receive updates on active workers.
starting <-chan *workerInfo

@@ -55,7 +55,7 @@ type heartbeaterParams struct {
concurrency int
queues map[string]int
strictPriority bool
status *base.ServerStatus
state *base.ServerState
starting <-chan *workerInfo
finished <-chan *base.TaskMessage
}

@@ -79,14 +79,14 @@ func newHeartbeater(params heartbeaterParams) *heartbeater {
queues: params.queues,
strictPriority: params.strictPriority,

status: params.status,
state: params.state,
workers: make(map[string]*workerInfo),
starting: params.starting,
finished: params.finished,
}
}

func (h *heartbeater) terminate() {
func (h *heartbeater) shutdown() {
h.logger.Debug("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop.
h.done <- struct{}{}

@@ -142,7 +142,7 @@ func (h *heartbeater) beat() {
Concurrency: h.concurrency,
Queues: h.queues,
StrictPriority: h.strictPriority,
Status: h.status.String(),
Status: h.state.String(),
Started: h.started,
ActiveWorkerCount: len(h.workers),
}
@@ -38,7 +38,7 @@ func TestHeartbeater(t *testing.T) {
for _, tc := range tests {
h.FlushDB(t, r)

status := base.NewServerStatus(base.StatusIdle)
state := base.NewServerState()
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: rdbClient,

@@ -46,7 +46,7 @@ func TestHeartbeater(t *testing.T) {
concurrency: tc.concurrency,
queues: tc.queues,
strictPriority: false,
status: status,
state: state,
starting: make(chan *workerInfo),
finished: make(chan *base.TaskMessage),
})

@@ -55,7 +55,7 @@ func TestHeartbeater(t *testing.T) {
hb.host = tc.host
hb.pid = tc.pid

status.Set(base.StatusRunning)
state.Set(base.StateActive)
var wg sync.WaitGroup
hb.start(&wg)

@@ -65,7 +65,7 @@ func TestHeartbeater(t *testing.T) {
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: time.Now(),
Status: "running",
Status: "active",
}

// allow for heartbeater to write to redis

@@ -74,49 +74,49 @@ func TestHeartbeater(t *testing.T) {
ss, err := rdbClient.ListServers()
if err != nil {
t.Errorf("could not read server info from redis: %v", err)
hb.terminate()
hb.shutdown()
continue
}

if len(ss) != 1 {
t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate()
hb.shutdown()
continue
}

if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
hb.shutdown()
continue
}

// status change
status.Set(base.StatusStopped)
state.Set(base.StateClosed)

// allow for heartbeater to write to redis
time.Sleep(tc.interval * 2)

want.Status = "stopped"
want.Status = "closed"
ss, err = rdbClient.ListServers()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
hb.shutdown()
continue
}

if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
hb.terminate()
hb.shutdown()
continue
}

if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
hb.shutdown()
continue
}

hb.terminate()
hb.shutdown()
}
}

@@ -131,6 +131,8 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
state := base.NewServerState()
state.Set(base.StateActive)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: testBroker,

@@ -138,7 +140,7 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
concurrency: 10,
queues: map[string]int{"default": 1},
strictPriority: false,
status: base.NewServerStatus(base.StatusRunning),
state: state,
starting: make(chan *workerInfo),
finished: make(chan *base.TaskMessage),
})

@@ -150,5 +152,5 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)

hb.terminate()
hb.shutdown()
}
@@ -2,7 +2,7 @@
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package inspeq
package asynq

import (
"fmt"

@@ -10,10 +10,10 @@ import (
"strings"
"time"

"github.com/go-redis/redis/v7"
"github.com/go-redis/redis/v8"
"github.com/google/uuid"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/rdb"
)

@@ -24,7 +24,7 @@ type Inspector struct {
}

// New returns a new instance of Inspector.
func New(r asynq.RedisConnOpt) *Inspector {
func NewInspector(r RedisConnOpt) *Inspector {
c, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("inspeq: unsupported RedisConnOpt type %T", r))

@@ -44,15 +44,19 @@ func (i *Inspector) Queues() ([]string, error) {
return i.rdb.AllQueues()
}

// QueueStats represents a state of queues at a certain time.
type QueueStats struct {
// QueueInfo represents a state of queues at a certain time.
type QueueInfo struct {
// Name of the queue.
Queue string

// Total number of bytes that the queue and its tasks require to be stored in redis.
// It is an approximate memory usage value in bytes since the value is computed by sampling.
MemoryUsage int64

// Size is the total number of tasks in the queue.
// The value is the sum of Pending, Active, Scheduled, Retry, and Archived.
Size int

// Number of pending tasks.
Pending int
// Number of active tasks.

@@ -63,20 +67,23 @@ type QueueStats struct {
Retry int
// Number of archived tasks.
Archived int

// Total number of tasks being processed during the given date.
// The number includes both succeeded and failed tasks.
Processed int
// Total number of tasks failed to be processed during the given date.
Failed int

// Paused indicates whether the queue is paused.
// If true, tasks in the queue will not be processed.
Paused bool
// Time when this stats was taken.

// Time when this queue info snapshot was taken.
Timestamp time.Time
}

// CurrentStats returns a current stats of the given queue.
func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
// GetQueueInfo returns current information of the given queue.
func (i *Inspector) GetQueueInfo(qname string) (*QueueInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
}

@@ -84,7 +91,7 @@ func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
if err != nil {
return nil, err
}
return &QueueStats{
return &QueueInfo{
Queue: stats.Queue,
MemoryUsage: stats.MemoryUsage,
Size: stats.Size,

@@ -134,23 +141,16 @@ func (i *Inspector) History(qname string, n int) ([]*DailyStats, error) {
return res, nil
}

// ErrQueueNotFound indicates that the specified queue does not exist.
type ErrQueueNotFound struct {
qname string
}
var (
// ErrQueueNotFound indicates that the specified queue does not exist.
ErrQueueNotFound = errors.New("queue not found")

func (e *ErrQueueNotFound) Error() string {
return fmt.Sprintf("queue %q does not exist", e.qname)
}
// ErrQueueNotEmpty indicates that the specified queue is not empty.
ErrQueueNotEmpty = errors.New("queue is not empty")

// ErrQueueNotEmpty indicates that the specified queue is not empty.
type ErrQueueNotEmpty struct {
qname string
}

func (e *ErrQueueNotEmpty) Error() string {
return fmt.Sprintf("queue %q is not empty", e.qname)
}
// ErrTaskNotFound indicates that the specified task cannot be found in the queue.
ErrTaskNotFound = errors.New("task not found")
)
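Because the new sentinel errors are returned wrapped with %w (as the hunks below show), callers can branch on them with errors.Is. A sketch using the renamed constructor above (Redis address illustrative):

	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	info, err := inspector.GetQueueInfo("default")
	switch {
	case errors.Is(err, asynq.ErrQueueNotFound):
		log.Println("no such queue")
	case err != nil:
		log.Fatal(err)
	default:
		log.Printf("queue %q holds %d tasks (%d pending)", info.Queue, info.Size, info.Pending)
	}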
// DeleteQueue removes the specified queue.
//

@@ -164,134 +164,34 @@ func (e *ErrQueueNotEmpty) Error() string {
// returns ErrQueueNotEmpty.
func (i *Inspector) DeleteQueue(qname string, force bool) error {
err := i.rdb.RemoveQueue(qname, force)
if _, ok := err.(*rdb.ErrQueueNotFound); ok {
return &ErrQueueNotFound{qname}
if errors.IsQueueNotFound(err) {
return fmt.Errorf("%w: queue=%q", ErrQueueNotFound, qname)
}
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
return &ErrQueueNotEmpty{qname}
if errors.IsQueueNotEmpty(err) {
return fmt.Errorf("%w: queue=%q", ErrQueueNotEmpty, qname)
}
return err
}

// PendingTask is a task in a queue and is ready to be processed.
type PendingTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
}

// ActiveTask is a task that's currently being processed.
type ActiveTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
}

// ScheduledTask is a task scheduled to be processed in the future.
type ScheduledTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastError string
NextProcessAt time.Time

score int64
}

// RetryTask is a task scheduled to be retried in the future.
type RetryTask struct {
*asynq.Task
ID string
Queue string
NextProcessAt time.Time
MaxRetry int
Retried int
LastError string
// TODO: LastFailedAt time.Time

score int64
}

// ArchivedTask is a task archived for debugging and inspection purposes, and
// it won't be retried automatically.
// A task can be archived when the task exhausts its retry counts or manually
// archived by a user via the CLI or Inspector.
type ArchivedTask struct {
*asynq.Task
ID string
Queue string
MaxRetry int
Retried int
LastFailedAt time.Time
LastError string

score int64
}

// Format string used for task key.
// Format is <prefix>:<uuid>:<score>.
const taskKeyFormat = "%s:%v:%v"

// Prefix used for task key.
const (
keyPrefixPending = "p"
keyPrefixScheduled = "s"
keyPrefixRetry = "r"
keyPrefixArchived = "a"

allKeyPrefixes = keyPrefixPending + keyPrefixScheduled + keyPrefixRetry + keyPrefixArchived
)

// Key returns a key used to delete, and archive the pending task.
func (t *PendingTask) Key() string {
// Note: Pending tasks are stored in redis LIST, therefore no score.
// Use zero for the score to use the same key format.
return fmt.Sprintf(taskKeyFormat, keyPrefixPending, t.ID, 0)
}

// Key returns a key used to delete, run, and archive the scheduled task.
func (t *ScheduledTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixScheduled, t.ID, t.score)
}

// Key returns a key used to delete, run, and archive the retry task.
func (t *RetryTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixRetry, t.ID, t.score)
}

// Key returns a key used to delete and run the archived task.
func (t *ArchivedTask) Key() string {
return fmt.Sprintf(taskKeyFormat, keyPrefixArchived, t.ID, t.score)
}

// parseTaskKey parses a key string and returns each part of key with proper
// type if valid, otherwise it reports an error.
func parseTaskKey(key string) (prefix string, id uuid.UUID, score int64, err error) {
parts := strings.Split(key, ":")
if len(parts) != 3 {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
id, err = uuid.Parse(parts[1])
// GetTaskInfo retrieves task information given a task id and queue name.
//
// Returns ErrQueueNotFound if a queue with the given name doesn't exist.
// Returns ErrTaskNotFound if a task with the given id doesn't exist in the queue.
func (i *Inspector) GetTaskInfo(qname, id string) (*TaskInfo, error) {
taskid, err := uuid.Parse(id)
if err != nil {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
return nil, fmt.Errorf("asynq: %s is not a valid task id", id)
}
score, err = strconv.ParseInt(parts[2], 10, 64)
if err != nil {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
info, err := i.rdb.GetTaskInfo(qname, taskid)
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case errors.IsTaskNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrTaskNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
prefix = parts[0]
if len(prefix) != 1 || !strings.Contains(allKeyPrefixes, prefix) {
return "", uuid.Nil, 0, fmt.Errorf("invalid id")
}
return prefix, id, score, nil
return newTaskInfo(info.Message, info.State, info.NextProcessAt), nil
}
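A usage sketch for the new GetTaskInfo (enqueuedID is hypothetical, e.g. the TaskInfo.ID returned by Client.Enqueue, and inspector is the one created in the earlier sketch):

	info, err := inspector.GetTaskInfo("default", enqueuedID)
	switch {
	case errors.Is(err, asynq.ErrTaskNotFound):
		log.Println("task is no longer in the queue")
	case err != nil:
		log.Fatal(err)
	default:
		log.Printf("task %s: state=%v retried=%d/%d", info.ID, info.State, info.Retried, info.MaxRetry)
	}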
// ListOption specifies behavior of list operation.

@@ -358,26 +258,23 @@ func Page(n int) ListOption {
// ListPendingTasks retrieves pending tasks from the specified queue.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*PendingTask, error) {
func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListPending(qname, pgn)
if err != nil {
return nil, err
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
var tasks []*PendingTask
now := time.Now()
var tasks []*TaskInfo
for _, m := range msgs {
tasks = append(tasks, &PendingTask{
Task: asynq.NewTask(m.Type, m.Payload),
ID: m.ID.String(),
Queue: m.Queue,
MaxRetry: m.Retry,
Retried: m.Retried,
LastError: m.ErrorMsg,
})
tasks = append(tasks, newTaskInfo(m, base.TaskStatePending, now))
}
return tasks, err
}

@@ -385,124 +282,106 @@ func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*Pendi
// ListActiveTasks retrieves active tasks from the specified queue.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*ActiveTask, error) {
func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListActive(qname, pgn)
if err != nil {
return nil, err
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
var tasks []*ActiveTask
var tasks []*TaskInfo
for _, m := range msgs {
tasks = append(tasks, &ActiveTask{
Task: asynq.NewTask(m.Type, m.Payload),
ID: m.ID.String(),
Queue: m.Queue,
MaxRetry: m.Retry,
Retried: m.Retried,
LastError: m.ErrorMsg,
})
tasks = append(tasks, newTaskInfo(m, base.TaskStateActive, time.Time{}))
}
return tasks, err
}

// ListScheduledTasks retrieves scheduled tasks from the specified queue.
// Tasks are sorted by NextProcessAt field in ascending order.
// Tasks are sorted by NextProcessAt in ascending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*ScheduledTask, error) {
func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListScheduled(qname, pgn)
if err != nil {
return nil, err
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
var tasks []*ScheduledTask
var tasks []*TaskInfo
for _, z := range zs {
processAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &ScheduledTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
LastError: z.Message.ErrorMsg,
NextProcessAt: processAt,
score: z.Score,
})
tasks = append(tasks, newTaskInfo(
z.Message,
base.TaskStateScheduled,
time.Unix(z.Score, 0),
))
}
return tasks, nil
}

// ListRetryTasks retrieves retry tasks from the specified queue.
// Tasks are sorted by NextProcessAt field in ascending order.
// Tasks are sorted by NextProcessAt in ascending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*RetryTask, error) {
func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListRetry(qname, pgn)
if err != nil {
return nil, err
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
var tasks []*RetryTask
var tasks []*TaskInfo
for _, z := range zs {
processAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &RetryTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
NextProcessAt: processAt,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
// TODO: LastFailedAt: z.Message.LastFailedAt
LastError: z.Message.ErrorMsg,
score: z.Score,
})
tasks = append(tasks, newTaskInfo(
z.Message,
base.TaskStateRetry,
time.Unix(z.Score, 0),
))
}
return tasks, nil
}

// ListArchivedTasks retrieves archived tasks from the specified queue.
// Tasks are sorted by LastFailedAt field in descending order.
// Tasks are sorted by LastFailedAt in descending order.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*ArchivedTask, error) {
func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil {
return nil, err
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListArchived(qname, pgn)
if err != nil {
return nil, err
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
var tasks []*ArchivedTask
var tasks []*TaskInfo
for _, z := range zs {
failedAt := time.Unix(z.Score, 0)
t := asynq.NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &ArchivedTask{
Task: t,
ID: z.Message.ID.String(),
Queue: z.Message.Queue,
MaxRetry: z.Message.Retry,
Retried: z.Message.Retried,
LastFailedAt: failedAt,
LastError: z.Message.ErrorMsg,
score: z.Score,
})
tasks = append(tasks, newTaskInfo(
z.Message,
base.TaskStateArchived,
time.Time{},
))
}
return tasks, nil
}
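All of the List* methods now share the *TaskInfo result type, so pagination and consumption look the same across task states. A sketch using the ListOption values defined above (queue name and page values illustrative):

	tasks, err := inspector.ListPendingTasks("default", asynq.PageSize(50), asynq.Page(2))
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tasks {
		log.Printf("%s %s next=%v", t.ID, t.Type, t.NextProcessAt)
	}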
@@ -547,27 +426,32 @@ func (i *Inspector) DeleteAllArchivedTasks(qname string) (int, error) {
return int(n), err
}

// DeleteTaskByKey deletes a task with the given key from the given queue.
func (i *Inspector) DeleteTaskByKey(qname, key string) error {
// DeleteTask deletes a task with the given id from the given queue.
// The task needs to be in pending, scheduled, retry, or archived state,
// otherwise DeleteTask will return an error.
//
// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
// If the task is in active state, it returns a non-nil error.
func (i *Inspector) DeleteTask(qname, id string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
return fmt.Errorf("asynq: %v", err)
}
prefix, id, score, err := parseTaskKey(key)
taskid, err := uuid.Parse(id)
if err != nil {
return err
return fmt.Errorf("asynq: %s is not a valid task id", id)
}
switch prefix {
case keyPrefixPending:
return i.rdb.DeletePendingTask(qname, id)
case keyPrefixScheduled:
return i.rdb.DeleteScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.DeleteRetryTask(qname, id, score)
case keyPrefixArchived:
return i.rdb.DeleteArchivedTask(qname, id, score)
default:
return fmt.Errorf("invalid key")
err = i.rdb.DeleteTask(qname, taskid)
switch {
case errors.IsQueueNotFound(err):
return fmt.Errorf("asynq: %w", ErrQueueNotFound)
case errors.IsTaskNotFound(err):
return fmt.Errorf("asynq: %w", ErrTaskNotFound)
case err != nil:
return fmt.Errorf("asynq: %v", err)
}
return nil

}

// RunAllScheduledTasks transitions all scheduled tasks to pending state from the given queue,

@@ -600,27 +484,31 @@ func (i *Inspector) RunAllArchivedTasks(qname string) (int, error) {
return int(n), err
}

// RunTaskByKey transition a task to pending state given task key and queue name.
func (i *Inspector) RunTaskByKey(qname, key string) error {
// RunTask updates the task to pending state given a queue name and task id.
// The task needs to be in scheduled, retry, or archived state, otherwise RunTask
// will return an error.
//
// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
// If the task is in pending or active state, it returns a non-nil error.
func (i *Inspector) RunTask(qname, id string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
return fmt.Errorf("asynq: %v", err)
}
prefix, id, score, err := parseTaskKey(key)
taskid, err := uuid.Parse(id)
if err != nil {
return err
return fmt.Errorf("asynq: %s is not a valid task id", id)
}
switch prefix {
case keyPrefixScheduled:
return i.rdb.RunScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.RunRetryTask(qname, id, score)
case keyPrefixArchived:
return i.rdb.RunArchivedTask(qname, id, score)
case keyPrefixPending:
return fmt.Errorf("task is already pending for run")
default:
return fmt.Errorf("invalid key")
err = i.rdb.RunTask(qname, taskid)
switch {
case errors.IsQueueNotFound(err):
return fmt.Errorf("asynq: %w", ErrQueueNotFound)
case errors.IsTaskNotFound(err):
return fmt.Errorf("asynq: %w", ErrTaskNotFound)
case err != nil:
return fmt.Errorf("asynq: %v", err)
}
return nil
}

// ArchiveAllPendingTasks archives all pending tasks from the given queue,

@@ -653,34 +541,38 @@ func (i *Inspector) ArchiveAllRetryTasks(qname string) (int, error) {
return int(n), err
}

// ArchiveTaskByKey archives a task with the given key in the given queue.
func (i *Inspector) ArchiveTaskByKey(qname, key string) error {
// ArchiveTask archives a task with the given id in the given queue.
// The task needs to be in pending, scheduled, or retry state, otherwise ArchiveTask
// will return an error.
//
// If a queue with the given name doesn't exist, it returns ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns ErrTaskNotFound.
// If the task is already archived, it returns a non-nil error.
func (i *Inspector) ArchiveTask(qname, id string) error {
if err := base.ValidateQueueName(qname); err != nil {
return err
return fmt.Errorf("asynq: %v", err)
}
prefix, id, score, err := parseTaskKey(key)
taskid, err := uuid.Parse(id)
if err != nil {
return err
return fmt.Errorf("asynq: %s is not a valid task id", id)
}
switch prefix {
case keyPrefixPending:
return i.rdb.ArchivePendingTask(qname, id)
case keyPrefixScheduled:
return i.rdb.ArchiveScheduledTask(qname, id, score)
case keyPrefixRetry:
return i.rdb.ArchiveRetryTask(qname, id, score)
case keyPrefixArchived:
return fmt.Errorf("task is already archived")
default:
return fmt.Errorf("invalid key")
err = i.rdb.ArchiveTask(qname, taskid)
switch {
case errors.IsQueueNotFound(err):
return fmt.Errorf("asynq: %w", ErrQueueNotFound)
case errors.IsTaskNotFound(err):
return fmt.Errorf("asynq: %w", ErrTaskNotFound)
case err != nil:
return fmt.Errorf("asynq: %v", err)
}
return nil
}
// CancelActiveTask sends a signal to cancel processing of the task with
|
||||
// the given id. CancelActiveTask is best-effort, which means that it does not
|
||||
// CancelProcessing sends a signal to cancel processing of the task
|
||||
// given a task id. CancelProcessing is best-effort, which means that it does not
|
||||
// guarantee that the task with the given id will be canceled. The return
|
||||
// value only indicates whether the cancelation signal has been sent.
|
||||
func (i *Inspector) CancelActiveTask(id string) error {
|
||||
func (i *Inspector) CancelProcessing(id string) error {
|
||||
return i.rdb.PublishCancelation(id)
|
||||
}
|
||||
|
@@ -732,13 +624,12 @@ func (i *Inspector) Servers() ([]*ServerInfo, error) {
continue
}
wrkInfo := &WorkerInfo{
Started: w.Started,
Deadline: w.Deadline,
Task: &ActiveTask{
Task: asynq.NewTask(w.Type, w.Payload),
ID: w.ID,
Queue: w.Queue,
},
TaskID: w.ID,
TaskType: w.Type,
TaskPayload: w.Payload,
Queue: w.Queue,
Started: w.Started,
Deadline: w.Deadline,
}
srvInfo.ActiveWorkers = append(srvInfo.ActiveWorkers, wrkInfo)
}
@@ -775,8 +666,14 @@ type ServerInfo struct {

// WorkerInfo describes a running worker processing a task.
type WorkerInfo struct {
// The task the worker is processing.
Task *ActiveTask
// ID of the task the worker is processing.
TaskID string
// Type of the task the worker is processing.
TaskType string
// Payload of the task the worker is processing.
TaskPayload []byte
// Queue from which the worker got its task.
Queue string
// Time the worker started processing the task.
Started time.Time
// Time the worker needs to finish processing the task by.
@@ -798,14 +695,16 @@ type ClusterNode struct {
}

// ClusterNodes returns a list of nodes the given queue belongs to.
func (i *Inspector) ClusterNodes(qname string) ([]ClusterNode, error) {
//
// Only relevant if task queues are stored in redis cluster.
func (i *Inspector) ClusterNodes(qname string) ([]*ClusterNode, error) {
nodes, err := i.rdb.ClusterNodes(qname)
if err != nil {
return nil, err
}
var res []ClusterNode
var res []*ClusterNode
for _, node := range nodes {
res = append(res, ClusterNode{ID: node.ID, Addr: node.Addr})
res = append(res, &ClusterNode{ID: node.ID, Addr: node.Addr})
}
return res, nil
}
@@ -819,10 +718,10 @@ type SchedulerEntry struct {
Spec string

// Periodic Task registered for this entry.
Task *asynq.Task
Task *Task

// Opts is the options for the periodic task.
Opts []asynq.Option
Opts []Option

// Next shows the next time the task will be enqueued.
Next time.Time
@@ -841,8 +740,8 @@ func (i *Inspector) SchedulerEntries() ([]*SchedulerEntry, error) {
return nil, err
}
for _, e := range res {
task := asynq.NewTask(e.Type, e.Payload)
var opts []asynq.Option
task := NewTask(e.Type, e.Payload)
var opts []Option
for _, s := range e.Opts {
if o, err := parseOption(s); err == nil {
// ignore bad data
@@ -863,7 +762,7 @@ func (i *Inspector) SchedulerEntries() ([]*SchedulerEntry, error) {

// parseOption interprets a string s as an Option and returns the Option if parsing is successful,
// otherwise returns non-nil error.
func parseOption(s string) (asynq.Option, error) {
func parseOption(s string) (Option, error) {
fn, arg := parseOptionFunc(s), parseOptionArg(s)
switch fn {
case "Queue":
@@ -871,43 +770,43 @@ func parseOption(s string) (asynq.Option, error) {
if err != nil {
return nil, err
}
return asynq.Queue(qname), nil
return Queue(qname), nil
case "MaxRetry":
n, err := strconv.Atoi(arg)
if err != nil {
return nil, err
}
return asynq.MaxRetry(n), nil
return MaxRetry(n), nil
case "Timeout":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.Timeout(d), nil
return Timeout(d), nil
case "Deadline":
t, err := time.Parse(time.UnixDate, arg)
if err != nil {
return nil, err
}
return asynq.Deadline(t), nil
return Deadline(t), nil
case "Unique":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.Unique(d), nil
return Unique(d), nil
case "ProcessAt":
t, err := time.Parse(time.UnixDate, arg)
if err != nil {
return nil, err
}
return asynq.ProcessAt(t), nil
return ProcessAt(t), nil
case "ProcessIn":
d, err := time.ParseDuration(arg)
if err != nil {
return nil, err
}
return asynq.ProcessIn(d), nil
return ProcessIn(d), nil
default:
return nil, fmt.Errorf("cannot parse option string %q", s)
}
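The option strings parsed here come from SchedulerEntry.Opts and look like Queue("email"), MaxRetry(25), or Timeout(30m). A standalone sketch of the split step, assuming the internal parseOptionFunc/parseOptionArg helpers behave roughly as follows (this is an approximation for illustration, not the actual asynq implementation):

package main

import (
	"fmt"
	"strings"
)

// splitOption approximates the internal parseOptionFunc/parseOptionArg pair:
// the text before "(" names the option, the text inside the parens is its argument.
func splitOption(s string) (fn, arg string) {
	i := strings.Index(s, "(")
	if i < 0 {
		return s, ""
	}
	return s[:i], strings.Trim(s[i+1:len(s)-1], `"'`)
}

func main() {
	for _, s := range []string{`Queue("email")`, "MaxRetry(25)", "Timeout(30m)"} {
		fn, arg := splitOption(s)
		fmt.Printf("fn=%q arg=%q\n", fn, arg)
	}
}
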
File diff suppressed because it is too large
@@ -1,22 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

/*
Package inspeq provides helper types and functions to inspect queues and tasks managed by Asynq.

Inspector is used to query and mutate the state of queues and tasks.

Example:

inspector := inspeq.New(asynq.RedisClientOpt{Addr: "localhost:6379"})

tasks, err := inspector.ListArchivedTasks("my-queue")

for _, t := range tasks {
if err := inspector.DeleteTaskByKey(t.Key()); err != nil {
// handle error
}
}
*/
package inspeq
@@ -6,12 +6,14 @@
package asynqtest

import (
"context"
"encoding/json"
"math"
"sort"
"testing"
"time"

"github.com/go-redis/redis/v7"
"github.com/go-redis/redis/v8"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
@@ -94,13 +96,13 @@ var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")

// NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
func NewTaskMessage(taskType string, payload []byte) *base.TaskMessage {
return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName)
}

// NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
func NewTaskMessageWithQueue(taskType string, payload []byte, qname string) *base.TaskMessage {
return &base.TaskMessage{
ID: uuid.New(),
Type: taskType,
@@ -112,17 +114,28 @@ func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qn
}
}

// JSON serializes the given key-value pairs into a stream of bytes in JSON.
func JSON(kv map[string]interface{}) []byte {
b, err := json.Marshal(kv)
if err != nil {
panic(err)
}
return b
}
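A quick usage sketch of the new JSON helper together with the []byte-payload constructors; the package alias and field values below are illustrative, and asynqtest is internal to the asynq module:

package example

import (
	h "github.com/hibiken/asynq/internal/asynqtest"
)

// buildMessage shows how tests can construct a []byte payload now that
// TaskMessage.Payload is a byte slice; the values are illustrative.
func buildMessage() {
	payload := h.JSON(map[string]interface{}{"user_id": 42})
	msg := h.NewTaskMessageWithQueue("email:send", payload, "critical")
	_ = msg
}
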

// TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments retry count and sets the error message.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string) *base.TaskMessage {
// It increments retry count and sets the error message and last_failed_at time.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.Retried = t.Retried + 1
t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t
}

// TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
func TaskMessageWithError(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t
}

@@ -130,7 +143,7 @@ func TaskMessageWithError(t base.TaskMessage, errMsg string) *base.TaskMessage {
// Calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
tb.Helper()
data, err := json.Marshal(msg)
data, err := base.EncodeMessage(msg)
if err != nil {
tb.Fatal(err)
}
@@ -141,34 +154,11 @@ func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
// Calling test will fail if unmarshaling errors out.
func MustUnmarshal(tb testing.TB, data string) *base.TaskMessage {
tb.Helper()
var msg base.TaskMessage
err := json.Unmarshal([]byte(data), &msg)
msg, err := base.DecodeMessage([]byte(data))
if err != nil {
tb.Fatal(err)
}
return &msg
}

// MustMarshalSlice marshals a slice of task messages and returns a slice of
// json strings. Calling test will fail if marshaling errors out.
func MustMarshalSlice(tb testing.TB, msgs []*base.TaskMessage) []string {
tb.Helper()
var data []string
for _, m := range msgs {
data = append(data, MustMarshal(tb, m))
}
return data
}

// MustUnmarshalSlice unmarshals a slice of strings into a slice of task message structs.
// Calling test will fail if marshaling errors out.
func MustUnmarshalSlice(tb testing.TB, data []string) []*base.TaskMessage {
tb.Helper()
var msgs []*base.TaskMessage
for _, s := range data {
msgs = append(msgs, MustUnmarshal(tb, s))
}
return msgs
return msg
}

// FlushDB deletes all the keys of the currently selected DB.
@@ -176,12 +166,12 @@ func FlushDB(tb testing.TB, r redis.UniversalClient) {
tb.Helper()
switch r := r.(type) {
case *redis.Client:
if err := r.FlushDB().Err(); err != nil {
if err := r.FlushDB(context.Background()).Err(); err != nil {
tb.Fatal(err)
}
case *redis.ClusterClient:
err := r.ForEachMaster(func(c *redis.Client) error {
if err := c.FlushAll().Err(); err != nil {
err := r.ForEachMaster(context.Background(), func(ctx context.Context, c *redis.Client) error {
if err := c.FlushAll(ctx).Err(); err != nil {
return err
}
return nil
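The hunk above tracks the go-redis v7-to-v8 migration: in v8 every command takes a context.Context as its first argument. A minimal sketch of the same change in caller code (the address is a placeholder):

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	// In v7 this was rdb.FlushDB().Err(); v8 threads a context through every call.
	if err := rdb.FlushDB(ctx).Err(); err != nil {
		fmt.Println("flush failed:", err)
	}
}
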
@@ -195,49 +185,50 @@ func FlushDB(tb testing.TB, r redis.UniversalClient) {
// SeedPendingQueue initializes the specified queue with the given messages.
func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.QueueKey(qname), msgs)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisList(tb, r, base.PendingKey(qname), msgs, base.TaskStatePending)
}

// SeedActiveQueue initializes the active queue with the given messages.
func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs, base.TaskStateActive)
}

// SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries, base.TaskStateScheduled)
}

// SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries, base.TaskStateRetry)
}

// SeedArchivedQueue initializes the archived queue with the given messages.
func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ArchivedKey(qname), entries)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.ArchivedKey(qname), entries, base.TaskStateArchived)
}

// SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries)
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries, base.TaskStateActive)
}

// SeedAllPendingQueues initializes all of the specified queues with the given messages.
//
// pending maps a queue name to a list of messages.
func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range pending {
SeedPendingQueue(tb, r, msgs, q)
}
@@ -245,6 +236,7 @@ func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[st

// SeedAllActiveQueues initializes all of the specified active queues with the given messages.
func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range active {
SeedActiveQueue(tb, r, msgs, q)
}
@@ -252,6 +244,7 @@ func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[stri

// SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries.
func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) {
tb.Helper()
for q, entries := range scheduled {
SeedScheduledQueue(tb, r, entries, q)
}
@@ -259,6 +252,7 @@ func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled ma

// SeedAllRetryQueues initializes all of the specified retry queues with the given entries.
func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) {
tb.Helper()
for q, entries := range retry {
SeedRetryQueue(tb, r, entries, q)
}
@@ -266,6 +260,7 @@ func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string

// SeedAllArchivedQueues initializes all of the specified archived queues with the given entries.
func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[string][]base.Z) {
tb.Helper()
for q, entries := range archived {
SeedArchivedQueue(tb, r, entries, q)
}
@@ -273,101 +268,181 @@ func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[

// SeedAllDeadlines initializes all of the deadlines with the given entries.
func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) {
tb.Helper()
for q, entries := range deadlines {
SeedDeadlines(tb, r, entries, q)
}
}

func seedRedisList(tb testing.TB, c redis.UniversalClient, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs)
for _, s := range data {
if err := c.LPush(key, s).Err(); err != nil {
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string,
msgs []*base.TaskMessage, state base.TaskState) {
tb.Helper()
for _, msg := range msgs {
encoded := MustMarshal(tb, msg)
if err := c.LPush(context.Background(), key, msg.ID.String()).Err(); err != nil {
tb.Fatal(err)
}
key := base.TaskKey(msg.Queue, msg.ID.String())
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"timeout": msg.Timeout,
"deadline": msg.Deadline,
"unique_key": msg.UniqueKey,
}
if err := c.HSet(context.Background(), key, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(context.Background(), msg.UniqueKey, msg.ID.String(), 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}

func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string, items []base.Z) {
func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string,
items []base.Z, state base.TaskState) {
tb.Helper()
for _, item := range items {
z := &redis.Z{Member: MustMarshal(tb, item.Message), Score: float64(item.Score)}
if err := c.ZAdd(key, z).Err(); err != nil {
msg := item.Message
encoded := MustMarshal(tb, msg)
z := &redis.Z{Member: msg.ID.String(), Score: float64(item.Score)}
if err := c.ZAdd(context.Background(), key, z).Err(); err != nil {
tb.Fatal(err)
}
key := base.TaskKey(msg.Queue, msg.ID.String())
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"timeout": msg.Timeout,
"deadline": msg.Deadline,
"unique_key": msg.UniqueKey,
}
if err := c.HSet(context.Background(), key, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(context.Background(), msg.UniqueKey, msg.ID.String(), 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}

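These seed helpers reflect the new storage layout: the per-state list or sorted set now stores only task ids, while each task's fields live in a hash at asynq:{<qname>}:t:<id>. A hedged sketch of reading that layout directly, using key formats shown later in this diff (queue name and address are placeholders):

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	// The pending list holds ids, not serialized messages.
	ids, err := rdb.LRange(ctx, "asynq:{default}:pending", 0, -1).Result()
	if err != nil || len(ids) == 0 {
		return
	}
	// Each id resolves to a hash with msg, state, timeout, deadline, unique_key fields.
	fields, err := rdb.HGetAll(ctx, "asynq:{default}:t:"+ids[0]).Result()
	if err != nil {
		return
	}
	fmt.Println(fields["state"])
}
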
// GetPendingMessages returns all pending messages in the given queue.
// It also asserts the state field of the task.
func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getListMessages(tb, r, base.QueueKey(qname))
return getMessagesFromList(tb, r, qname, base.PendingKey, base.TaskStatePending)
}

// GetActiveMessages returns all active messages in the given queue.
// It also asserts the state field of the task.
func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getListMessages(tb, r, base.ActiveKey(qname))
return getMessagesFromList(tb, r, qname, base.ActiveKey, base.TaskStateActive)
}

// GetScheduledMessages returns all scheduled task messages in the given queue.
// It also asserts the state field of the task.
func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.ScheduledKey(qname))
return getMessagesFromZSet(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
}

// GetRetryMessages returns all retry messages in the given queue.
// It also asserts the state field of the task.
func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.RetryKey(qname))
return getMessagesFromZSet(tb, r, qname, base.RetryKey, base.TaskStateRetry)
}

// GetArchivedMessages returns all archived messages in the given queue.
// It also asserts the state field of the task.
func GetArchivedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.ArchivedKey(qname))
return getMessagesFromZSet(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
}

// GetScheduledEntries returns all scheduled messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getZSetEntries(tb, r, base.ScheduledKey(qname))
return getMessagesFromZSetWithScores(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
}

// GetRetryEntries returns all retry messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getZSetEntries(tb, r, base.RetryKey(qname))
return getMessagesFromZSetWithScores(tb, r, qname, base.RetryKey, base.TaskStateRetry)
}

// GetArchivedEntries returns all archived messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getZSetEntries(tb, r, base.ArchivedKey(qname))
return getMessagesFromZSetWithScores(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
}

// GetDeadlinesEntries returns all task messages and their scores in the deadlines set for the given queue.
// It also asserts the state field of the task.
func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getZSetEntries(tb, r, base.DeadlinesKey(qname))
return getMessagesFromZSetWithScores(tb, r, qname, base.DeadlinesKey, base.TaskStateActive)
}

func getListMessages(tb testing.TB, r redis.UniversalClient, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data)
}

func getZSetMessages(tb testing.TB, r redis.UniversalClient, zset string) []*base.TaskMessage {
data := r.ZRange(zset, 0, -1).Val()
return MustUnmarshalSlice(tb, data)
}

func getZSetEntries(tb testing.TB, r redis.UniversalClient, zset string) []base.Z {
data := r.ZRangeWithScores(zset, 0, -1).Val()
var entries []base.Z
for _, z := range data {
entries = append(entries, base.Z{
Message: MustUnmarshal(tb, z.Member.(string)),
Score: int64(z.Score),
})
// Retrieves all messages stored under `keyFn(qname)` key in redis list.
func getMessagesFromList(tb testing.TB, r redis.UniversalClient, qname string,
keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
tb.Helper()
ids := r.LRange(context.Background(), keyFn(qname), 0, -1).Val()
var msgs []*base.TaskMessage
for _, id := range ids {
taskKey := base.TaskKey(qname, id)
data := r.HGet(context.Background(), taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, data))
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
}
}
return entries
return msgs
}

// Retrieves all messages stored under `keyFn(qname)` key in redis zset (sorted-set).
func getMessagesFromZSet(tb testing.TB, r redis.UniversalClient, qname string,
keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
tb.Helper()
ids := r.ZRange(context.Background(), keyFn(qname), 0, -1).Val()
var msgs []*base.TaskMessage
for _, id := range ids {
taskKey := base.TaskKey(qname, id)
msg := r.HGet(context.Background(), taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, msg))
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
}
}
return msgs
}

// Retrieves all messages along with their scores stored under `keyFn(qname)` key in redis zset (sorted-set).
func getMessagesFromZSetWithScores(tb testing.TB, r redis.UniversalClient,
qname string, keyFn func(qname string) string, state base.TaskState) []base.Z {
tb.Helper()
zs := r.ZRangeWithScores(context.Background(), keyFn(qname), 0, -1).Val()
var res []base.Z
for _, z := range zs {
taskID := z.Member.(string)
taskKey := base.TaskKey(qname, taskID)
msg := r.HGet(context.Background(), taskKey, "msg").Val()
res = append(res, base.Z{Message: MustUnmarshal(tb, msg), Score: int64(z.Score)})
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", taskID, gotState, state)
}
}
return res
}

@@ -7,25 +7,29 @@ package base

import (
"context"
"encoding/json"
"crypto/md5"
"encoding/hex"
"fmt"
"sort"
"strings"
"sync"
"time"

"github.com/go-redis/redis/v7"
"github.com/go-redis/redis/v8"
"github.com/golang/protobuf/ptypes"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/errors"
pb "github.com/hibiken/asynq/internal/proto"
"google.golang.org/protobuf/proto"
)

// Version of asynq library and CLI.
const Version = "0.17.2"
const Version = "0.18.6"

// DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default"

// DefaultQueue is the redis key for the default queue.
var DefaultQueue = QueueKey(DefaultQueueName)
var DefaultQueue = PendingKey(DefaultQueueName)

// Global Redis keys.
const (
@@ -36,58 +40,116 @@ const (
CancelChannel = "asynq:cancel" // PubSub channel
)

// TaskState denotes the state of a task.
type TaskState int

const (
TaskStateActive TaskState = iota + 1
TaskStatePending
TaskStateScheduled
TaskStateRetry
TaskStateArchived
)

func (s TaskState) String() string {
switch s {
case TaskStateActive:
return "active"
case TaskStatePending:
return "pending"
case TaskStateScheduled:
return "scheduled"
case TaskStateRetry:
return "retry"
case TaskStateArchived:
return "archived"
}
panic(fmt.Sprintf("internal error: unknown task state %d", s))
}

func TaskStateFromString(s string) (TaskState, error) {
switch s {
case "active":
return TaskStateActive, nil
case "pending":
return TaskStatePending, nil
case "scheduled":
return TaskStateScheduled, nil
case "retry":
return TaskStateRetry, nil
case "archived":
return TaskStateArchived, nil
}
return 0, errors.E(errors.FailedPrecondition, fmt.Sprintf("%q is not a supported task state", s))
}
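String and TaskStateFromString are inverses of each other; a short round-trip sketch (note that base is an internal package, so this only compiles inside the asynq module and is shown purely for illustration):

package main

import (
	"fmt"

	"github.com/hibiken/asynq/internal/base"
)

func main() {
	// Parse a state name back into its TaskState constant.
	st, err := base.TaskStateFromString("scheduled")
	if err != nil {
		panic(err)
	}
	fmt.Println(st == base.TaskStateScheduled, st.String()) // true scheduled
}
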

// ValidateQueueName validates a given qname to be used as a queue name.
// Returns nil if valid, otherwise returns non-nil error.
func ValidateQueueName(qname string) error {
if len(qname) == 0 {
if len(strings.TrimSpace(qname)) == 0 {
return fmt.Errorf("queue name must contain one or more characters")
}
return nil
}

// QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string {
return fmt.Sprintf("asynq:{%s}", qname)
// QueueKeyPrefix returns a prefix for all keys in the given queue.
func QueueKeyPrefix(qname string) string {
return fmt.Sprintf("asynq:{%s}:", qname)
}

// TaskKeyPrefix returns a prefix for task key.
func TaskKeyPrefix(qname string) string {
return fmt.Sprintf("%st:", QueueKeyPrefix(qname))
}

// TaskKey returns a redis key for the given task message.
func TaskKey(qname, id string) string {
return fmt.Sprintf("%s%s", TaskKeyPrefix(qname), id)
}

// PendingKey returns a redis key for the given queue name.
func PendingKey(qname string) string {
return fmt.Sprintf("%spending", QueueKeyPrefix(qname))
}

// ActiveKey returns a redis key for the active tasks.
func ActiveKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:active", qname)
return fmt.Sprintf("%sactive", QueueKeyPrefix(qname))
}

// ScheduledKey returns a redis key for the scheduled tasks.
func ScheduledKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:scheduled", qname)
return fmt.Sprintf("%sscheduled", QueueKeyPrefix(qname))
}

// RetryKey returns a redis key for the retry tasks.
func RetryKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:retry", qname)
return fmt.Sprintf("%sretry", QueueKeyPrefix(qname))
}

// ArchivedKey returns a redis key for the archived tasks.
func ArchivedKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:archived", qname)
return fmt.Sprintf("%sarchived", QueueKeyPrefix(qname))
}

// DeadlinesKey returns a redis key for the deadlines.
func DeadlinesKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:deadlines", qname)
return fmt.Sprintf("%sdeadlines", QueueKeyPrefix(qname))
}

// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:paused", qname)
return fmt.Sprintf("%spaused", QueueKeyPrefix(qname))
}

// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:processed:%s", qname, t.UTC().Format("2006-01-02"))
return fmt.Sprintf("%sprocessed:%s", QueueKeyPrefix(qname), t.UTC().Format("2006-01-02"))
}

// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:failed:%s", qname, t.UTC().Format("2006-01-02"))
return fmt.Sprintf("%sfailed:%s", QueueKeyPrefix(qname), t.UTC().Format("2006-01-02"))
}

// ServerInfoKey returns a redis key for process info.
@@ -111,32 +173,12 @@ func SchedulerHistoryKey(entryID string) string {
}

// UniqueKey returns a redis key with the given type, payload, and queue name.
func UniqueKey(qname, tasktype string, payload map[string]interface{}) string {
return fmt.Sprintf("asynq:{%s}:unique:%s:%s", qname, tasktype, serializePayload(payload))
}

func serializePayload(payload map[string]interface{}) string {
func UniqueKey(qname, tasktype string, payload []byte) string {
if payload == nil {
return "nil"
return fmt.Sprintf("%sunique:%s:", QueueKeyPrefix(qname), tasktype)
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
checksum := md5.Sum(payload)
return fmt.Sprintf("%sunique:%s:%s", QueueKeyPrefix(qname), tasktype, hex.EncodeToString(checksum[:]))
}
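To make the new key format concrete, a small sketch that reproduces the checksum suffix; the payload bytes, queue name, and task type are illustrative:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func main() {
	payload := []byte(`{"user_id":42}`)
	sum := md5.Sum(payload)
	// Mirrors UniqueKey: asynq:{<qname>}:unique:<tasktype>:<md5 hex of payload bytes>.
	fmt.Printf("asynq:{default}:unique:email:send:%s\n", hex.EncodeToString(sum[:]))
}
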

// TaskMessage is the internal representation of a task with additional metadata fields.
@@ -146,7 +188,7 @@ type TaskMessage struct {
Type string

// Payload holds data needed to process the task.
Payload map[string]interface{}
Payload []byte

// ID is a unique identifier for each task.
ID uuid.UUID
@@ -163,6 +205,12 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure.
ErrorMsg string

// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
//
// Use zero to indicate no last failure.
LastFailedAt int64

// Timeout specifies timeout in seconds.
// If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
@@ -184,24 +232,52 @@ type TaskMessage struct {
UniqueKey string
}

// EncodeMessage marshals the given task message in JSON and returns an encoded string.
func EncodeMessage(msg *TaskMessage) (string, error) {
b, err := json.Marshal(msg)
if err != nil {
return "", err
// EncodeMessage marshals the given task message and returns the encoded bytes.
func EncodeMessage(msg *TaskMessage) ([]byte, error) {
if msg == nil {
return nil, fmt.Errorf("cannot encode nil message")
}
return string(b), nil
return proto.Marshal(&pb.TaskMessage{
Type: msg.Type,
Payload: msg.Payload,
Id: msg.ID.String(),
Queue: msg.Queue,
Retry: int32(msg.Retry),
Retried: int32(msg.Retried),
ErrorMsg: msg.ErrorMsg,
LastFailedAt: msg.LastFailedAt,
Timeout: msg.Timeout,
Deadline: msg.Deadline,
UniqueKey: msg.UniqueKey,
})
}

// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
func DecodeMessage(s string) (*TaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg TaskMessage
if err := d.Decode(&msg); err != nil {
// DecodeMessage unmarshals the given bytes and returns a decoded task message.
func DecodeMessage(data []byte) (*TaskMessage, error) {
var pbmsg pb.TaskMessage
if err := proto.Unmarshal(data, &pbmsg); err != nil {
return nil, err
}
return &msg, nil
return &TaskMessage{
Type: pbmsg.GetType(),
Payload: pbmsg.GetPayload(),
ID: uuid.MustParse(pbmsg.GetId()),
Queue: pbmsg.GetQueue(),
Retry: int(pbmsg.GetRetry()),
Retried: int(pbmsg.GetRetried()),
ErrorMsg: pbmsg.GetErrorMsg(),
LastFailedAt: pbmsg.GetLastFailedAt(),
Timeout: pbmsg.GetTimeout(),
Deadline: pbmsg.GetDeadline(),
UniqueKey: pbmsg.GetUniqueKey(),
}, nil
}
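A minimal round-trip sketch of the new protobuf-based encoding; the field values are illustrative, and since base is an internal package this only compiles inside the asynq module:

package main

import (
	"fmt"

	"github.com/google/uuid"
	"github.com/hibiken/asynq/internal/base"
)

func main() {
	msg := &base.TaskMessage{
		Type:    "email:send",
		Payload: []byte(`{"to":"user@example.com"}`),
		ID:      uuid.New(),
		Queue:   "default",
		Retry:   25,
		Timeout: 1800, // seconds
	}
	data, err := base.EncodeMessage(msg) // protobuf bytes, no longer a JSON string
	if err != nil {
		panic(err)
	}
	decoded, err := base.DecodeMessage(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded.Type, decoded.Queue)
}
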

// TaskInfo describes a task message and its metadata.
type TaskInfo struct {
Message *TaskMessage
State TaskState
NextProcessAt time.Time
}

// Z represents sorted set member.
@@ -210,52 +286,55 @@ type Z struct {
Score int64
}

// ServerStatus represents status of a server.
// ServerStatus methods are concurrency safe.
type ServerStatus struct {
// ServerState represents state of a server.
// ServerState methods are concurrency safe.
type ServerState struct {
mu sync.Mutex
val ServerStatusValue
val ServerStateValue
}

// NewServerStatus returns a new status instance given an initial value.
func NewServerStatus(v ServerStatusValue) *ServerStatus {
return &ServerStatus{val: v}
// NewServerState returns a new state instance.
// Initial state is set to StateNew.
func NewServerState() *ServerState {
return &ServerState{val: StateNew}
}

type ServerStatusValue int
type ServerStateValue int

const (
// StatusIdle indicates the server is in idle state.
StatusIdle ServerStatusValue = iota
// StateNew represents a new server. Server begins in
// this state and then transitions to StateActive when
// Start or Run is called.
StateNew ServerStateValue = iota

// StatusRunning indicates the server is up and active.
StatusRunning
// StateActive indicates the server is up and active.
StateActive

// StatusQuiet indicates the server is up but not active.
StatusQuiet
// StateStopped indicates the server is up but no longer processing new tasks.
StateStopped

// StatusStopped indicates the server has been stopped.
StatusStopped
// StateClosed indicates the server has been shutdown.
StateClosed
)

var statuses = []string{
"idle",
"running",
"quiet",
var serverStates = []string{
"new",
"active",
"stopped",
"closed",
}

func (s *ServerStatus) String() string {
func (s *ServerState) String() string {
s.mu.Lock()
defer s.mu.Unlock()
if StatusIdle <= s.val && s.val <= StatusStopped {
return statuses[s.val]
if StateNew <= s.val && s.val <= StateClosed {
return serverStates[s.val]
}
return "unknown status"
}

// Get returns the status value.
func (s *ServerStatus) Get() ServerStatusValue {
func (s *ServerState) Get() ServerStateValue {
s.mu.Lock()
v := s.val
s.mu.Unlock()
@@ -263,7 +342,7 @@ func (s *ServerStatus) Get() ServerStatusValue {
}

// Set sets the status value.
func (s *ServerStatus) Set(v ServerStatusValue) {
func (s *ServerState) Set(v ServerStateValue) {
s.mu.Lock()
s.val = v
s.mu.Unlock()
@@ -282,6 +361,59 @@ type ServerInfo struct {
ActiveWorkerCount int
}

// EncodeServerInfo marshals the given ServerInfo and returns the encoded bytes.
func EncodeServerInfo(info *ServerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil server info")
}
queues := make(map[string]int32)
for q, p := range info.Queues {
queues[q] = int32(p)
}
started, err := ptypes.TimestampProto(info.Started)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.ServerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
Concurrency: int32(info.Concurrency),
Queues: queues,
StrictPriority: info.StrictPriority,
Status: info.Status,
StartTime: started,
ActiveWorkerCount: int32(info.ActiveWorkerCount),
})
}

// DecodeServerInfo decodes the given bytes into ServerInfo.
func DecodeServerInfo(b []byte) (*ServerInfo, error) {
var pbmsg pb.ServerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
queues := make(map[string]int)
for q, p := range pbmsg.GetQueues() {
queues[q] = int(p)
}
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime())
if err != nil {
return nil, err
}
return &ServerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
Concurrency: int(pbmsg.GetConcurrency()),
Queues: queues,
StrictPriority: pbmsg.GetStrictPriority(),
Status: pbmsg.GetStatus(),
Started: startTime,
ActiveWorkerCount: int(pbmsg.GetActiveWorkerCount()),
}, nil
}

// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
Host string
@@ -289,12 +421,65 @@ type WorkerInfo struct {
ServerID string
ID string
Type string
Payload []byte
Queue string
Payload map[string]interface{}
Started time.Time
Deadline time.Time
}

// EncodeWorkerInfo marshals the given WorkerInfo and returns the encoded bytes.
func EncodeWorkerInfo(info *WorkerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil worker info")
}
startTime, err := ptypes.TimestampProto(info.Started)
if err != nil {
return nil, err
}
deadline, err := ptypes.TimestampProto(info.Deadline)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.WorkerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
TaskId: info.ID,
TaskType: info.Type,
TaskPayload: info.Payload,
Queue: info.Queue,
StartTime: startTime,
Deadline: deadline,
})
}

// DecodeWorkerInfo decodes the given bytes into WorkerInfo.
func DecodeWorkerInfo(b []byte) (*WorkerInfo, error) {
var pbmsg pb.WorkerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime())
if err != nil {
return nil, err
}
deadline, err := ptypes.Timestamp(pbmsg.GetDeadline())
if err != nil {
return nil, err
}
return &WorkerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
ID: pbmsg.GetTaskId(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Queue: pbmsg.GetQueue(),
Started: startTime,
Deadline: deadline,
}, nil
}

// SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
// Identifier of this entry.
@@ -307,7 +492,7 @@ type SchedulerEntry struct {
Type string

// Payload is the payload of the periodic task.
Payload map[string]interface{}
Payload []byte

// Opts is the options for the periodic task.
Opts []string
@@ -320,6 +505,55 @@ type SchedulerEntry struct {
Prev time.Time
}

// EncodeSchedulerEntry marshals the given entry and returns the encoded bytes.
func EncodeSchedulerEntry(entry *SchedulerEntry) ([]byte, error) {
if entry == nil {
return nil, fmt.Errorf("cannot encode nil scheduler entry")
}
next, err := ptypes.TimestampProto(entry.Next)
if err != nil {
return nil, err
}
prev, err := ptypes.TimestampProto(entry.Prev)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.SchedulerEntry{
Id: entry.ID,
Spec: entry.Spec,
TaskType: entry.Type,
TaskPayload: entry.Payload,
EnqueueOptions: entry.Opts,
NextEnqueueTime: next,
PrevEnqueueTime: prev,
})
}

// DecodeSchedulerEntry unmarshals the given bytes and returns a decoded SchedulerEntry.
func DecodeSchedulerEntry(b []byte) (*SchedulerEntry, error) {
var pbmsg pb.SchedulerEntry
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
next, err := ptypes.Timestamp(pbmsg.GetNextEnqueueTime())
if err != nil {
return nil, err
}
prev, err := ptypes.Timestamp(pbmsg.GetPrevEnqueueTime())
if err != nil {
return nil, err
}
return &SchedulerEntry{
ID: pbmsg.GetId(),
Spec: pbmsg.GetSpec(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Opts: pbmsg.GetEnqueueOptions(),
Next: next,
Prev: prev,
}, nil
}

// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
// ID of the task that was enqueued.
@@ -329,6 +563,39 @@ type SchedulerEnqueueEvent struct {
EnqueuedAt time.Time
}

// EncodeSchedulerEnqueueEvent marshals the given event
// and returns the encoded bytes.
func EncodeSchedulerEnqueueEvent(event *SchedulerEnqueueEvent) ([]byte, error) {
if event == nil {
return nil, fmt.Errorf("cannot encode nil enqueue event")
}
enqueuedAt, err := ptypes.TimestampProto(event.EnqueuedAt)
if err != nil {
return nil, err
}
return proto.Marshal(&pb.SchedulerEnqueueEvent{
TaskId: event.TaskID,
EnqueueTime: enqueuedAt,
})
}

// DecodeSchedulerEnqueueEvent unmarshals the given bytes
// and returns a decoded SchedulerEnqueueEvent.
func DecodeSchedulerEnqueueEvent(b []byte) (*SchedulerEnqueueEvent, error) {
var pbmsg pb.SchedulerEnqueueEvent
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
enqueuedAt, err := ptypes.Timestamp(pbmsg.GetEnqueueTime())
if err != nil {
return nil, err
}
return &SchedulerEnqueueEvent{
TaskID: pbmsg.GetTaskId(),
EnqueuedAt: enqueuedAt,
}, nil
}

// Cancelations is a collection that holds cancel functions for all active tasks.
//
// Cancelations are safe for concurrent use by multiple goroutines.
@@ -378,9 +645,9 @@ type Broker interface {
Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string, isFailure bool) error
Archive(msg *TaskMessage, errMsg string) error
CheckAndEnqueue(qnames ...string) error
ForwardIfReady(qnames ...string) error
ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*TaskMessage, error)
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error

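The isFailure flag added to Broker.Retry is what surfaces as the IsFailure server option: errors the predicate rejects still trigger a retry but are not counted as failures. A hedged configuration sketch (the retry policy shown is illustrative, not a recommendation from this diff):

package main

import (
	"context"
	"errors"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		asynq.Config{
			// Illustrative policy: treat context cancellation as a non-failure,
			// so the task is retried without incrementing failure stats.
			IsFailure: func(err error) bool {
				return !errors.Is(err, context.Canceled)
			},
		},
	)
	_ = srv
}
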
@@ -6,7 +6,10 @@ package base

import (
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"sync"
"testing"
"time"
@@ -15,17 +18,36 @@ import (
"github.com/google/uuid"
)

func TestTaskKey(t *testing.T) {
id := uuid.NewString()

tests := []struct {
qname string
id string
want string
}{
{"default", id, fmt.Sprintf("asynq:{default}:t:%s", id)},
}

for _, tc := range tests {
got := TaskKey(tc.qname, tc.id)
if got != tc.want {
t.Errorf("TaskKey(%q, %s) = %q, want %q", tc.qname, tc.id, got, tc.want)
}
}
}

func TestQueueKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}"},
{"custom", "asynq:{custom}"},
{"default", "asynq:{default}:pending"},
{"custom", "asynq:{custom}:pending"},
}

for _, tc := range tests {
got := QueueKey(tc.qname)
got := PendingKey(tc.qname)
if got != tc.want {
t.Errorf("QueueKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
@@ -247,52 +269,69 @@ func TestSchedulerHistoryKey(t *testing.T) {
}
}

func toBytes(m map[string]interface{}) []byte {
b, err := json.Marshal(m)
if err != nil {
panic(err)
}
return b
}

func TestUniqueKey(t *testing.T) {
payload1 := toBytes(map[string]interface{}{"a": 123, "b": "hello", "c": true})
payload2 := toBytes(map[string]interface{}{"b": "hello", "c": true, "a": 123})
payload3 := toBytes(map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}})
payload4 := toBytes(map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour})

checksum := func(data []byte) string {
sum := md5.Sum(data)
return hex.EncodeToString(sum[:])
}
tests := []struct {
desc string
qname string
tasktype string
payload map[string]interface{}
payload []byte
want string
}{
{
"with primitive types",
"default",
"email:send",
map[string]interface{}{"a": 123, "b": "hello", "c": true},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
payload1,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload1)),
},
{
"with unsorted keys",
"default",
"email:send",
map[string]interface{}{"b": "hello", "c": true, "a": 123},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
payload2,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload2)),
},
{
"with composite types",
"default",
"email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}},
"asynq:{default}:unique:email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]",
payload3,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload3)),
},
{
"with complex types",
"default",
"email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour},
"asynq:{default}:unique:email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC",
payload4,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload4)),
},
{
"with nil payload",
"default",
"reindex",
nil,
"asynq:{default}:unique:reindex:nil",
"asynq:{default}:unique:reindex:",
},
}
}

||||
@@ -313,7 +352,7 @@ func TestMessageEncoding(t *testing.T) {
|
||||
{
|
||||
in: &TaskMessage{
|
||||
Type: "task1",
|
||||
Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
|
||||
Payload: toBytes(map[string]interface{}{"a": 1, "b": "hello!", "c": true}),
|
||||
ID: id,
|
||||
Queue: "default",
|
||||
Retry: 10,
|
||||
@@ -323,7 +362,7 @@ func TestMessageEncoding(t *testing.T) {
|
||||
},
|
||||
out: &TaskMessage{
|
||||
Type: "task1",
|
||||
Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
|
||||
Payload: toBytes(map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true}),
|
||||
ID: id,
|
||||
Queue: "default",
|
||||
Retry: 10,
|
||||
@@ -352,10 +391,149 @@ func TestMessageEncoding(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestServerInfoEncoding(t *testing.T) {
|
||||
tests := []struct {
|
||||
info ServerInfo
|
||||
}{
|
||||
{
|
||||
info: ServerInfo{
|
||||
Host: "127.0.0.1",
|
||||
PID: 9876,
|
||||
ServerID: "abc123",
|
||||
Concurrency: 10,
|
||||
Queues: map[string]int{"default": 1, "critical": 2},
|
||||
StrictPriority: false,
|
||||
Status: "active",
|
||||
Started: time.Now().Add(-3 * time.Hour),
|
||||
ActiveWorkerCount: 8,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
encoded, err := EncodeServerInfo(&tc.info)
|
||||
if err != nil {
|
||||
t.Errorf("EncodeServerInfo(info) returned error: %v", err)
|
||||
continue
|
||||
}
|
||||
decoded, err := DecodeServerInfo(encoded)
|
||||
if err != nil {
|
||||
t.Errorf("DecodeServerInfo(encoded) returned error: %v", err)
|
||||
continue
|
||||
}
|
||||
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
|
||||
t.Errorf("Decoded ServerInfo == %+v, want %+v;(-want,+got)\n%s",
|
||||
decoded, tc.info, diff)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestWorkerInfoEncoding(t *testing.T) {
|
||||
tests := []struct {
|
||||
info WorkerInfo
|
||||
}{
|
||||
{
|
||||
info: WorkerInfo{
|
||||
Host: "127.0.0.1",
|
||||
PID: 9876,
|
||||
ServerID: "abc123",
|
||||
ID: uuid.NewString(),
|
||||
Type: "taskA",
|
||||
Payload: toBytes(map[string]interface{}{"foo": "bar"}),
|
||||
Queue: "default",
|
||||
Started: time.Now().Add(-3 * time.Hour),
|
||||
Deadline: time.Now().Add(30 * time.Second),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
encoded, err := EncodeWorkerInfo(&tc.info)
|
||||
if err != nil {
|
||||
t.Errorf("EncodeWorkerInfo(info) returned error: %v", err)
|
||||
continue
|
||||
}
|
||||
decoded, err := DecodeWorkerInfo(encoded)
|
||||
if err != nil {
|
||||
t.Errorf("DecodeWorkerInfo(encoded) returned error: %v", err)
|
||||
continue
|
||||
}
|
||||
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
|
||||
t.Errorf("Decoded WorkerInfo == %+v, want %+v;(-want,+got)\n%s",
|
||||
decoded, tc.info, diff)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestSchedulerEntryEncoding(t *testing.T) {
	tests := []struct {
		entry SchedulerEntry
	}{
		{
			entry: SchedulerEntry{
				ID:      uuid.NewString(),
				Spec:    "* * * * *",
				Type:    "task_A",
				Payload: toBytes(map[string]interface{}{"foo": "bar"}),
				Opts:    []string{"Queue('email')"},
				Next:    time.Now().Add(30 * time.Second).UTC(),
				Prev:    time.Now().Add(-2 * time.Minute).UTC(),
			},
		},
	}

	for _, tc := range tests {
		encoded, err := EncodeSchedulerEntry(&tc.entry)
		if err != nil {
			t.Errorf("EncodeSchedulerEntry(entry) returned error: %v", err)
			continue
		}
		decoded, err := DecodeSchedulerEntry(encoded)
		if err != nil {
			t.Errorf("DecodeSchedulerEntry(encoded) returned error: %v", err)
			continue
		}
		if diff := cmp.Diff(&tc.entry, decoded); diff != "" {
			t.Errorf("Decoded SchedulerEntry == %+v, want %+v;(-want,+got)\n%s",
				decoded, tc.entry, diff)
		}
	}
}

func TestSchedulerEnqueueEventEncoding(t *testing.T) {
	tests := []struct {
		event SchedulerEnqueueEvent
	}{
		{
			event: SchedulerEnqueueEvent{
				TaskID:     uuid.NewString(),
				EnqueuedAt: time.Now().Add(-30 * time.Second).UTC(),
			},
		},
	}

	for _, tc := range tests {
		encoded, err := EncodeSchedulerEnqueueEvent(&tc.event)
		if err != nil {
			t.Errorf("EncodeSchedulerEnqueueEvent(event) returned error: %v", err)
			continue
		}
		decoded, err := DecodeSchedulerEnqueueEvent(encoded)
		if err != nil {
			t.Errorf("DecodeSchedulerEnqueueEvent(encoded) returned error: %v", err)
			continue
		}
		if diff := cmp.Diff(&tc.event, decoded); diff != "" {
			t.Errorf("Decoded SchedulerEnqueueEvent == %+v, want %+v;(-want,+got)\n%s",
				decoded, tc.event, diff)
		}
	}
}

// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
-	status := NewServerStatus(StatusIdle)
+	status := NewServerState()

	var wg sync.WaitGroup

@@ -369,7 +547,7 @@ func TestStatusConcurrentAccess(t *testing.T) {
	wg.Add(1)
	go func() {
		defer wg.Done()
-		status.Set(StatusStopped)
+		status.Set(StateClosed)
		_ = status.String()
	}()

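// Editor's note: the rename from NewServerStatus to NewServerState suggests a
// mutex-guarded state value; a minimal sketch of the shape this test
// exercises (an assumption; the real definition lives elsewhere in the
// package):
//
//	type ServerState struct {
//		mu    sync.Mutex
//		value StateValue
//	}
//
//	func (s *ServerState) Set(v StateValue) {
//		s.mu.Lock()
//		defer s.mu.Unlock()
//		s.value = v
//	}
//
//	func (s *ServerState) String() string {
//		s.mu.Lock()
//		defer s.mu.Unlock()
//		return s.value.String()
//	}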
285	internal/errors/errors.go	Normal file
@@ -0,0 +1,285 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

// Package errors defines the error type and functions used by
// asynq and its internal packages.
package errors

// Note: This package is inspired by a blog post about error handling in project Upspin
// https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html.

import (
	"errors"
	"fmt"
	"log"
	"runtime"
	"strings"
)

// Error is the type that implements the error interface.
// It contains a number of fields, each of different type.
// An Error value may leave some values unset.
type Error struct {
	Code Code
	Op   Op
	Err  error
}

func (e *Error) DebugString() string {
	var b strings.Builder
	if e.Op != "" {
		b.WriteString(string(e.Op))
	}
	if e.Code != Unspecified {
		if b.Len() > 0 {
			b.WriteString(": ")
		}
		b.WriteString(e.Code.String())
	}
	if e.Err != nil {
		if b.Len() > 0 {
			b.WriteString(": ")
		}
		b.WriteString(e.Err.Error())
	}
	return b.String()
}

func (e *Error) Error() string {
	var b strings.Builder
	if e.Code != Unspecified {
		b.WriteString(e.Code.String())
	}
	if e.Err != nil {
		if b.Len() > 0 {
			b.WriteString(": ")
		}
		b.WriteString(e.Err.Error())
	}
	return b.String()
}

func (e *Error) Unwrap() error {
	return e.Err
}

// Code defines the canonical error code.
type Code uint8

// List of canonical error codes.
const (
	Unspecified Code = iota
	NotFound
	FailedPrecondition
	Internal
	AlreadyExists
	Unknown
	// Note: If you add a new value here, make sure to update String method.
)

func (c Code) String() string {
	switch c {
	case Unspecified:
		return "ERROR_CODE_UNSPECIFIED"
	case NotFound:
		return "NOT_FOUND"
	case FailedPrecondition:
		return "FAILED_PRECONDITION"
	case Internal:
		return "INTERNAL_ERROR"
	case AlreadyExists:
		return "ALREADY_EXISTS"
	case Unknown:
		return "UNKNOWN"
	}
	panic(fmt.Sprintf("unknown error code %d", c))
}

// Op describes an operation, usually as the package and method,
// such as "rdb.Enqueue".
type Op string

// E builds an error value from its arguments.
// There must be at least one argument or E panics.
// The type of each argument determines its meaning.
// If more than one argument of a given type is presented,
// only the last one is recorded.
//
// The types are:
//	errors.Op
//		The operation being performed, usually the method
//		being invoked (Get, Put, etc.).
//	errors.Code
//		The canonical error code, such as NOT_FOUND.
//	string
//		Treated as an error message and assigned to the
//		Err field after a call to errors.New.
//	error
//		The underlying error that triggered this one.
//
// If the error is printed, only those items that have been
// set to non-zero values will appear in the result.
func E(args ...interface{}) error {
	if len(args) == 0 {
		panic("call to errors.E with no arguments")
	}
	e := &Error{}
	for _, arg := range args {
		switch arg := arg.(type) {
		case Op:
			e.Op = arg
		case Code:
			e.Code = arg
		case error:
			e.Err = arg
		case string:
			e.Err = errors.New(arg)
		default:
			_, file, line, _ := runtime.Caller(1)
			log.Printf("errors.E: bad call from %s:%d: %v", file, line, args)
			return fmt.Errorf("unknown type %T, value %v in error call", arg, arg)
		}
	}
	return e
}

// CanonicalCode returns the canonical code of the given error if one is present.
// Otherwise it returns Unspecified.
func CanonicalCode(err error) Code {
	if err == nil {
		return Unspecified
	}
	e, ok := err.(*Error)
	if !ok {
		return Unspecified
	}
	if e.Code == Unspecified {
		return CanonicalCode(e.Err)
	}
	return e.Code
}

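// Editor's note: a short usage sketch (not part of the diff) tying together
// E, the two string forms, and CanonicalCode; everything referenced here is
// defined above:
//
//	err := E(Op("rdb.Dequeue"), NotFound, "no matching task")
//	fmt.Println(err)                            // NOT_FOUND: no matching task
//	fmt.Println(err.(*Error).DebugString())     // rdb.Dequeue: NOT_FOUND: no matching task
//	fmt.Println(CanonicalCode(err) == NotFound) // true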
/******************************************
    Domain Specific Error Types & Values
*******************************************/

var (
	// ErrNoProcessableTask indicates that there are no tasks ready to be processed.
	ErrNoProcessableTask = errors.New("no tasks are ready for processing")

	// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
	ErrDuplicateTask = errors.New("task already exists")
)

// TaskNotFoundError indicates that a task with the given ID does not exist
// in the given queue.
type TaskNotFoundError struct {
	Queue string // queue name
	ID    string // task id
}

func (e *TaskNotFoundError) Error() string {
	return fmt.Sprintf("cannot find task with id=%s in queue %q", e.ID, e.Queue)
}

// IsTaskNotFound reports whether any error in err's chain is of type TaskNotFoundError.
func IsTaskNotFound(err error) bool {
	var target *TaskNotFoundError
	return As(err, &target)
}

// QueueNotFoundError indicates that a queue with the given name does not exist.
type QueueNotFoundError struct {
	Queue string // queue name
}

func (e *QueueNotFoundError) Error() string {
	return fmt.Sprintf("queue %q does not exist", e.Queue)
}

// IsQueueNotFound reports whether any error in err's chain is of type QueueNotFoundError.
func IsQueueNotFound(err error) bool {
	var target *QueueNotFoundError
	return As(err, &target)
}

// QueueNotEmptyError indicates that the given queue is not empty.
type QueueNotEmptyError struct {
	Queue string // queue name
}

func (e *QueueNotEmptyError) Error() string {
	return fmt.Sprintf("queue %q is not empty", e.Queue)
}

// IsQueueNotEmpty reports whether any error in err's chain is of type QueueNotEmptyError.
func IsQueueNotEmpty(err error) bool {
	var target *QueueNotEmptyError
	return As(err, &target)
}

// TaskAlreadyArchivedError indicates that the task in question is already archived.
type TaskAlreadyArchivedError struct {
	Queue string // queue name
	ID    string // task id
}

func (e *TaskAlreadyArchivedError) Error() string {
	return fmt.Sprintf("task is already archived: id=%s, queue=%s", e.ID, e.Queue)
}

// IsTaskAlreadyArchived reports whether any error in err's chain is of type TaskAlreadyArchivedError.
func IsTaskAlreadyArchived(err error) bool {
	var target *TaskAlreadyArchivedError
	return As(err, &target)
}

// RedisCommandError indicates that the given redis command returned an error.
type RedisCommandError struct {
	Command string // redis command (e.g. LRANGE, ZADD, etc)
	Err     error  // underlying error
}

func (e *RedisCommandError) Error() string {
	return fmt.Sprintf("redis command error: %s failed: %v", strings.ToUpper(e.Command), e.Err)
}

func (e *RedisCommandError) Unwrap() error { return e.Err }

// IsRedisCommandError reports whether any error in err's chain is of type RedisCommandError.
func IsRedisCommandError(err error) bool {
	var target *RedisCommandError
	return As(err, &target)
}

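// Editor's note: a hedged caller-side sketch of how these typed predicates
// are meant to be used instead of matching on error text (the calling code
// here is hypothetical; only the predicates are defined above):
//
//	if err := deleteTask("default", id); err != nil {
//		switch {
//		case IsTaskNotFound(err):
//			// already gone; treat as success
//		case IsQueueNotFound(err):
//			// configuration problem; surface it
//		default:
//			return err
//		}
//	}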
/*************************************************
    Standard Library errors package functions
*************************************************/

// New returns an error that formats as the given text.
// Each call to New returns a distinct error value even if the text is identical.
//
// This function is the errors.New function from the standard library (https://golang.org/pkg/errors/#New).
// It is exported from this package for import convenience.
func New(text string) error { return errors.New(text) }

// Is reports whether any error in err's chain matches target.
//
// This function is the errors.Is function from the standard library (https://golang.org/pkg/errors/#Is).
// It is exported from this package for import convenience.
func Is(err, target error) bool { return errors.Is(err, target) }

// As finds the first error in err's chain that matches target, and if so, sets target to that error value and returns true.
// Otherwise, it returns false.
//
// This function is the errors.As function from the standard library (https://golang.org/pkg/errors/#As).
// It is exported from this package for import convenience.
func As(err error, target interface{}) bool { return errors.As(err, target) }

// Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error.
// Otherwise, Unwrap returns nil.
//
// This function is the errors.Unwrap function from the standard library (https://golang.org/pkg/errors/#Unwrap).
// It is exported from this package for import convenience.
func Unwrap(err error) error { return errors.Unwrap(err) }
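// Editor's note: a brief sketch (assumption, not from the diff) of why the
// re-exports above exist: callers can import this one package for both the
// domain-specific helpers and the standard errors predicates:
//
//	import "github.com/hibiken/asynq/internal/errors"
//
//	if errors.Is(err, errors.ErrDuplicateTask) {
//		// the uniqueness lock is held by another task; skip enqueueing
//	}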
176	internal/errors/errors_test.go	Normal file
@@ -0,0 +1,176 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package errors

import "testing"

func TestErrorDebugString(t *testing.T) {
	// DebugString should include Op since it's meant to be used by
	// maintainers/contributors of the asynq package.
	tests := []struct {
		desc string
		err  error
		want string
	}{
		{
			desc: "With Op, Code, and string",
			err:  E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
			want: "rdb.DeleteTask: NOT_FOUND: cannot find task with id=123",
		},
		{
			desc: "With Op, Code and error",
			err:  E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
			want: `rdb.DeleteTask: NOT_FOUND: cannot find task with id=123 in queue "default"`,
		},
	}

	for _, tc := range tests {
		if got := tc.err.(*Error).DebugString(); got != tc.want {
			t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
		}
	}
}

func TestErrorString(t *testing.T) {
	// The Error method should omit Op since op is an internal detail
	// and we don't want to provide it to users of the package.
	tests := []struct {
		desc string
		err  error
		want string
	}{
		{
			desc: "With Op, Code, and string",
			err:  E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
			want: "NOT_FOUND: cannot find task with id=123",
		},
		{
			desc: "With Op, Code and error",
			err:  E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
			want: `NOT_FOUND: cannot find task with id=123 in queue "default"`,
		},
	}

	for _, tc := range tests {
		if got := tc.err.Error(); got != tc.want {
			t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
		}
	}
}

func TestErrorIs(t *testing.T) {
	var ErrCustom = New("custom sentinel error")

	tests := []struct {
		desc   string
		err    error
		target error
		want   bool
	}{
		{
			desc:   "should unwrap one level",
			err:    E(Op("rdb.DeleteTask"), ErrCustom),
			target: ErrCustom,
			want:   true,
		},
	}

	for _, tc := range tests {
		if got := Is(tc.err, tc.target); got != tc.want {
			t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
		}
	}
}

func TestErrorAs(t *testing.T) {
	tests := []struct {
		desc   string
		err    error
		target interface{}
		want   bool
	}{
		{
			desc:   "should unwrap one level",
			err:    E(Op("rdb.DeleteTask"), NotFound, &QueueNotFoundError{Queue: "email"}),
			target: &QueueNotFoundError{},
			want:   true,
		},
	}

	for _, tc := range tests {
		if got := As(tc.err, &tc.target); got != tc.want {
			t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
		}
	}
}

func TestErrorPredicates(t *testing.T) {
	tests := []struct {
		desc string
		fn   func(err error) bool
		err  error
		want bool
	}{
		{
			desc: "IsTaskNotFound should detect presence of TaskNotFoundError in err's chain",
			fn:   IsTaskNotFound,
			err:  E(Op("rdb.ArchiveTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "9876"}),
			want: true,
		},
		{
			desc: "IsTaskNotFound should detect absence of TaskNotFoundError in err's chain",
			fn:   IsTaskNotFound,
			err:  E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
			want: false,
		},
		{
			desc: "IsQueueNotFound should detect presence of QueueNotFoundError in err's chain",
			fn:   IsQueueNotFound,
			err:  E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
			want: true,
		},
	}

	for _, tc := range tests {
		if got := tc.fn(tc.err); got != tc.want {
			t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
		}
	}
}

func TestCanonicalCode(t *testing.T) {
	tests := []struct {
		desc string
		err  error
		want Code
	}{
		{
			desc: "without nesting",
			err:  E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
			want: NotFound,
		},
		{
			desc: "with nesting",
			err:  E(FailedPrecondition, E(NotFound)),
			want: FailedPrecondition,
		},
		{
			desc: "returns Unspecified if err is not *Error",
			err:  New("some other error"),
			want: Unspecified,
		},
		{
			desc: "returns Unspecified if err is nil",
			err:  nil,
			want: Unspecified,
		},
	}

	for _, tc := range tests {
		if got := CanonicalCode(tc.err); got != tc.want {
			t.Errorf("%s: got=%s, want=%s", tc.desc, got, tc.want)
		}
	}
}
812	internal/proto/asynq.pb.go	Normal file
@@ -0,0 +1,812 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.25.0
// 	protoc        v3.14.0
// source: asynq.proto

package proto

import (
	proto "github.com/golang/protobuf/proto"
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	timestamppb "google.golang.org/protobuf/types/known/timestamppb"
	reflect "reflect"
	sync "sync"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)

// This is a compile-time assertion that a sufficiently up-to-date version
// of the legacy proto package is being used.
const _ = proto.ProtoPackageIsVersion4

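// Editor's note: the regeneration command is not part of the diff; with the
// tool versions pinned in the header above, something along these lines
// should reproduce this file (assumed, not verified here):
//
//	protoc --go_out=. --go_opt=paths=source_relative asynq.proto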
// TaskMessage is the internal representation of a task with additional
|
||||
// metadata fields.
|
||||
type TaskMessage struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// Type indicates the kind of the task to be performed.
|
||||
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
|
||||
// Payload holds data needed to process the task.
|
||||
Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"`
|
||||
// Unique identifier for the task.
|
||||
Id string `protobuf:"bytes,3,opt,name=id,proto3" json:"id,omitempty"`
|
||||
// Name of the queue to which this task belongs.
|
||||
Queue string `protobuf:"bytes,4,opt,name=queue,proto3" json:"queue,omitempty"`
|
||||
// Max number of retries for this task.
|
||||
Retry int32 `protobuf:"varint,5,opt,name=retry,proto3" json:"retry,omitempty"`
|
||||
// Number of times this task has been retried so far.
|
||||
Retried int32 `protobuf:"varint,6,opt,name=retried,proto3" json:"retried,omitempty"`
|
||||
// Error message from the last failure.
|
||||
ErrorMsg string `protobuf:"bytes,7,opt,name=error_msg,json=errorMsg,proto3" json:"error_msg,omitempty"`
|
||||
// Time of last failure in Unix time,
|
||||
// the number of seconds elapsed since January 1, 1970 UTC.
|
||||
// Use zero to indicate no last failure.
|
||||
LastFailedAt int64 `protobuf:"varint,11,opt,name=last_failed_at,json=lastFailedAt,proto3" json:"last_failed_at,omitempty"`
|
||||
// Timeout specifies timeout in seconds.
|
||||
// Use zero to indicate no timeout.
|
||||
Timeout int64 `protobuf:"varint,8,opt,name=timeout,proto3" json:"timeout,omitempty"`
|
||||
// Deadline specifies the deadline for the task in Unix time,
|
||||
// the number of seconds elapsed since January 1, 1970 UTC.
|
||||
// Use zero to indicate no deadline.
|
||||
Deadline int64 `protobuf:"varint,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
|
||||
// UniqueKey holds the redis key used for uniqueness lock for this task.
|
||||
// Empty string indicates that no uniqueness lock was used.
|
||||
UniqueKey string `protobuf:"bytes,10,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
|
||||
}
|
||||
|
||||
func (x *TaskMessage) Reset() {
|
||||
*x = TaskMessage{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_asynq_proto_msgTypes[0]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *TaskMessage) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*TaskMessage) ProtoMessage() {}
|
||||
|
||||
func (x *TaskMessage) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_asynq_proto_msgTypes[0]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use TaskMessage.ProtoReflect.Descriptor instead.
|
||||
func (*TaskMessage) Descriptor() ([]byte, []int) {
|
||||
return file_asynq_proto_rawDescGZIP(), []int{0}
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetType() string {
|
||||
if x != nil {
|
||||
return x.Type
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetPayload() []byte {
|
||||
if x != nil {
|
||||
return x.Payload
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetId() string {
|
||||
if x != nil {
|
||||
return x.Id
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetQueue() string {
|
||||
if x != nil {
|
||||
return x.Queue
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetRetry() int32 {
|
||||
if x != nil {
|
||||
return x.Retry
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetRetried() int32 {
|
||||
if x != nil {
|
||||
return x.Retried
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetErrorMsg() string {
|
||||
if x != nil {
|
||||
return x.ErrorMsg
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetLastFailedAt() int64 {
|
||||
if x != nil {
|
||||
return x.LastFailedAt
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetTimeout() int64 {
|
||||
if x != nil {
|
||||
return x.Timeout
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetDeadline() int64 {
|
||||
if x != nil {
|
||||
return x.Deadline
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *TaskMessage) GetUniqueKey() string {
|
||||
if x != nil {
|
||||
return x.UniqueKey
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// ServerInfo holds information about a running server.
|
||||
type ServerInfo struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// Host machine the server is running on.
|
||||
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
|
||||
// PID of the server process.
|
||||
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
|
||||
// Unique identifier for this server.
|
||||
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
|
||||
// Maximum number of concurrent workers this server will use.
|
||||
Concurrency int32 `protobuf:"varint,4,opt,name=concurrency,proto3" json:"concurrency,omitempty"`
|
||||
// List of queue names with their priorities.
|
||||
// The server will consume tasks from the queues and prioritize
|
||||
// queues with higher priority numbers.
|
||||
Queues map[string]int32 `protobuf:"bytes,5,rep,name=queues,proto3" json:"queues,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
|
||||
// If set, the server will always consume tasks from a queue with higher
|
||||
// priority.
|
||||
StrictPriority bool `protobuf:"varint,6,opt,name=strict_priority,json=strictPriority,proto3" json:"strict_priority,omitempty"`
|
||||
// Status indicates the status of the server.
|
||||
Status string `protobuf:"bytes,7,opt,name=status,proto3" json:"status,omitempty"`
|
||||
// Time this server was started.
|
||||
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
|
||||
// Number of workers currently processing tasks.
|
||||
ActiveWorkerCount int32 `protobuf:"varint,9,opt,name=active_worker_count,json=activeWorkerCount,proto3" json:"active_worker_count,omitempty"`
|
||||
}
|
||||
|
||||
func (x *ServerInfo) Reset() {
|
||||
*x = ServerInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_asynq_proto_msgTypes[1]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *ServerInfo) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*ServerInfo) ProtoMessage() {}
|
||||
|
||||
func (x *ServerInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_asynq_proto_msgTypes[1]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use ServerInfo.ProtoReflect.Descriptor instead.
|
||||
func (*ServerInfo) Descriptor() ([]byte, []int) {
|
||||
return file_asynq_proto_rawDescGZIP(), []int{1}
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetHost() string {
|
||||
if x != nil {
|
||||
return x.Host
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetPid() int32 {
|
||||
if x != nil {
|
||||
return x.Pid
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetServerId() string {
|
||||
if x != nil {
|
||||
return x.ServerId
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetConcurrency() int32 {
|
||||
if x != nil {
|
||||
return x.Concurrency
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetQueues() map[string]int32 {
|
||||
if x != nil {
|
||||
return x.Queues
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetStrictPriority() bool {
|
||||
if x != nil {
|
||||
return x.StrictPriority
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetStatus() string {
|
||||
if x != nil {
|
||||
return x.Status
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetStartTime() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.StartTime
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *ServerInfo) GetActiveWorkerCount() int32 {
|
||||
if x != nil {
|
||||
return x.ActiveWorkerCount
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// WorkerInfo holds information about a running worker.
|
||||
type WorkerInfo struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// Host machine this worker is running on.
|
||||
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
|
||||
// PID of the process in which this worker is running.
|
||||
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
|
||||
// ID of the server in which this worker is running.
|
||||
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
|
||||
// ID of the task this worker is processing.
|
||||
TaskId string `protobuf:"bytes,4,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
|
||||
// Type of the task this worker is processing.
|
||||
TaskType string `protobuf:"bytes,5,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
|
||||
// Payload of the task this worker is processing.
|
||||
TaskPayload []byte `protobuf:"bytes,6,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
|
||||
// Name of the queue to which the task the worker is processing belongs.
|
||||
Queue string `protobuf:"bytes,7,opt,name=queue,proto3" json:"queue,omitempty"`
|
||||
// Time this worker started processing the task.
|
||||
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
|
||||
// Deadline by which the worker needs to complete processing
|
||||
// the task. If the worker exceeds the deadline, the task will fail.
|
||||
Deadline *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) Reset() {
|
||||
*x = WorkerInfo{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_asynq_proto_msgTypes[2]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*WorkerInfo) ProtoMessage() {}
|
||||
|
||||
func (x *WorkerInfo) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_asynq_proto_msgTypes[2]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use WorkerInfo.ProtoReflect.Descriptor instead.
|
||||
func (*WorkerInfo) Descriptor() ([]byte, []int) {
|
||||
return file_asynq_proto_rawDescGZIP(), []int{2}
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetHost() string {
|
||||
if x != nil {
|
||||
return x.Host
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetPid() int32 {
|
||||
if x != nil {
|
||||
return x.Pid
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetServerId() string {
|
||||
if x != nil {
|
||||
return x.ServerId
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetTaskId() string {
|
||||
if x != nil {
|
||||
return x.TaskId
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetTaskType() string {
|
||||
if x != nil {
|
||||
return x.TaskType
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetTaskPayload() []byte {
|
||||
if x != nil {
|
||||
return x.TaskPayload
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetQueue() string {
|
||||
if x != nil {
|
||||
return x.Queue
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetStartTime() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.StartTime
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *WorkerInfo) GetDeadline() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.Deadline
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// SchedulerEntry holds information about a periodic task registered
|
||||
// with a scheduler.
|
||||
type SchedulerEntry struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// Identifier of the scheduler entry.
|
||||
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
|
||||
// Periodic schedule spec of the entry.
|
||||
Spec string `protobuf:"bytes,2,opt,name=spec,proto3" json:"spec,omitempty"`
|
||||
// Task type of the periodic task.
|
||||
TaskType string `protobuf:"bytes,3,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
|
||||
// Task payload of the periodic task.
|
||||
TaskPayload []byte `protobuf:"bytes,4,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
|
||||
// Options used to enqueue the periodic task.
|
||||
EnqueueOptions []string `protobuf:"bytes,5,rep,name=enqueue_options,json=enqueueOptions,proto3" json:"enqueue_options,omitempty"`
|
||||
// Next time the task will be enqueued.
|
||||
NextEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=next_enqueue_time,json=nextEnqueueTime,proto3" json:"next_enqueue_time,omitempty"`
|
||||
// Last time the task was enqueued.
|
||||
// Zero time if task was never enqueued.
|
||||
PrevEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=prev_enqueue_time,json=prevEnqueueTime,proto3" json:"prev_enqueue_time,omitempty"`
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) Reset() {
|
||||
*x = SchedulerEntry{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_asynq_proto_msgTypes[3]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*SchedulerEntry) ProtoMessage() {}
|
||||
|
||||
func (x *SchedulerEntry) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_asynq_proto_msgTypes[3]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use SchedulerEntry.ProtoReflect.Descriptor instead.
|
||||
func (*SchedulerEntry) Descriptor() ([]byte, []int) {
|
||||
return file_asynq_proto_rawDescGZIP(), []int{3}
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetId() string {
|
||||
if x != nil {
|
||||
return x.Id
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetSpec() string {
|
||||
if x != nil {
|
||||
return x.Spec
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetTaskType() string {
|
||||
if x != nil {
|
||||
return x.TaskType
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetTaskPayload() []byte {
|
||||
if x != nil {
|
||||
return x.TaskPayload
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetEnqueueOptions() []string {
|
||||
if x != nil {
|
||||
return x.EnqueueOptions
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetNextEnqueueTime() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.NextEnqueueTime
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *SchedulerEntry) GetPrevEnqueueTime() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.PrevEnqueueTime
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// SchedulerEnqueueEvent holds information about an enqueue event
|
||||
// by a scheduler.
|
||||
type SchedulerEnqueueEvent struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
// ID of the task that was enqueued.
|
||||
TaskId string `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
|
||||
// Time the task was enqueued.
|
||||
EnqueueTime *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=enqueue_time,json=enqueueTime,proto3" json:"enqueue_time,omitempty"`
|
||||
}
|
||||
|
||||
func (x *SchedulerEnqueueEvent) Reset() {
|
||||
*x = SchedulerEnqueueEvent{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_asynq_proto_msgTypes[4]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *SchedulerEnqueueEvent) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*SchedulerEnqueueEvent) ProtoMessage() {}
|
||||
|
||||
func (x *SchedulerEnqueueEvent) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_asynq_proto_msgTypes[4]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use SchedulerEnqueueEvent.ProtoReflect.Descriptor instead.
|
||||
func (*SchedulerEnqueueEvent) Descriptor() ([]byte, []int) {
|
||||
return file_asynq_proto_rawDescGZIP(), []int{4}
|
||||
}
|
||||
|
||||
func (x *SchedulerEnqueueEvent) GetTaskId() string {
|
||||
if x != nil {
|
||||
return x.TaskId
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (x *SchedulerEnqueueEvent) GetEnqueueTime() *timestamppb.Timestamp {
|
||||
if x != nil {
|
||||
return x.EnqueueTime
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var File_asynq_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_asynq_proto_rawDesc = []byte{
|
||||
0x0a, 0x0b, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61,
|
||||
0x73, 0x79, 0x6e, 0x71, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f,
|
||||
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e,
|
||||
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xa9, 0x02, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x4d, 0x65,
|
||||
0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20,
|
||||
0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79,
|
||||
0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c,
|
||||
0x6f, 0x61, 0x64, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
|
||||
0x02, 0x69, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01,
|
||||
0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x74,
|
||||
0x72, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x72, 0x65, 0x74, 0x72, 0x79, 0x12,
|
||||
0x18, 0x0a, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x05,
|
||||
0x52, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x65, 0x72, 0x72,
|
||||
0x6f, 0x72, 0x5f, 0x6d, 0x73, 0x67, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x65, 0x72,
|
||||
0x72, 0x6f, 0x72, 0x4d, 0x73, 0x67, 0x12, 0x24, 0x0a, 0x0e, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x66,
|
||||
0x61, 0x69, 0x6c, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c,
|
||||
0x6c, 0x61, 0x73, 0x74, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64, 0x41, 0x74, 0x12, 0x18, 0x0a, 0x07,
|
||||
0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x08, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x74,
|
||||
0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
|
||||
0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
|
||||
0x6e, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x5f, 0x6b, 0x65, 0x79,
|
||||
0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x4b, 0x65,
|
||||
0x79, 0x22, 0x8f, 0x03, 0x0a, 0x0a, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f,
|
||||
0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
|
||||
0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28,
|
||||
0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72,
|
||||
0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65,
|
||||
0x72, 0x49, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e,
|
||||
0x63, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72,
|
||||
0x72, 0x65, 0x6e, 0x63, 0x79, 0x12, 0x35, 0x0a, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x18,
|
||||
0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x53, 0x65,
|
||||
0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73, 0x45,
|
||||
0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x0f,
|
||||
0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x5f, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18,
|
||||
0x06, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0e, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x50, 0x72, 0x69,
|
||||
0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
|
||||
0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x39, 0x0a,
|
||||
0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28,
|
||||
0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73,
|
||||
0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x13, 0x61, 0x63, 0x74, 0x69,
|
||||
0x76, 0x65, 0x5f, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18,
|
||||
0x09, 0x20, 0x01, 0x28, 0x05, 0x52, 0x11, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x57, 0x6f, 0x72,
|
||||
0x6b, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x1a, 0x39, 0x0a, 0x0b, 0x51, 0x75, 0x65, 0x75,
|
||||
0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01,
|
||||
0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c,
|
||||
0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a,
|
||||
0x02, 0x38, 0x01, 0x22, 0xb1, 0x02, 0x0a, 0x0a, 0x57, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x49, 0x6e,
|
||||
0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
|
||||
0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76,
|
||||
0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72,
|
||||
0x76, 0x65, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64,
|
||||
0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x1b,
|
||||
0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28,
|
||||
0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74,
|
||||
0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28,
|
||||
0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14,
|
||||
0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x71,
|
||||
0x75, 0x65, 0x75, 0x65, 0x12, 0x39, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69,
|
||||
0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
|
||||
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
|
||||
0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12,
|
||||
0x36, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28,
|
||||
0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x64,
|
||||
0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x22, 0xad, 0x02, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65,
|
||||
0x64, 0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x70,
|
||||
0x65, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x1b,
|
||||
0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28,
|
||||
0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74,
|
||||
0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28,
|
||||
0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x27,
|
||||
0x0a, 0x0f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e,
|
||||
0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0e, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65,
|
||||
0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x46, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x5f,
|
||||
0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01,
|
||||
0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74,
|
||||
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f,
|
||||
0x6e, 0x65, 0x78, 0x74, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x12,
|
||||
0x46, 0x0a, 0x11, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f,
|
||||
0x74, 0x69, 0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
|
||||
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d,
|
||||
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x70, 0x72, 0x65, 0x76, 0x45, 0x6e, 0x71, 0x75,
|
||||
0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x6f, 0x0a, 0x15, 0x53, 0x63, 0x68, 0x65, 0x64,
|
||||
0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x45, 0x76, 0x65, 0x6e, 0x74,
|
||||
0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
|
||||
0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x3d, 0x0a, 0x0c, 0x65, 0x6e, 0x71,
|
||||
0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
|
||||
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
|
||||
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0b, 0x65, 0x6e, 0x71,
|
||||
0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68,
|
||||
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x69, 0x62, 0x69, 0x6b, 0x65, 0x6e, 0x2f, 0x61,
|
||||
0x73, 0x79, 0x6e, 0x71, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x72,
|
||||
0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
file_asynq_proto_rawDescOnce sync.Once
|
||||
file_asynq_proto_rawDescData = file_asynq_proto_rawDesc
|
||||
)
|
||||
|
||||
func file_asynq_proto_rawDescGZIP() []byte {
|
||||
file_asynq_proto_rawDescOnce.Do(func() {
|
||||
file_asynq_proto_rawDescData = protoimpl.X.CompressGZIP(file_asynq_proto_rawDescData)
|
||||
})
|
||||
return file_asynq_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_asynq_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
|
||||
var file_asynq_proto_goTypes = []interface{}{
|
||||
(*TaskMessage)(nil), // 0: asynq.TaskMessage
|
||||
(*ServerInfo)(nil), // 1: asynq.ServerInfo
|
||||
(*WorkerInfo)(nil), // 2: asynq.WorkerInfo
|
||||
(*SchedulerEntry)(nil), // 3: asynq.SchedulerEntry
|
||||
(*SchedulerEnqueueEvent)(nil), // 4: asynq.SchedulerEnqueueEvent
|
||||
nil, // 5: asynq.ServerInfo.QueuesEntry
|
||||
(*timestamppb.Timestamp)(nil), // 6: google.protobuf.Timestamp
|
||||
}
|
||||
var file_asynq_proto_depIdxs = []int32{
|
||||
5, // 0: asynq.ServerInfo.queues:type_name -> asynq.ServerInfo.QueuesEntry
|
||||
6, // 1: asynq.ServerInfo.start_time:type_name -> google.protobuf.Timestamp
|
||||
6, // 2: asynq.WorkerInfo.start_time:type_name -> google.protobuf.Timestamp
|
||||
6, // 3: asynq.WorkerInfo.deadline:type_name -> google.protobuf.Timestamp
|
||||
6, // 4: asynq.SchedulerEntry.next_enqueue_time:type_name -> google.protobuf.Timestamp
|
||||
6, // 5: asynq.SchedulerEntry.prev_enqueue_time:type_name -> google.protobuf.Timestamp
|
||||
6, // 6: asynq.SchedulerEnqueueEvent.enqueue_time:type_name -> google.protobuf.Timestamp
|
||||
7, // [7:7] is the sub-list for method output_type
|
||||
7, // [7:7] is the sub-list for method input_type
|
||||
7, // [7:7] is the sub-list for extension type_name
|
||||
7, // [7:7] is the sub-list for extension extendee
|
||||
0, // [0:7] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_asynq_proto_init() }
|
||||
func file_asynq_proto_init() {
|
||||
if File_asynq_proto != nil {
|
||||
return
|
||||
}
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_asynq_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*TaskMessage); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_asynq_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*ServerInfo); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_asynq_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*WorkerInfo); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_asynq_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*SchedulerEntry); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_asynq_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*SchedulerEnqueueEvent); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
File: protoimpl.DescBuilder{
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_asynq_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 6,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
GoTypes: file_asynq_proto_goTypes,
|
||||
DependencyIndexes: file_asynq_proto_depIdxs,
|
||||
MessageInfos: file_asynq_proto_msgTypes,
|
||||
}.Build()
|
||||
File_asynq_proto = out.File
|
||||
file_asynq_proto_rawDesc = nil
|
||||
file_asynq_proto_goTypes = nil
|
||||
file_asynq_proto_depIdxs = nil
|
||||
}
|
154	internal/proto/asynq.proto	Normal file
@@ -0,0 +1,154 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

syntax = "proto3";
package asynq;

import "google/protobuf/timestamp.proto";

option go_package = "github.com/hibiken/asynq/internal/proto";

// TaskMessage is the internal representation of a task with additional
// metadata fields.
message TaskMessage {
  // Type indicates the kind of the task to be performed.
  string type = 1;

  // Payload holds data needed to process the task.
  bytes payload = 2;

  // Unique identifier for the task.
  string id = 3;

  // Name of the queue to which this task belongs.
  string queue = 4;

  // Max number of retries for this task.
  int32 retry = 5;

  // Number of times this task has been retried so far.
  int32 retried = 6;

  // Error message from the last failure.
  string error_msg = 7;

  // Time of last failure in Unix time,
  // the number of seconds elapsed since January 1, 1970 UTC.
  // Use zero to indicate no last failure.
  int64 last_failed_at = 11;

  // Timeout specifies timeout in seconds.
  // Use zero to indicate no timeout.
  int64 timeout = 8;

  // Deadline specifies the deadline for the task in Unix time,
  // the number of seconds elapsed since January 1, 1970 UTC.
  // Use zero to indicate no deadline.
  int64 deadline = 9;

  // UniqueKey holds the redis key used for uniqueness lock for this task.
  // Empty string indicates that no uniqueness lock was used.
  string unique_key = 10;
};

// ServerInfo holds information about a running server.
message ServerInfo {
  // Host machine the server is running on.
  string host = 1;

  // PID of the server process.
  int32 pid = 2;

  // Unique identifier for this server.
  string server_id = 3;

  // Maximum number of concurrent workers this server will use.
  int32 concurrency = 4;

  // List of queue names with their priorities.
  // The server will consume tasks from the queues and prioritize
  // queues with higher priority numbers.
  map<string, int32> queues = 5;

  // If set, the server will always consume tasks from a queue with higher
  // priority.
  bool strict_priority = 6;

  // Status indicates the status of the server.
  string status = 7;

  // Time this server was started.
  google.protobuf.Timestamp start_time = 8;

  // Number of workers currently processing tasks.
  int32 active_worker_count = 9;
};

// WorkerInfo holds information about a running worker.
message WorkerInfo {
  // Host machine this worker is running on.
  string host = 1;

  // PID of the process in which this worker is running.
  int32 pid = 2;

  // ID of the server in which this worker is running.
  string server_id = 3;

  // ID of the task this worker is processing.
  string task_id = 4;

  // Type of the task this worker is processing.
  string task_type = 5;

  // Payload of the task this worker is processing.
  bytes task_payload = 6;

  // Name of the queue to which the task the worker is processing belongs.
  string queue = 7;

  // Time this worker started processing the task.
  google.protobuf.Timestamp start_time = 8;

  // Deadline by which the worker needs to complete processing
  // the task. If the worker exceeds the deadline, the task will fail.
  google.protobuf.Timestamp deadline = 9;
};

// SchedulerEntry holds information about a periodic task registered
// with a scheduler.
message SchedulerEntry {
  // Identifier of the scheduler entry.
  string id = 1;

  // Periodic schedule spec of the entry.
  string spec = 2;

  // Task type of the periodic task.
  string task_type = 3;

  // Task payload of the periodic task.
  bytes task_payload = 4;

  // Options used to enqueue the periodic task.
  repeated string enqueue_options = 5;

  // Next time the task will be enqueued.
  google.protobuf.Timestamp next_enqueue_time = 6;

  // Last time the task was enqueued.
  // Zero time if task was never enqueued.
  google.protobuf.Timestamp prev_enqueue_time = 7;
};

// SchedulerEnqueueEvent holds information about an enqueue event
// by a scheduler.
message SchedulerEnqueueEvent {
  // ID of the task that was enqueued.
  string task_id = 1;

  // Time the task was enqueued.
  google.protobuf.Timestamp enqueue_time = 2;
};
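// Editor's note: a minimal round-trip sketch (an assumption, not part of the
// diff) of how the encode/decode helpers tested earlier presumably use these
// generated types; only proto.Marshal/proto.Unmarshal from
// google.golang.org/protobuf are assumed:
//
//	import (
//		pb "github.com/hibiken/asynq/internal/proto"
//		"google.golang.org/protobuf/proto"
//	)
//
//	func roundTrip(msg *pb.TaskMessage) (*pb.TaskMessage, error) {
//		data, err := proto.Marshal(msg) // wire format stored in Redis
//		if err != nil {
//			return nil, err
//		}
//		var decoded pb.TaskMessage
//		if err := proto.Unmarshal(data, &decoded); err != nil {
//			return nil, err
//		}
//		return &decoded, nil
//	}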
@@ -184,7 +184,7 @@ func BenchmarkRetry(b *testing.B) {
		asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName)
		b.StartTimer()

-		if err := r.Retry(msgs[0], time.Now().Add(1*time.Minute), "error"); err != nil {
+		if err := r.Retry(msgs[0], time.Now().Add(1*time.Minute), "error", true /*isFailure*/); err != nil {
			b.Fatalf("Retry failed: %v", err)
		}
	}
@@ -259,8 +259,8 @@ func BenchmarkCheckAndEnqueue(b *testing.B) {
		asynqtest.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName)
		b.StartTimer()

-		if err := r.CheckAndEnqueue(base.DefaultQueueName); err != nil {
-			b.Fatalf("CheckAndEnqueue failed: %v", err)
+		if err := r.ForwardIfReady(base.DefaultQueueName); err != nil {
+			b.Fatalf("ForwardIfReady failed: %v", err)
		}
	}
}

File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -10,7 +10,7 @@ import (
     "sync"
     "time"

-    "github.com/go-redis/redis/v7"
+    "github.com/go-redis/redis/v8"
     "github.com/hibiken/asynq/internal/base"
 )

@@ -108,13 +108,13 @@ func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time,
     return tb.real.ScheduleUnique(msg, processAt, ttl)
 }

-func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
+func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string, isFailure bool) error {
     tb.mu.Lock()
     defer tb.mu.Unlock()
     if tb.sleeping {
         return errRedisDown
     }
-    return tb.real.Retry(msg, processAt, errMsg)
+    return tb.real.Retry(msg, processAt, errMsg, isFailure)
 }

 func (tb *TestBroker) Archive(msg *base.TaskMessage, errMsg string) error {
@@ -126,13 +126,13 @@ func (tb *TestBroker) Archive(msg *base.TaskMessage, errMsg string) error {
     return tb.real.Archive(msg, errMsg)
 }

-func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
+func (tb *TestBroker) ForwardIfReady(qnames ...string) error {
     tb.mu.Lock()
     defer tb.mu.Unlock()
     if tb.sleeping {
         return errRedisDown
     }
-    return tb.real.CheckAndEnqueue(qnames...)
+    return tb.real.ForwardIfReady(qnames...)
 }

 func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
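A rough sketch of exercising this test double with the new Retry signature. The constructor name and the Sleep/Wakeup toggles are assumptions from the sleeping flag above, and the internal import paths only compile inside the asynq module itself.

package asynq_sketch

import (
    "testing"
    "time"

    "github.com/go-redis/redis/v8"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/hibiken/asynq/internal/testbroker"
)

func TestRetryWhileRedisDown(t *testing.T) {
    r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379"}))
    tb := testbroker.NewTestBroker(r)

    tb.Sleep() // simulate Redis going down
    msg := &base.TaskMessage{Type: "example", Queue: base.DefaultQueueName}
    // The new signature carries an isFailure flag through to the real broker.
    if err := tb.Retry(msg, time.Now().Add(time.Minute), "boom", true /*isFailure*/); err == nil {
        t.Error("Retry succeeded while broker was down; want error")
    }
    tb.Wakeup()
}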
230	payload.go
@@ -1,230 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package asynq

import (
    "encoding/json"
    "fmt"
    "time"

    "github.com/spf13/cast"
)

// Payload holds arbitrary data needed for task execution.
type Payload struct {
    data map[string]interface{}
}

type errKeyNotFound struct {
    key string
}

func (e *errKeyNotFound) Error() string {
    return fmt.Sprintf("key %q does not exist", e.key)
}

// Has reports whether key exists.
func (p Payload) Has(key string) bool {
    _, ok := p.data[key]
    return ok
}

func toInt(v interface{}) (int, error) {
    switch v := v.(type) {
    case json.Number:
        val, err := v.Int64()
        if err != nil {
            return 0, err
        }
        return int(val), nil
    default:
        return cast.ToIntE(v)
    }
}

// String returns a string representation of payload data.
func (p Payload) String() string {
    return fmt.Sprint(p.data)
}

// MarshalJSON returns the JSON encoding of payload data.
func (p Payload) MarshalJSON() ([]byte, error) {
    return json.Marshal(p.data)
}

// GetString returns a string value if a string type is associated with
// the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) {
    v, ok := p.data[key]
    if !ok {
        return "", &errKeyNotFound{key}
    }
    return cast.ToStringE(v)
}

// GetInt returns an int value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetInt(key string) (int, error) {
    v, ok := p.data[key]
    if !ok {
        return 0, &errKeyNotFound{key}
    }
    return toInt(v)
}

// GetFloat64 returns a float64 value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetFloat64(key string) (float64, error) {
    v, ok := p.data[key]
    if !ok {
        return 0, &errKeyNotFound{key}
    }
    switch v := v.(type) {
    case json.Number:
        return v.Float64()
    default:
        return cast.ToFloat64E(v)
    }
}

// GetBool returns a boolean value if a boolean type is associated with
// the key, otherwise reports an error.
func (p Payload) GetBool(key string) (bool, error) {
    v, ok := p.data[key]
    if !ok {
        return false, &errKeyNotFound{key}
    }
    return cast.ToBoolE(v)
}

// GetStringSlice returns a slice of strings if a string slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetStringSlice(key string) ([]string, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    return cast.ToStringSliceE(v)
}

// GetIntSlice returns a slice of ints if an int slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetIntSlice(key string) ([]int, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    switch v := v.(type) {
    case []interface{}:
        var res []int
        for _, elem := range v {
            val, err := toInt(elem)
            if err != nil {
                return nil, err
            }
            res = append(res, int(val))
        }
        return res, nil
    default:
        return cast.ToIntSliceE(v)
    }
}

// GetStringMap returns a map of string to empty interface
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMap(key string) (map[string]interface{}, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    return cast.ToStringMapE(v)
}

// GetStringMapString returns a map of string to string
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapString(key string) (map[string]string, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    return cast.ToStringMapStringE(v)
}

// GetStringMapStringSlice returns a map of string to string slice
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapStringSlice(key string) (map[string][]string, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    return cast.ToStringMapStringSliceE(v)
}

// GetStringMapInt returns a map of string to int
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    switch v := v.(type) {
    case map[string]interface{}:
        res := make(map[string]int)
        for key, val := range v {
            ival, err := toInt(val)
            if err != nil {
                return nil, err
            }
            res[key] = ival
        }
        return res, nil
    default:
        return cast.ToStringMapIntE(v)
    }
}

// GetStringMapBool returns a map of string to boolean
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapBool(key string) (map[string]bool, error) {
    v, ok := p.data[key]
    if !ok {
        return nil, &errKeyNotFound{key}
    }
    return cast.ToStringMapBoolE(v)
}

// GetTime returns a time value if a correct type is associated with the key,
// otherwise reports an error.
func (p Payload) GetTime(key string) (time.Time, error) {
    v, ok := p.data[key]
    if !ok {
        return time.Time{}, &errKeyNotFound{key}
    }
    return cast.ToTimeE(v)
}

// GetDuration returns a duration value if a correct type is associated with the key,
// otherwise reports an error.
func (p Payload) GetDuration(key string) (time.Duration, error) {
    v, ok := p.data[key]
    if !ok {
        return 0, &errKeyNotFound{key}
    }
    switch v := v.(type) {
    case json.Number:
        val, err := v.Int64()
        if err != nil {
            return 0, err
        }
        return time.Duration(val), nil
    default:
        return cast.ToDurationE(v)
    }
}
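With the map-based Payload type deleted above, payloads become plain byte slices. A minimal sketch of the replacement pattern, consistent with the h.JSON helper and json.Unmarshal calls that appear later in this changeset; the task type name and payload struct are illustrative:

package main

import (
    "context"
    "encoding/json"
    "fmt"

    "github.com/hibiken/asynq"
)

// EmailPayload is a hypothetical payload type; any JSON-serializable
// struct works, since payloads are now raw []byte.
type EmailPayload struct {
    UserID int `json:"user_id"`
}

func main() {
    b, err := json.Marshal(EmailPayload{UserID: 42})
    if err != nil {
        panic(err)
    }
    task := asynq.NewTask("email:welcome", b)

    // A handler decodes the bytes back into the struct.
    handler := func(ctx context.Context, t *asynq.Task) error {
        var p EmailPayload
        if err := json.Unmarshal(t.Payload(), &p); err != nil {
            return err
        }
        fmt.Println("user:", p.UserID)
        return nil
    }
    _ = handler(context.Background(), task)
}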
675	payload_test.go
@@ -1,675 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package asynq

import (
    "encoding/json"
    "fmt"
    "testing"
    "time"

    "github.com/google/go-cmp/cmp"
    "github.com/google/go-cmp/cmp/cmpopts"
    h "github.com/hibiken/asynq/internal/asynqtest"
    "github.com/hibiken/asynq/internal/base"
)

type payloadTest struct {
    data   map[string]interface{}
    key    string
    nonkey string
}

func TestPayloadString(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"name": "gopher"},
            key:    "name",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetString(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetString(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetString(tc.nonkey)
        if err == nil || got != "" {
            t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
                tc.key, got, err)
        }
    }
}

func TestPayloadInt(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"user_id": 42},
            key:    "user_id",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetInt(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetInt(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetInt(tc.nonkey)
        if err == nil || got != 0 {
            t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadFloat64(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"pi": 3.14},
            key:    "pi",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetFloat64(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetFloat64(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetFloat64(tc.nonkey)
        if err == nil || got != 0 {
            t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadBool(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"enabled": true},
            key:    "enabled",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetBool(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetBool(tc.key)
        if err != nil || got != tc.data[tc.key] {
            t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetBool(tc.nonkey)
        if err == nil || got != false {
            t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringSlice(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
            key:    "names",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringSlice(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringSlice(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetStringSlice(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadIntSlice(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"nums": []int{9, 8, 7}},
            key:    "nums",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetIntSlice(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetIntSlice(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetIntSlice(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringMap(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
            key:    "user",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringMap(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringMap(tc.key)
        ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
            switch val.(type) {
            case json.Number:
                return true
            default:
                return false
            }
        })
        diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
                tc.key, got, err, tc.data[tc.key], diff)
        }

        // access non-existent key.
        got, err = payload.GetStringMap(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringMapString(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
            key:    "address",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringMapString(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringMapString(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetStringMapString(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringMapStringSlice(t *testing.T) {
    favs := map[string][]string{
        "movies":   {"forrest gump", "star wars"},
        "tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
    }
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"favorites": favs},
            key:    "favorites",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringMapStringSlice(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringMapStringSlice(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetStringMapStringSlice(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringMapInt(t *testing.T) {
    counter := map[string]int{
        "a": 1,
        "b": 101,
        "c": 42,
    }
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"counts": counter},
            key:    "counts",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringMapInt(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringMapInt(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetStringMapInt(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadStringMapBool(t *testing.T) {
    features := map[string]bool{
        "A": false,
        "B": true,
        "C": true,
    }
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"features": features},
            key:    "features",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetStringMapBool(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetStringMapBool(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetStringMapBool(tc.nonkey)
        if err == nil || got != nil {
            t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
                tc.key, got, err)
        }
    }
}

func TestPayloadTime(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"current": time.Now()},
            key:    "current",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetTime(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetTime(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetTime(tc.nonkey)
        if err == nil || !got.IsZero() {
            t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
                tc.key, got, err, time.Time{})
        }
    }
}

func TestPayloadDuration(t *testing.T) {
    tests := []payloadTest{
        {
            data:   map[string]interface{}{"duration": 15 * time.Minute},
            key:    "duration",
            nonkey: "unknown",
        },
    }

    for _, tc := range tests {
        payload := Payload{tc.data}

        got, err := payload.GetDuration(tc.key)
        diff := cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // encode and then decode task message.
        in := h.NewTaskMessage("testing", tc.data)
        encoded, err := base.EncodeMessage(in)
        if err != nil {
            t.Fatal(err)
        }
        out, err := base.DecodeMessage(encoded)
        if err != nil {
            t.Fatal(err)
        }
        payload = Payload{out.Payload}
        got, err = payload.GetDuration(tc.key)
        diff = cmp.Diff(got, tc.data[tc.key])
        if err != nil || diff != "" {
            t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
                tc.key, got, err, tc.data[tc.key])
        }

        // access non-existent key.
        got, err = payload.GetDuration(tc.nonkey)
        if err == nil || got != 0 {
            t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
                tc.key, got, err, time.Duration(0))
        }
    }
}

func TestPayloadHas(t *testing.T) {
    payload := Payload{map[string]interface{}{
        "user_id": 123,
    }}

    if !payload.Has("user_id") {
        t.Errorf("Payload.Has(%q) = false, want true", "user_id")
    }
    if payload.Has("name") {
        t.Errorf("Payload.Has(%q) = true, want false", "name")
    }
}

func TestPayloadDebuggingStrings(t *testing.T) {
    data := map[string]interface{}{
        "foo": 123,
        "bar": "hello",
        "baz": false,
    }
    payload := Payload{data: data}

    if payload.String() != fmt.Sprint(data) {
        t.Errorf("Payload.String() = %q, want %q",
            payload.String(), fmt.Sprint(data))
    }

    got, err := payload.MarshalJSON()
    if err != nil {
        t.Fatal(err)
    }
    want, err := json.Marshal(data)
    if err != nil {
        t.Fatal(err)
    }
    if diff := cmp.Diff(got, want); diff != "" {
        t.Errorf("Payload.MarshalJSON() = %s, want %s; (-want,+got)\n%s",
            got, want, diff)
    }
}
33	processor.go
@@ -6,7 +6,6 @@ package asynq

 import (
     "context"
-    "errors"
     "fmt"
     "math/rand"
     "runtime"
@@ -17,8 +16,8 @@ import (
     "time"

     "github.com/hibiken/asynq/internal/base"
+    "github.com/hibiken/asynq/internal/errors"
     "github.com/hibiken/asynq/internal/log"
     "github.com/hibiken/asynq/internal/rdb"
     "golang.org/x/time/rate"
 )
@@ -34,6 +33,7 @@ type processor struct {
     orderedQueues []string

     retryDelayFunc RetryDelayFunc
+    isFailureFunc  func(error) bool

     errHandler ErrorHandler
@@ -71,6 +71,7 @@ type processorParams struct {
     logger         *log.Logger
     broker         base.Broker
     retryDelayFunc RetryDelayFunc
+    isFailureFunc  func(error) bool
     syncCh         chan<- *syncRequest
     cancelations   *base.Cancelations
     concurrency    int
@@ -95,6 +96,7 @@ func newProcessor(params processorParams) *processor {
         queueConfig:    queues,
         orderedQueues:  orderedQueues,
         retryDelayFunc: params.retryDelayFunc,
+        isFailureFunc:  params.isFailureFunc,
         syncRequestCh:  params.syncCh,
         cancelations:   params.cancelations,
         errLogLimiter:  rate.NewLimiter(rate.Every(3*time.Second), 1),
@@ -123,8 +125,8 @@ func (p *processor) stop() {
     })
 }

-// NOTE: once terminated, processor cannot be re-started.
-func (p *processor) terminate() {
+// NOTE: once shutdown, processor cannot be re-started.
+func (p *processor) shutdown() {
     p.stop()

     time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
@@ -163,7 +165,7 @@ func (p *processor) exec() {
     qnames := p.queues()
     msg, deadline, err := p.broker.Dequeue(qnames...)
     switch {
-    case err == rdb.ErrNoProcessableTask:
+    case errors.Is(err, errors.ErrNoProcessableTask):
         p.logger.Debug("All queues are empty")
         // Queues are empty, this is a normal behavior.
         // Sleep to avoid slamming redis and let scheduler move tasks into queues.
@@ -198,7 +200,7 @@ func (p *processor) exec() {
     select {
     case <-ctx.Done():
         // already canceled (e.g. deadline exceeded).
-        p.retryOrKill(ctx, msg, ctx.Err())
+        p.retryOrArchive(ctx, msg, ctx.Err())
         return
     default:
     }
@@ -215,7 +217,7 @@ func (p *processor) exec() {
         p.requeue(msg)
         return
     case <-ctx.Done():
-        p.retryOrKill(ctx, msg, ctx.Err())
+        p.retryOrArchive(ctx, msg, ctx.Err())
         return
     case resErr := <-resCh:
         // Note: One of three things should happen.
@@ -223,7 +225,7 @@ func (p *processor) exec() {
         // 2) Retry -> Removes the message from Active & Adds the message to Retry
         // 3) Archive -> Removes the message from Active & Adds the message to archive
         if resErr != nil {
-            p.retryOrKill(ctx, msg, resErr)
+            p.retryOrArchive(ctx, msg, resErr)
             return
         }
         p.markAsDone(ctx, msg)
@@ -264,22 +266,27 @@ func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
 // the task should not be retried and should be archived instead.
 var SkipRetry = errors.New("skip retry for the task")

-func (p *processor) retryOrKill(ctx context.Context, msg *base.TaskMessage, err error) {
+func (p *processor) retryOrArchive(ctx context.Context, msg *base.TaskMessage, err error) {
     if p.errHandler != nil {
         p.errHandler.HandleError(ctx, NewTask(msg.Type, msg.Payload), err)
     }
+    if !p.isFailureFunc(err) {
+        // retry the task without marking it as failed
+        p.retry(ctx, msg, err, false /*isFailure*/)
+        return
+    }
     if msg.Retried >= msg.Retry || errors.Is(err, SkipRetry) {
         p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
         p.archive(ctx, msg, err)
     } else {
-        p.retry(ctx, msg, err)
+        p.retry(ctx, msg, err, true /*isFailure*/)
     }
 }

-func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
+func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error, isFailure bool) {
     d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
     retryAt := time.Now().Add(d)
-    err := p.broker.Retry(msg, retryAt, e.Error())
+    err := p.broker.Retry(msg, retryAt, e.Error(), isFailure)
     if err != nil {
         errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.RetryKey(msg.Queue))
         deadline, ok := ctx.Deadline()
@@ -289,7 +296,7 @@ func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
         p.logger.Warnf("%s; Will retry syncing", errMsg)
         p.syncRequestCh <- &syncRequest{
             fn: func() error {
-                return p.broker.Retry(msg, retryAt, e.Error())
+                return p.broker.Retry(msg, retryAt, e.Error(), isFailure)
             },
             errMsg:   errMsg,
             deadline: deadline,
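The SkipRetry sentinel shown in retryOrArchive above is part of the public API. A brief sketch of a handler opting out of retry by wrapping it; the task type and the error condition are illustrative:

package tasks

import (
    "context"
    "fmt"

    "github.com/hibiken/asynq"
)

func HandleImageResize(ctx context.Context, t *asynq.Task) error {
    if len(t.Payload()) == 0 {
        // Wrapping SkipRetry sends the task straight to the archive
        // instead of the retry queue, even if retries remain.
        return fmt.Errorf("empty payload: %w", asynq.SkipRetry)
    }
    return nil
}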
@@ -6,6 +6,7 @@ package asynq

 import (
     "context"
+    "encoding/json"
     "fmt"
     "sort"
     "sync"
@@ -13,7 +14,6 @@ import (
     "time"

     "github.com/google/go-cmp/cmp"
-    "github.com/google/go-cmp/cmp/cmpopts"
     h "github.com/hibiken/asynq/internal/asynqtest"
     "github.com/hibiken/asynq/internal/base"
     "github.com/hibiken/asynq/internal/rdb"
@@ -44,6 +44,7 @@ func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {

 func TestProcessorSuccessWithSingleQueue(t *testing.T) {
     r := setup(t)
+    defer r.Close()
     rdbClient := rdb.NewRDB(r)

     m1 := h.NewTaskMessage("task1", nil)
@@ -97,6 +98,7 @@ func TestProcessorSuccessWithSingleQueue(t *testing.T) {
         logger:         testLogger,
         broker:         rdbClient,
         retryDelayFunc: DefaultRetryDelayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         syncCh,
         cancelations:   base.NewCancelations(),
         concurrency:    10,
@@ -113,18 +115,18 @@ func TestProcessorSuccessWithSingleQueue(t *testing.T) {
     for _, msg := range tc.incoming {
         err := rdbClient.Enqueue(msg)
         if err != nil {
-            p.terminate()
+            p.shutdown()
             t.Fatal(err)
         }
     }
     time.Sleep(2 * time.Second) // wait for two seconds to allow all pending tasks to be processed.
-    if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
+    if l := r.LLen(context.Background(), base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
         t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
     }
-    p.terminate()
+    p.shutdown()

     mu.Lock()
-    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
+    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
         t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
     }
     mu.Unlock()
@@ -146,6 +148,7 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
         t3 = NewTask(m3.Type, m3.Payload)
         t4 = NewTask(m4.Type, m4.Payload)
     )
+    defer r.Close()

     tests := []struct {
         pending map[string][]*base.TaskMessage
@@ -188,6 +191,7 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
         logger:         testLogger,
         broker:         rdbClient,
         retryDelayFunc: DefaultRetryDelayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         syncCh,
         cancelations:   base.NewCancelations(),
         concurrency:    10,
@@ -209,14 +213,14 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
     time.Sleep(2 * time.Second)
     // Make sure no messages are stuck in active list.
     for _, qname := range tc.queues {
-        if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
+        if l := r.LLen(context.Background(), base.ActiveKey(qname)).Val(); l != 0 {
             t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
         }
     }
-    p.terminate()
+    p.shutdown()

     mu.Lock()
-    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
+    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
         t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
     }
     mu.Unlock()
@@ -226,9 +230,10 @@ func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
 // https://github.com/hibiken/asynq/issues/166
 func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
     r := setup(t)
+    defer r.Close()
     rdbClient := rdb.NewRDB(r)

-    m1 := h.NewTaskMessage("large_number", map[string]interface{}{"data": 111111111111111111})
+    m1 := h.NewTaskMessage("large_number", h.JSON(map[string]interface{}{"data": 111111111111111111}))
     t1 := NewTask(m1.Type, m1.Payload)

     tests := []struct {
@@ -250,10 +255,14 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
     handler := func(ctx context.Context, task *Task) error {
         mu.Lock()
         defer mu.Unlock()
-        if data, err := task.Payload.GetInt("data"); err != nil {
-            t.Errorf("could not get data from payload: %v", err)
-        } else {
+        var payload map[string]int
+        if err := json.Unmarshal(task.Payload(), &payload); err != nil {
+            t.Errorf("could not decode payload: %v", err)
+        }
+        if data, ok := payload["data"]; ok {
             t.Logf("data == %d", data)
+        } else {
+            t.Errorf("could not get data from payload")
         }
         processed = append(processed, task)
         return nil
@@ -269,6 +278,7 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
         logger:         testLogger,
         broker:         rdbClient,
         retryDelayFunc: DefaultRetryDelayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         syncCh,
         cancelations:   base.NewCancelations(),
         concurrency:    10,
@@ -283,13 +293,13 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {

     p.start(&sync.WaitGroup{})
     time.Sleep(2 * time.Second) // wait for two seconds to allow all pending tasks to be processed.
-    if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
+    if l := r.LLen(context.Background(), base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
         t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
     }
-    p.terminate()
+    p.shutdown()

     mu.Lock()
-    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmpopts.IgnoreUnexported(Payload{})); diff != "" {
+    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
         t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
     }
     mu.Unlock()
@@ -298,6 +308,7 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {

 func TestProcessorRetry(t *testing.T) {
     r := setup(t)
+    defer r.Close()
     rdbClient := rdb.NewRDB(r)

     m1 := h.NewTaskMessage("send_email", nil)
@@ -308,66 +319,55 @@ func TestProcessorRetry(t *testing.T) {

     errMsg := "something went wrong"
     wrappedSkipRetry := fmt.Errorf("%s:%w", errMsg, SkipRetry)
-    now := time.Now()

     tests := []struct {
         desc     string              // test description
         pending  []*base.TaskMessage // initial default queue state
-        incoming []*base.TaskMessage // tasks to be enqueued during run
         delay    time.Duration       // retry delay duration
         handler  Handler             // task handler
         wait     time.Duration       // wait duration between starting and stopping processor for this test case
-        wantRetry    []base.Z            // tasks in retry queue at the end
+        wantErrMsg   string              // error message the task should record
+        wantRetry    []*base.TaskMessage // tasks in retry queue at the end
         wantArchived []*base.TaskMessage // tasks in archived queue at the end
         wantErrCount int                 // number of times error handler should be called
     }{
         {
-            desc:     "Should automatically retry errored tasks",
-            pending:  []*base.TaskMessage{m1, m2},
-            incoming: []*base.TaskMessage{m3, m4},
-            delay:    time.Minute,
+            desc:    "Should automatically retry errored tasks",
+            pending: []*base.TaskMessage{m1, m2, m3, m4},
+            delay:   time.Minute,
             handler: HandlerFunc(func(ctx context.Context, task *Task) error {
                 return fmt.Errorf(errMsg)
             }),
-            wait: 2 * time.Second,
-            wantRetry: []base.Z{
-                {Message: h.TaskMessageAfterRetry(*m2, errMsg), Score: now.Add(time.Minute).Unix()},
-                {Message: h.TaskMessageAfterRetry(*m3, errMsg), Score: now.Add(time.Minute).Unix()},
-                {Message: h.TaskMessageAfterRetry(*m4, errMsg), Score: now.Add(time.Minute).Unix()},
-            },
-            wantArchived: []*base.TaskMessage{h.TaskMessageWithError(*m1, errMsg)},
+            wait:         2 * time.Second,
+            wantErrMsg:   errMsg,
+            wantRetry:    []*base.TaskMessage{m2, m3, m4},
+            wantArchived: []*base.TaskMessage{m1},
             wantErrCount: 4,
         },
         {
-            desc:     "Should skip retry errored tasks",
-            pending:  []*base.TaskMessage{m1, m2},
-            incoming: []*base.TaskMessage{},
-            delay:    time.Minute,
+            desc:    "Should skip retry errored tasks",
+            pending: []*base.TaskMessage{m1, m2},
+            delay:   time.Minute,
             handler: HandlerFunc(func(ctx context.Context, task *Task) error {
                 return SkipRetry // return SkipRetry without wrapping
             }),
-            wait:      2 * time.Second,
-            wantRetry: []base.Z{},
-            wantArchived: []*base.TaskMessage{
-                h.TaskMessageWithError(*m1, SkipRetry.Error()),
-                h.TaskMessageWithError(*m2, SkipRetry.Error()),
-            },
+            wait:         2 * time.Second,
+            wantErrMsg:   SkipRetry.Error(),
+            wantRetry:    []*base.TaskMessage{},
+            wantArchived: []*base.TaskMessage{m1, m2},
            wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
         },
         {
-            desc:     "Should skip retry errored tasks (with error wrapping)",
-            pending:  []*base.TaskMessage{m1, m2},
-            incoming: []*base.TaskMessage{},
-            delay:    time.Minute,
+            desc:    "Should skip retry errored tasks (with error wrapping)",
+            pending: []*base.TaskMessage{m1, m2},
+            delay:   time.Minute,
             handler: HandlerFunc(func(ctx context.Context, task *Task) error {
                 return wrappedSkipRetry
             }),
-            wait:      2 * time.Second,
-            wantRetry: []base.Z{},
-            wantArchived: []*base.TaskMessage{
                h.TaskMessageWithError(*m1, wrappedSkipRetry.Error()),
                h.TaskMessageWithError(*m2, wrappedSkipRetry.Error()),
-            },
+            wait:         2 * time.Second,
+            wantErrMsg:   wrappedSkipRetry.Error(),
+            wantRetry:    []*base.TaskMessage{},
+            wantArchived: []*base.TaskMessage{m1, m2},
            wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
         },
     }
@@ -398,6 +398,7 @@ func TestProcessorRetry(t *testing.T) {
         logger:         testLogger,
         broker:         rdbClient,
         retryDelayFunc: delayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         nil,
         cancelations:   base.NewCancelations(),
         concurrency:    10,
@@ -411,28 +412,38 @@ func TestProcessorRetry(t *testing.T) {
     p.handler = tc.handler

     p.start(&sync.WaitGroup{})
-    for _, msg := range tc.incoming {
-        err := rdbClient.Enqueue(msg)
-        if err != nil {
-            p.terminate()
-            t.Fatal(err)
-        }
-    }
-    time.Sleep(tc.wait) // FIXME: This makes test flaky.
-    p.terminate()
+    runTime := time.Now() // time when processor is running
+    time.Sleep(tc.wait)   // FIXME: This makes test flaky.
+    p.shutdown()

-    cmpOpt := h.EquateInt64Approx(1) // allow up to a second difference in zset score
+    cmpOpt := h.EquateInt64Approx(int64(tc.wait.Seconds())) // allow up to a wait-second difference in zset score
     gotRetry := h.GetRetryEntries(t, r, base.DefaultQueueName)
-    if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
+    var wantRetry []base.Z // Note: construct wantRetry here since `LastFailedAt` and ZSCORE is relative to each test run.
+    for _, msg := range tc.wantRetry {
+        wantRetry = append(wantRetry,
+            base.Z{
+                Message: h.TaskMessageAfterRetry(*msg, tc.wantErrMsg, runTime),
+                Score:   runTime.Add(tc.delay).Unix(),
+            })
+    }
+    if diff := cmp.Diff(wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
         t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.RetryKey(base.DefaultQueueName), diff)
     }

-    gotDead := h.GetArchivedMessages(t, r, base.DefaultQueueName)
-    if diff := cmp.Diff(tc.wantArchived, gotDead, h.SortMsgOpt); diff != "" {
+    gotArchived := h.GetArchivedEntries(t, r, base.DefaultQueueName)
+    var wantArchived []base.Z // Note: construct wantArchived here since `LastFailedAt` and ZSCORE is relative to each test run.
+    for _, msg := range tc.wantArchived {
+        wantArchived = append(wantArchived,
+            base.Z{
+                Message: h.TaskMessageWithError(*msg, tc.wantErrMsg, runTime),
+                Score:   runTime.Unix(),
+            })
+    }
+    if diff := cmp.Diff(wantArchived, gotArchived, h.SortZSetEntryOpt, cmpOpt); diff != "" {
         t.Errorf("%s: mismatch found in %q after running processor; (-want, +got)\n%s", tc.desc, base.ArchivedKey(base.DefaultQueueName), diff)
     }

-    if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
+    if l := r.LLen(context.Background(), base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
         t.Errorf("%s: %q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), tc.desc, l)
     }

@@ -479,6 +490,7 @@ func TestProcessorQueues(t *testing.T) {
         logger:         testLogger,
         broker:         nil,
         retryDelayFunc: DefaultRetryDelayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         nil,
         cancelations:   base.NewCancelations(),
         concurrency:    10,
@@ -570,6 +582,7 @@ func TestProcessorWithStrictPriority(t *testing.T) {
         logger:         testLogger,
         broker:         rdbClient,
         retryDelayFunc: DefaultRetryDelayFunc,
+        isFailureFunc:  defaultIsFailureFunc,
         syncCh:         syncCh,
         cancelations:   base.NewCancelations(),
         concurrency:    1, // Set concurrency to 1 to make sure tasks are processed one at a time.
@@ -586,13 +599,13 @@ func TestProcessorWithStrictPriority(t *testing.T) {
     time.Sleep(tc.wait)
     // Make sure no tasks are stuck in active list.
     for _, qname := range tc.queues {
-        if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
+        if l := r.LLen(context.Background(), base.ActiveKey(qname)).Val(); l != 0 {
             t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
         }
    }
-    p.terminate()
+    p.shutdown()

-    if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" {
+    if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Task{})); diff != "" {
         t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
     }

@@ -611,7 +624,7 @@ func TestProcessorPerform(t *testing.T) {
     handler: func(ctx context.Context, t *Task) error {
         return nil
     },
-    task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+    task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
     wantErr: false,
 },
 {
@@ -619,7 +632,7 @@ func TestProcessorPerform(t *testing.T) {
     handler: func(ctx context.Context, t *Task) error {
         return fmt.Errorf("something went wrong")
     },
-    task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+    task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
     wantErr: true,
 },
 {
@@ -627,7 +640,7 @@ func TestProcessorPerform(t *testing.T) {
     handler: func(ctx context.Context, t *Task) error {
         panic("something went terribly wrong")
     },
-    task:    NewTask("gen_thumbnail", map[string]interface{}{"src": "some/img/path"}),
+    task:    NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"})),
     wantErr: true,
 },
 }
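The defaultIsFailureFunc wired through these tests backs the server-level IsFailure option. A minimal sketch of configuring it, assuming the public Config.IsFailure field from this release line; the sentinel error and addresses are illustrative:

package main

import (
    "errors"

    "github.com/hibiken/asynq"
)

// ErrRateLimited is an illustrative sentinel for a transient condition that
// should trigger a retry without counting as a task failure.
var ErrRateLimited = errors.New("rate limited")

func main() {
    srv := asynq.NewServer(
        asynq.RedisClientOpt{Addr: "localhost:6379"},
        asynq.Config{
            Concurrency: 10,
            // IsFailure decides whether an error returned from a Handler
            // counts as a failure; returning false skips the failure
            // bookkeeping while still scheduling a retry.
            IsFailure: func(err error) bool {
                return !errors.Is(err, ErrRateLimited)
            },
        },
    )
    _ = srv // start with srv.Run(mux) in a real program
}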
53	recoverer.go
@@ -5,7 +5,7 @@
 package asynq

 import (
-    "fmt"
+    "context"
     "sync"
     "time"

@@ -17,6 +17,7 @@ type recoverer struct {
     logger         *log.Logger
     broker         base.Broker
     retryDelayFunc RetryDelayFunc
+    isFailureFunc  func(error) bool

     // channel to communicate back to the long running "recoverer" goroutine.
     done chan struct{}
@@ -34,6 +35,7 @@ type recovererParams struct {
     queues         []string
     interval       time.Duration
     retryDelayFunc RetryDelayFunc
+    isFailureFunc  func(error) bool
 }

 func newRecoverer(params recovererParams) *recoverer {
@@ -44,10 +46,11 @@ func newRecoverer(params recovererParams) *recoverer {
         queues:         params.queues,
         interval:       params.interval,
         retryDelayFunc: params.retryDelayFunc,
+        isFailureFunc:  params.isFailureFunc,
     }
 }

-func (r *recoverer) terminate() {
+func (r *recoverer) shutdown() {
     r.logger.Debug("Recoverer shutting down...")
     // Signal the recoverer goroutine to stop polling.
     r.done <- struct{}{}
@@ -57,6 +60,7 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
     wg.Add(1)
     go func() {
         defer wg.Done()
+        r.recover()
         timer := time.NewTimer(r.interval)
         for {
             select {
@@ -65,37 +69,40 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
                 timer.Stop()
                 return
             case <-timer.C:
-                // Get all tasks which have expired 30 seconds ago or earlier.
-                deadline := time.Now().Add(-30 * time.Second)
-                msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
-                if err != nil {
-                    r.logger.Warn("recoverer: could not list deadline exceeded tasks")
-                    continue
-                }
-                const errMsg = "deadline exceeded" // TODO: better error message
-                for _, msg := range msgs {
-                    if msg.Retried >= msg.Retry {
-                        r.archive(msg, errMsg)
-                    } else {
-                        r.retry(msg, errMsg)
-                    }
-                }
-
+                r.recover()
                 timer.Reset(r.interval)
             }
         }
     }()
 }

-func (r *recoverer) retry(msg *base.TaskMessage, errMsg string) {
-    delay := r.retryDelayFunc(msg.Retried, fmt.Errorf(errMsg), NewTask(msg.Type, msg.Payload))
+func (r *recoverer) recover() {
+    // Get all tasks which have expired 30 seconds ago or earlier.
+    deadline := time.Now().Add(-30 * time.Second)
+    msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
+    if err != nil {
+        r.logger.Warn("recoverer: could not list deadline exceeded tasks")
+        return
+    }
+    for _, msg := range msgs {
+        if msg.Retried >= msg.Retry {
+            r.archive(msg, context.DeadlineExceeded)
+        } else {
+            r.retry(msg, context.DeadlineExceeded)
+        }
+    }
+}
+
+func (r *recoverer) retry(msg *base.TaskMessage, err error) {
+    delay := r.retryDelayFunc(msg.Retried, err, NewTask(msg.Type, msg.Payload))
     retryAt := time.Now().Add(delay)
-    if err := r.broker.Retry(msg, retryAt, errMsg); err != nil {
+    if err := r.broker.Retry(msg, retryAt, err.Error(), r.isFailureFunc(err)); err != nil {
         r.logger.Warnf("recoverer: could not retry deadline exceeded task: %v", err)
     }
 }

-func (r *recoverer) archive(msg *base.TaskMessage, errMsg string) {
-    if err := r.broker.Archive(msg, errMsg); err != nil {
+func (r *recoverer) archive(msg *base.TaskMessage, err error) {
+    if err := r.broker.Archive(msg, err.Error()); err != nil {
         r.logger.Warnf("recoverer: could not move task to archive: %v", err)
     }
 }
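The refactor above extracts the timer-loop body into recover() so the same sweep also runs once at startup. A generic, self-contained sketch of that start/shutdown pattern (all names here are illustrative, not asynq APIs):

package main

import (
    "fmt"
    "sync"
    "time"
)

// poller runs fn immediately and then on every tick until shutdown,
// mirroring the recoverer's start/shutdown structure above.
type poller struct {
    interval time.Duration
    done     chan struct{}
    fn       func()
}

func (p *poller) start(wg *sync.WaitGroup) {
    wg.Add(1)
    go func() {
        defer wg.Done()
        p.fn() // run once up front, like r.recover() before the loop
        timer := time.NewTimer(p.interval)
        for {
            select {
            case <-p.done:
                timer.Stop()
                return
            case <-timer.C:
                p.fn()
                timer.Reset(p.interval)
            }
        }
    }()
}

func (p *poller) shutdown() { p.done <- struct{}{} }

func main() {
    var wg sync.WaitGroup
    p := &poller{interval: time.Second, done: make(chan struct{}), fn: func() { fmt.Println("tick") }}
    p.start(&wg)
    time.Sleep(2500 * time.Millisecond)
    p.shutdown()
    wg.Wait()
}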
@@ -64,7 +64,7 @@ func TestRecoverer(t *testing.T) {
     "default": {},
 },
 wantRetry: map[string][]*base.TaskMessage{
-    "default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
+    "default": {t1},
 },
 wantArchived: map[string][]*base.TaskMessage{
     "default": {},
@@ -101,7 +101,7 @@ func TestRecoverer(t *testing.T) {
     "critical": {},
 },
 wantArchived: map[string][]*base.TaskMessage{
-    "default": {h.TaskMessageWithError(*t4, "deadline exceeded")},
+    "default":  {t4},
     "critical": {},
 },
 },
@@ -137,7 +137,7 @@ func TestRecoverer(t *testing.T) {
     "critical": {{Message: t3, Score: oneHourFromNow.Unix()}},
 },
 wantRetry: map[string][]*base.TaskMessage{
-    "default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
+    "default":  {t1},
     "critical": {},
 },
 wantArchived: map[string][]*base.TaskMessage{
@@ -176,8 +176,8 @@ func TestRecoverer(t *testing.T) {
     "default": {{Message: t2, Score: oneHourFromNow.Unix()}},
 },
 wantRetry: map[string][]*base.TaskMessage{
-    "default":  {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
-    "critical": {h.TaskMessageAfterRetry(*t3, "deadline exceeded")},
+    "default":  {t1},
+    "critical": {t3},
 },
 wantArchived: map[string][]*base.TaskMessage{
     "default": {},
@@ -234,12 +234,14 @@ func TestRecoverer(t *testing.T) {
     queues:         []string{"default", "critical"},
     interval:       1 * time.Second,
     retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
+    isFailureFunc:  defaultIsFailureFunc,
 })

 var wg sync.WaitGroup
 recoverer.start(&wg)
+runTime := time.Now() // time when recoverer is running
 time.Sleep(2 * time.Second)
-recoverer.terminate()
+recoverer.shutdown()

 for qname, want := range tc.wantActive {
     gotActive := h.GetActiveMessages(t, r, qname)
@@ -253,15 +255,24 @@ func TestRecoverer(t *testing.T) {
         t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.DeadlinesKey(qname), diff)
     }
 }
-for qname, want := range tc.wantRetry {
+cmpOpt := h.EquateInt64Approx(2) // allow up to two-second difference in `LastFailedAt`
+for qname, msgs := range tc.wantRetry {
     gotRetry := h.GetRetryMessages(t, r, qname)
-    if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
+    var wantRetry []*base.TaskMessage // Note: construct message here since `LastFailedAt` is relative to each test run
+    for _, msg := range msgs {
+        wantRetry = append(wantRetry, h.TaskMessageAfterRetry(*msg, "context deadline exceeded", runTime))
+    }
+    if diff := cmp.Diff(wantRetry, gotRetry, h.SortMsgOpt, cmpOpt); diff != "" {
         t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
     }
 }
-for qname, want := range tc.wantArchived {
-    gotDead := h.GetArchivedMessages(t, r, qname)
-    if diff := cmp.Diff(want, gotDead, h.SortMsgOpt); diff != "" {
+for qname, msgs := range tc.wantArchived {
+    gotArchived := h.GetArchivedMessages(t, r, qname)
+    var wantArchived []*base.TaskMessage
+    for _, msg := range msgs {
+        wantArchived = append(wantArchived, h.TaskMessageWithError(*msg, "context deadline exceeded", runTime))
+    }
+    if diff := cmp.Diff(wantArchived, gotArchived, h.SortMsgOpt, cmpOpt); diff != "" {
         t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.ArchivedKey(qname), diff)
     }
 }
scheduler.go (54 changes)
@@ -10,7 +10,7 @@ import (
    "sync"
    "time"

    "github.com/go-redis/redis/v7"
    "github.com/go-redis/redis/v8"
    "github.com/google/uuid"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/log"
@@ -19,9 +19,11 @@ import (
)

// A Scheduler kicks off tasks at regular intervals based on the user defined schedule.
//
// Schedulers are safe for concurrent use by multiple goroutines.
type Scheduler struct {
    id     string
    status *base.ServerStatus
    state  *base.ServerState
    logger *log.Logger
    client *Client
    rdb    *rdb.RDB
@@ -30,6 +32,9 @@ type Scheduler struct {
    done chan struct{}
    wg   sync.WaitGroup
    errHandler func(task *Task, opts []Option, err error)

    // guards idmap
    mu sync.Mutex
    // idmap maps Scheduler's entry ID to cron.EntryID
    // to avoid using cron.EntryID as the public API of
    // the Scheduler.
@@ -61,7 +66,7 @@ func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {

    return &Scheduler{
        id:     generateSchedulerID(),
        status: base.NewServerStatus(base.StatusIdle),
        state:  base.NewServerState(),
        logger: logger,
        client: NewClient(r),
        rdb:    rdb.NewRDB(c),
@@ -117,7 +122,7 @@ type enqueueJob struct {
}

func (j *enqueueJob) Run() {
    res, err := j.client.Enqueue(j.task, j.opts...)
    info, err := j.client.Enqueue(j.task, j.opts...)
    if err != nil {
        j.logger.Errorf("scheduler could not enqueue a task %+v: %v", j.task, err)
        if j.errHandler != nil {
@@ -125,10 +130,10 @@ func (j *enqueueJob) Run() {
        }
        return
    }
    j.logger.Debugf("scheduler enqueued a task: %+v", res)
    j.logger.Debugf("scheduler enqueued a task: %+v", info)
    event := &base.SchedulerEnqueueEvent{
        TaskID:     res.ID,
        EnqueuedAt: res.EnqueuedAt.In(j.location),
        TaskID:     info.ID,
        EnqueuedAt: time.Now().In(j.location),
    }
    err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
    if err != nil {
@@ -154,38 +159,44 @@ func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entry
    if err != nil {
        return "", err
    }
    s.mu.Lock()
    s.idmap[job.id.String()] = cronID
    s.mu.Unlock()
    return job.id.String(), nil
}

// Unregister removes a registered entry by entry ID.
// Unregister returns a non-nil error if no entries were found for the given entryID.
func (s *Scheduler) Unregister(entryID string) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    cronID, ok := s.idmap[entryID]
    if !ok {
        return fmt.Errorf("asynq: no scheduler entry found")
    }
    delete(s.idmap, entryID)
    s.cron.Remove(cronID)
    return nil
}
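A minimal sketch of the Register/Unregister flow introduced above; the cronspec, task type name, and Redis address are illustrative assumptions, not part of this changeset:

    package main

    import (
        "log"

        "github.com/hibiken/asynq"
    )

    func main() {
        scheduler := asynq.NewScheduler(
            asynq.RedisClientOpt{Addr: "localhost:6379"},
            &asynq.SchedulerOpts{},
        )
        task := asynq.NewTask("report:generate", nil)

        // Register returns the entry ID that can later be passed to Unregister.
        entryID, err := scheduler.Register("@every 30s", task)
        if err != nil {
            log.Fatal(err)
        }

        // Unregister removes the entry; it errs if the ID is unknown.
        if err := scheduler.Unregister(entryID); err != nil {
            log.Fatal(err)
        }
    }

With the mutex guarding idmap in this change, Register and Unregister may also be called concurrently from multiple goroutines.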
// Run starts the scheduler until an os signal to exit the program is received.
// It returns an error if scheduler is already running or has been stopped.
// It returns an error if scheduler is already running or has been shutdown.
func (s *Scheduler) Run() error {
    if err := s.Start(); err != nil {
        return err
    }
    s.waitForSignals()
    return s.Stop()
    s.Shutdown()
    return nil
}

// Start starts the scheduler.
// It returns an error if the scheduler is already running or has been stopped.
// It returns an error if the scheduler is already running or has been shutdown.
func (s *Scheduler) Start() error {
    switch s.status.Get() {
    case base.StatusRunning:
    switch s.state.Get() {
    case base.StateActive:
        return fmt.Errorf("asynq: the scheduler is already running")
    case base.StatusStopped:
    case base.StateClosed:
        return fmt.Errorf("asynq: the scheduler has already been stopped")
    }
    s.logger.Info("Scheduler starting")
@@ -193,16 +204,12 @@ func (s *Scheduler) Start() error {
    s.cron.Start()
    s.wg.Add(1)
    go s.runHeartbeater()
    s.status.Set(base.StatusRunning)
    s.state.Set(base.StateActive)
    return nil
}

// Stop stops the scheduler.
// It returns an error if the scheduler is not currently running.
func (s *Scheduler) Stop() error {
    if s.status.Get() != base.StatusRunning {
        return fmt.Errorf("asynq: the scheduler is not running")
    }
// Shutdown stops and shuts down the scheduler.
func (s *Scheduler) Shutdown() {
    s.logger.Info("Scheduler shutting down")
    close(s.done) // signal heartbeater to stop
    ctx := s.cron.Stop()
@@ -212,9 +219,8 @@ func (s *Scheduler) Stop() error {
    s.clearHistory()
    s.client.Close()
    s.rdb.Close()
    s.status.Set(base.StatusStopped)
    s.state.Set(base.StateClosed)
    s.logger.Info("Scheduler stopped")
    return nil
}
func (s *Scheduler) runHeartbeater() {
@@ -240,8 +246,8 @@ func (s *Scheduler) beat() {
    e := &base.SchedulerEntry{
        ID:      job.id.String(),
        Spec:    job.cronspec,
        Type:    job.task.Type,
        Payload: job.task.Payload.data,
        Type:    job.task.Type(),
        Payload: job.task.Payload(),
        Opts:    stringifyOptions(job.opts),
        Next:    entry.Next,
        Prev:    entry.Prev,
@@ -67,9 +67,7 @@ func TestSchedulerRegister(t *testing.T) {
        t.Fatal(err)
    }
    time.Sleep(tc.wait)
    if err := scheduler.Stop(); err != nil {
        t.Fatal(err)
    }
    scheduler.Shutdown()

    got := asynqtest.GetPendingMessages(t, r, tc.queue)
    if diff := cmp.Diff(tc.want, got, asynqtest.IgnoreIDOpt); diff != "" {
@@ -106,9 +104,7 @@ func TestSchedulerWhenRedisDown(t *testing.T) {
    }
    // Scheduler should attempt to enqueue the task three times (every 3s).
    time.Sleep(10 * time.Second)
    if err := scheduler.Stop(); err != nil {
        t.Fatal(err)
    }
    scheduler.Shutdown()

    mu.Lock()
    if counter != 3 {
@@ -150,9 +146,7 @@ func TestSchedulerUnregister(t *testing.T) {
        t.Fatal(err)
    }
    time.Sleep(tc.wait)
    if err := scheduler.Stop(); err != nil {
        t.Fatal(err)
    }
    scheduler.Shutdown()

    got := asynqtest.GetPendingMessages(t, r, tc.queue)
    if len(got) != 0 {
@@ -62,7 +62,7 @@ func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
    mux.mu.RLock()
    defer mux.mu.RUnlock()

    h, pattern = mux.match(t.Type)
    h, pattern = mux.match(t.Type())
    if h == nil {
        h, pattern = NotFoundHandler(), ""
    }
@@ -98,7 +98,7 @@ func (mux *ServeMux) Handle(pattern string, handler Handler) {
    mux.mu.Lock()
    defer mux.mu.Unlock()

    if pattern == "" {
    if strings.TrimSpace(pattern) == "" {
        panic("asynq: invalid pattern")
    }
    if handler == nil {
@@ -151,7 +151,7 @@ func (mux *ServeMux) Use(mws ...MiddlewareFunc) {

// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
    return fmt.Errorf("handler not found for task %q", task.Type)
    return fmt.Errorf("handler not found for task %q", task.Type())
}

// NotFoundHandler returns a simple task handler that returns a ``not found`` error.
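A short sketch of ServeMux registration under the changes above: a pattern that is empty or all whitespace now panics, and the task type is read via the Type() method. The handler name and task type here are illustrative assumptions:

    package main

    import (
        "context"
        "log"

        "github.com/hibiken/asynq"
    )

    func newMux() *asynq.ServeMux {
        mux := asynq.NewServeMux()
        mux.HandleFunc("email:send", func(ctx context.Context, t *asynq.Task) error {
            log.Printf("processing %s", t.Type()) // Type() is now a method, not a field
            return nil
        })
        // mux.HandleFunc("   ", someHandler) would now panic: "asynq: invalid pattern"
        return mux
    }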
@@ -68,7 +68,7 @@ func TestServeMux(t *testing.T) {
    }

    if called != tc.want {
        t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
        t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
    }
    }
}
@@ -124,7 +124,7 @@ func TestServeMuxNotFound(t *testing.T) {
    task := NewTask(tc.typename, nil)
    err := mux.ProcessTask(context.Background(), task)
    if err == nil {
        t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type)
        t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type())
    }
    }
}
@@ -164,7 +164,7 @@ func TestServeMuxMiddlewares(t *testing.T) {
    }

    if called != tc.want {
        t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
        t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
    }
    }
}
server.go (124 changes)
@@ -15,29 +15,30 @@ import (
    "sync"
    "time"

    "github.com/go-redis/redis/v7"
    "github.com/go-redis/redis/v8"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/log"
    "github.com/hibiken/asynq/internal/rdb"
)

// Server is responsible for managing the task processing.
// Server is responsible for task processing and task lifecycle management.
//
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, server will schedule it for a retry.
//
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// If a task exhausts its retries, it will be moved to the archive and
// will be kept in the archive for some time until a certain condition is met
// (e.g., archive size reaches a certain limit, or the task has been in the
// archive for a certain amount of time).
// will be kept in the archive set.
// Note that the archive size is finite and once it reaches its max size,
// oldest tasks in the archive will be deleted.
type Server struct {
    logger *log.Logger

    broker base.Broker

    status *base.ServerStatus
    state *base.ServerState

    // wait group to wait for all goroutines to finish.
    wg sync.WaitGroup
@@ -63,6 +64,14 @@ type Config struct {
    // By default, it uses exponential backoff algorithm to calculate the delay.
    RetryDelayFunc RetryDelayFunc

    // Predicate function to determine whether the error returned from Handler counts as a failure.
    // If the function returns false, Server will not increment the retried counter for the task,
    // and Server won't record the queue stats (processed and failed stats) to avoid skewing the error
    // rate of the queue.
    //
    // By default, if the given error is non-nil the function returns true.
    IsFailure func(error) bool

    // List of queues to process with given priority value. Keys are the names of the
    // queues and values are associated priority value.
    //
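A minimal sketch of wiring the new IsFailure option into a server, assuming a hypothetical sentinel error (errRateLimited) that the application does not want counted against the queue's error rate:

    package main

    import (
        "errors"

        "github.com/hibiken/asynq"
    )

    // errRateLimited is a hypothetical sentinel error used for illustration.
    var errRateLimited = errors.New("rate limited")

    func newServer() *asynq.Server {
        return asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
            Concurrency: 10,
            // Returning false tells the server not to count the error as a
            // failure: the task's retried counter and the queue's failed
            // stat are left untouched, though the task is still retried.
            IsFailure: func(err error) bool {
                return !errors.Is(err, errRateLimited)
            },
        })
    }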
@@ -267,6 +276,8 @@ func DefaultRetryDelayFunc(n int, e error, t *Task) time.Duration {
    return time.Duration(s) * time.Second
}

func defaultIsFailureFunc(err error) bool { return err != nil }

var defaultQueueConfig = map[string]int{
    base.DefaultQueueName: 1,
}
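DefaultRetryDelayFunc applies exponential backoff; a custom delay function can be supplied through Config.RetryDelayFunc. A sketch with an assumed linear backoff policy:

    package main

    import (
        "time"

        "github.com/hibiken/asynq"
    )

    func newServerWithLinearBackoff() *asynq.Server {
        return asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
            // Linear backoff instead of the default exponential one:
            // 10s after the first failure, 20s after the second, and so on.
            RetryDelayFunc: func(n int, err error, task *asynq.Task) time.Duration {
                return time.Duration(n+1) * 10 * time.Second
            },
        })
    }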
@@ -278,7 +289,7 @@ const (
)

// NewServer returns a new Server given a redis connection option
// and background processing configuration.
// and server configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server {
    c, ok := r.MakeRedisClient().(redis.UniversalClient)
    if !ok {
@@ -292,8 +303,15 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
    if delayFunc == nil {
        delayFunc = DefaultRetryDelayFunc
    }
    isFailureFunc := cfg.IsFailure
    if isFailureFunc == nil {
        isFailureFunc = defaultIsFailureFunc
    }
    queues := make(map[string]int)
    for qname, p := range cfg.Queues {
        if err := base.ValidateQueueName(qname); err != nil {
            continue // ignore invalid queue names
        }
        if p > 0 {
            queues[qname] = p
        }
@@ -324,7 +342,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
    starting := make(chan *workerInfo)
    finished := make(chan *base.TaskMessage)
    syncCh := make(chan *syncRequest)
    status := base.NewServerStatus(base.StatusIdle)
    state := base.NewServerState()
    cancels := base.NewCancelations()

    syncer := newSyncer(syncerParams{
@@ -339,7 +357,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
        concurrency:    n,
        queues:         queues,
        strictPriority: cfg.StrictPriority,
        status:         status,
        state:          state,
        starting:       starting,
        finished:       finished,
    })
@@ -358,6 +376,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
        logger:         logger,
        broker:         rdb,
        retryDelayFunc: delayFunc,
        isFailureFunc:  isFailureFunc,
        syncCh:         syncCh,
        cancelations:   cancels,
        concurrency:    n,
@@ -372,6 +391,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
        logger:         logger,
        broker:         rdb,
        retryDelayFunc: delayFunc,
        isFailureFunc:  isFailureFunc,
        queues:         qnames,
        interval:       1 * time.Minute,
    })
@@ -384,7 +404,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
    return &Server{
        logger:    logger,
        broker:    rdb,
        status:    status,
        state:     state,
        forwarder: forwarder,
        processor: processor,
        syncer:    syncer,
@@ -400,11 +420,13 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
// One exception to this rule is when ProcessTask returns SkipRetry error.
// If the returned error is SkipRetry or the error wraps SkipRetry, retry is
// skipped and task will be archived instead.
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay if retry-count is remaining,
// otherwise the task will be archived.
//
// One exception to this rule is when ProcessTask returns a SkipRetry error.
// If the returned error is SkipRetry or an error wraps SkipRetry, retry is
// skipped and the task will be immediately archived instead.
type Handler interface {
    ProcessTask(context.Context, *Task) error
}
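A sketch of the SkipRetry behavior described in the doc comment above; the handler and its validation rule are illustrative assumptions:

    package main

    import (
        "context"
        "fmt"

        "github.com/hibiken/asynq"
    )

    // handleImageResize is a hypothetical handler illustrating SkipRetry.
    func handleImageResize(ctx context.Context, t *asynq.Task) error {
        if len(t.Payload()) == 0 {
            // Wrapping SkipRetry archives the task immediately instead of
            // consuming the remaining retries on an unrecoverable input.
            return fmt.Errorf("empty payload: %w", asynq.SkipRetry)
        }
        return nil // success
    }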
@@ -420,43 +442,46 @@ func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
    return fn(ctx, task)
}

// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")
// ErrServerClosed indicates that the operation is now illegal because the server has been shutdown.
var ErrServerClosed = errors.New("asynq: Server closed")

// Run starts the background-task processing and blocks until
// Run starts the task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
// Run returns any error encountered at server startup time.
// If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Run(handler Handler) error {
    if err := srv.Start(handler); err != nil {
        return err
    }
    srv.waitForSignals()
    srv.Stop()
    srv.Shutdown()
    return nil
}

// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers up to the number of
// concurrency specified at the initialization time.
// it pulls tasks off queues and starts a worker goroutine for each task,
// and then calls Handler to process it.
// Tasks are processed concurrently by the workers up to the number of
// concurrency specified in Config.Concurrency.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
// Start returns any error encountered at server startup time.
// If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Start(handler Handler) error {
    if handler == nil {
        return fmt.Errorf("asynq: server cannot run with nil handler")
    }
    switch srv.status.Get() {
    case base.StatusRunning:
    switch srv.state.Get() {
    case base.StateActive:
        return fmt.Errorf("asynq: the server is already running")
    case base.StatusStopped:
        return ErrServerStopped
    case base.StateStopped:
        return fmt.Errorf("asynq: the server is in the stopped state. Waiting for shutdown.")
    case base.StateClosed:
        return ErrServerClosed
    }
    srv.status.Set(base.StatusRunning)
    srv.state.Set(base.StateActive)
    srv.processor.handler = handler

    srv.logger.Info("Starting processing")
@@ -471,43 +496,46 @@ func (srv *Server) Start(handler Handler) error {
    return nil
}

// Stop stops the worker server.
// Shutdown gracefully shuts down the server.
// It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for duration specified in Config.ShutdownTimeout.
// If worker didn't finish processing a task during the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() {
    switch srv.status.Get() {
    case base.StatusIdle, base.StatusStopped:
func (srv *Server) Shutdown() {
    switch srv.state.Get() {
    case base.StateNew, base.StateClosed:
        // server is not running, do nothing and return.
        return
    }

    srv.logger.Info("Starting graceful shutdown")
    // Note: The order of termination is important.
    // Note: The order of shutdown is important.
    // Sender goroutines should be terminated before the receiver goroutines.
    // processor -> syncer (via syncCh)
    // processor -> heartbeater (via starting, finished channels)
    srv.forwarder.terminate()
    srv.processor.terminate()
    srv.recoverer.terminate()
    srv.syncer.terminate()
    srv.subscriber.terminate()
    srv.healthchecker.terminate()
    srv.heartbeater.terminate()
    srv.forwarder.shutdown()
    srv.processor.shutdown()
    srv.recoverer.shutdown()
    srv.syncer.shutdown()
    srv.subscriber.shutdown()
    srv.healthchecker.shutdown()
    srv.heartbeater.shutdown()

    srv.wg.Wait()

    srv.broker.Close()
    srv.status.Set(base.StatusStopped)
    srv.state.Set(base.StateClosed)

    srv.logger.Info("Exiting")
}

// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
// Stop signals the server to stop pulling new tasks off queues.
// Stop can be used before shutting down the server to ensure that all
// currently active tasks are processed before server shutdown.
//
// Stop does not shutdown the server, make sure to call Shutdown before exit.
func (srv *Server) Stop() {
    srv.logger.Info("Stopping processor")
    srv.processor.stop()
    srv.status.Set(base.StatusQuiet)
    srv.state.Set(base.StateStopped)
    srv.logger.Info("Processor stopped")
}
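A sketch of the intended call order under the renamed API: Stop first to drain active work, then Shutdown to release everything. The signal choices here are illustrative; as the signal-handling hunk below shows, Run itself wires SIGTSTP to Stop:

    package main

    import (
        "os"
        "os/signal"
        "syscall"

        "github.com/hibiken/asynq"
    )

    func runServer(srv *asynq.Server, mux *asynq.ServeMux) error {
        if err := srv.Start(mux); err != nil {
            return err
        }
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
        <-sigs

        srv.Stop()     // stop pulling new tasks; let active ones finish
        srv.Shutdown() // wait up to Config.ShutdownTimeout, then exit
        return nil
    }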
@@ -11,6 +11,7 @@ import (
    "testing"
    "time"

    "github.com/hibiken/asynq/internal/asynqtest"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/hibiken/asynq/internal/testbroker"
    "go.uber.org/goleak"
@@ -18,7 +19,7 @@ import (

func TestServer(t *testing.T) {
    // https://github.com/go-redis/redis/issues/1029
    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v8/internal/pool.(*ConnPool).reaper")
    defer goleak.VerifyNoLeaks(t, ignoreOpt)

    redisConnOpt := getRedisConnOpt(t)
@@ -39,22 +40,22 @@ func TestServer(t *testing.T) {
        t.Fatal(err)
    }

    _, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
    _, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 123})))
    if err != nil {
        t.Errorf("could not enqueue a task: %v", err)
    }

    _, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 456}), ProcessIn(1*time.Hour))
    _, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 456})), ProcessIn(1*time.Hour))
    if err != nil {
        t.Errorf("could not enqueue a task: %v", err)
    }

    srv.Stop()
    srv.Shutdown()
}
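Task payloads are now raw bytes rather than a map, which is what the asynqtest.JSON helper above appears to wrap. A sketch of the equivalent pattern in application code, with the task type and payload shape as illustrative assumptions:

    package main

    import (
        "encoding/json"

        "github.com/hibiken/asynq"
    )

    func newEmailTask(recipientID int) (*asynq.Task, error) {
        // Marshal the payload to bytes; the handler unmarshals it back.
        payload, err := json.Marshal(map[string]interface{}{"recipient_id": recipientID})
        if err != nil {
            return nil, err
        }
        return asynq.NewTask("send_email", payload), nil
    }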
func TestServerRun(t *testing.T) {
    // https://github.com/go-redis/redis/issues/1029
    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v8/internal/pool.(*ConnPool).reaper")
    defer goleak.VerifyNoLeaks(t, ignoreOpt)

    srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
@@ -81,16 +82,16 @@ func TestServerRun(t *testing.T) {
    }
}

func TestServerErrServerStopped(t *testing.T) {
func TestServerErrServerClosed(t *testing.T) {
    srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
    handler := NewServeMux()
    if err := srv.Start(handler); err != nil {
        t.Fatal(err)
    }
    srv.Stop()
    srv.Shutdown()
    err := srv.Start(handler)
    if err != ErrServerStopped {
        t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
    if err != ErrServerClosed {
        t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerClosed error", err)
    }
}

@@ -99,7 +100,7 @@ func TestServerErrNilHandler(t *testing.T) {
    err := srv.Start(nil)
    if err == nil {
        t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
        srv.Stop()
        srv.Shutdown()
    }
}

@@ -113,7 +114,7 @@ func TestServerErrServerRunning(t *testing.T) {
    if err == nil {
        t.Error("Calling (*Server).Start(handler) on already running server did not return error")
    }
    srv.Stop()
    srv.Shutdown()
}

func TestServerWithRedisDown(t *testing.T) {
@@ -145,7 +146,7 @@ func TestServerWithRedisDown(t *testing.T) {

    time.Sleep(3 * time.Second)

    srv.Stop()
    srv.Shutdown()
}

func TestServerWithFlakyBroker(t *testing.T) {
@@ -169,8 +170,8 @@ func TestServerWithFlakyBroker(t *testing.T) {

    h := func(ctx context.Context, task *Task) error {
        // force task retry.
        if task.Type == "bad_task" {
            return fmt.Errorf("could not process %q", task.Type)
        if task.Type() == "bad_task" {
            return fmt.Errorf("could not process %q", task.Type())
        }
        time.Sleep(2 * time.Second)
        return nil
@@ -206,7 +207,7 @@ func TestServerWithFlakyBroker(t *testing.T) {

    time.Sleep(3 * time.Second)

    srv.Stop()
    srv.Shutdown()
}

func TestLogLevel(t *testing.T) {
@@ -22,7 +22,7 @@ func (srv *Server) waitForSignals() {
    for {
        sig := <-sigs
        if sig == unix.SIGTSTP {
            srv.Quiet()
            srv.Stop()
            continue
        }
        break
@@ -8,7 +8,7 @@ import (
    "sync"
    "time"

    "github.com/go-redis/redis/v7"
    "github.com/go-redis/redis/v8"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/log"
)
@@ -43,7 +43,7 @@ func newSubscriber(params subscriberParams) *subscriber {
    }
}

func (s *subscriber) terminate() {
func (s *subscriber) shutdown() {
    s.logger.Debug("Subscriber shutting down...")
    // Signal the subscriber goroutine to stop.
    s.done <- struct{}{}
@@ -46,7 +46,7 @@ func TestSubscriber(t *testing.T) {
    })
    var wg sync.WaitGroup
    subscriber.start(&wg)
    defer subscriber.terminate()
    defer subscriber.shutdown()

    // wait for subscriber to establish connection to pubsub channel
    time.Sleep(time.Second)
@@ -91,7 +91,7 @@ func TestSubscriberWithRedisDown(t *testing.T) {
    testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
    var wg sync.WaitGroup
    subscriber.start(&wg)
    defer subscriber.terminate()
    defer subscriber.shutdown()

    time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.
@@ -46,7 +46,7 @@ func newSyncer(params syncerParams) *syncer {
    }
}

func (s *syncer) terminate() {
func (s *syncer) shutdown() {
    s.logger.Debug("Syncer shutting down...")
    // Signal the syncer goroutine to stop.
    s.done <- struct{}{}
@@ -35,7 +35,7 @@ func TestSyncer(t *testing.T) {
    })
    var wg sync.WaitGroup
    syncer.start(&wg)
    defer syncer.terminate()
    defer syncer.shutdown()

    for _, msg := range inProgress {
        m := msg
@@ -66,7 +66,7 @@ func TestSyncerRetry(t *testing.T) {

    var wg sync.WaitGroup
    syncer.start(&wg)
    defer syncer.terminate()
    defer syncer.shutdown()

    var (
        mu sync.Mutex
@@ -131,7 +131,7 @@ func TestSyncerDropsStaleRequests(t *testing.T) {
    }

    time.Sleep(2 * interval) // ensure that syncer runs at least once
    syncer.terminate()
    syncer.shutdown()

    mu.Lock()
    if n != 0 {
@@ -11,7 +11,7 @@ import (
    "sort"
    "time"

    "github.com/hibiken/asynq/inspeq"
    "github.com/hibiken/asynq"
    "github.com/spf13/cobra"
)

@@ -63,7 +63,7 @@ func cronList(cmd *cobra.Command, args []string) {
    cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"}
    printRows := func(w io.Writer, tmpl string) {
        for _, e := range entries {
            fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type, e.Task.Payload, e.Opts,
            fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type(), formatPayload(e.Task.Payload()), e.Opts,
                nextEnqueue(e.Next), prevEnqueue(e.Prev))
        }
    }
@@ -108,7 +108,7 @@ func cronHistory(cmd *cobra.Command, args []string) {
    fmt.Printf("Entry: %s\n\n", entryID)

    events, err := inspector.ListSchedulerEnqueueEvents(
        entryID, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
        entryID, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Printf("error: %v\n", err)
        continue
@@ -5,385 +5,401 @@
package cmd

import (
    "context"
    "encoding/json"
    "fmt"
    "os"
    "strings"
    "time"

    "github.com/go-redis/redis/v7"
    "github.com/go-redis/redis/v8"
    "github.com/google/uuid"
    "github.com/hibiken/asynq/internal/base"
    "github.com/spf13/cast"
    "github.com/hibiken/asynq/internal/errors"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"
)

// migrateCmd represents the migrate command.
var migrateCmd = &cobra.Command{
    Use:   "migrate",
    Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq v%s", base.Version),
    Args:  cobra.NoArgs,
    Run:   migrate,
    Short: fmt.Sprintf("Migrate existing tasks and queues to be asynq%s compatible", base.Version),
    Long: `Migrate (asynq migrate) will migrate existing tasks and queues in redis to be compatible with the latest version of asynq.
`,
    Args: cobra.NoArgs,
    Run:  migrate,
}

func init() {
    rootCmd.AddCommand(migrateCmd)
}
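Presumably, using the global connection flags on the root command (the uri, db, and password values read via viper elsewhere in this changeset), the command would be invoked along the lines of

    asynq migrate --uri localhost:6379 --db 0

after stopping all running servers, which the pre-check in the rewritten migrate function below enforces.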
func migrate(cmd *cobra.Command, args []string) {
    c := redis.NewClient(&redis.Options{
        Addr:     viper.GetString("uri"),
        DB:       viper.GetInt("db"),
        Password: viper.GetString("password"),
    })
    r := createRDB()

    /*** Migrate from 0.9 to 0.10, 0.11 compatible ***/
    lists := []string{"asynq:in_progress"}
    allQueues, err := c.SMembers(base.AllQueues).Result()
    if err != nil {
        printError(fmt.Errorf("could not read all queues: %v", err))
        os.Exit(1)
    }
    lists = append(lists, allQueues...)
    for _, key := range lists {
        if err := migrateList(c, key); err != nil {
            printError(err)
            os.Exit(1)
        }
    }

    zsets := []string{"asynq:scheduled", "asynq:retry", "asynq:dead"}
    for _, key := range zsets {
        if err := migrateZSet(c, key); err != nil {
            printError(err)
            os.Exit(1)
        }
    }

    /*** Migrate from 0.11 to 0.12 compatible ***/
    if err := createBackup(c, base.AllQueues); err != nil {
        printError(err)
        os.Exit(1)
    }
    for _, qkey := range allQueues {
        qname := strings.TrimPrefix(qkey, "asynq:queues:")
        if err := c.SAdd(base.AllQueues, qname).Err(); err != nil {
            err = fmt.Errorf("could not add queue name %q to %q set: %v\n",
                qname, base.AllQueues, err)
            printError(err)
            os.Exit(1)
        }
    }
    if err := deleteBackup(c, base.AllQueues); err != nil {
        printError(err)
        os.Exit(1)
    }

    for _, qkey := range allQueues {
        qname := strings.TrimPrefix(qkey, "asynq:queues:")
        if exists := c.Exists(qkey).Val(); exists == 1 {
            if err := c.Rename(qkey, base.QueueKey(qname)).Err(); err != nil {
                printError(fmt.Errorf("could not rename key %q: %v\n", qkey, err))
                os.Exit(1)
            }
        }
    }

    if err := partitionZSetMembersByQueue(c, "asynq:scheduled", base.ScheduledKey); err != nil {
        printError(err)
        os.Exit(1)
    }
    if err := partitionZSetMembersByQueue(c, "asynq:retry", base.RetryKey); err != nil {
        printError(err)
        os.Exit(1)
    }
    // Note: base.DeadKey function was renamed in v0.14. We define the legacy function here since we need it for this migration script.
    deadKeyFunc := func(qname string) string { return fmt.Sprintf("asynq:{%s}:dead", qname) }
    if err := partitionZSetMembersByQueue(c, "asynq:dead", deadKeyFunc); err != nil {
        printError(err)
        os.Exit(1)
    }
    if err := partitionZSetMembersByQueue(c, "asynq:deadlines", base.DeadlinesKey); err != nil {
        printError(err)
        os.Exit(1)
    }
    if err := partitionListMembersByQueue(c, "asynq:in_progress", base.ActiveKey); err != nil {
        printError(err)
        os.Exit(1)
    }

    paused, err := c.SMembers("asynq:paused").Result()
    if err != nil {
        printError(fmt.Errorf("command SMEMBERS asynq:paused failed: %v", err))
        os.Exit(1)
    }
    for _, qkey := range paused {
        qname := strings.TrimPrefix(qkey, "asynq:queues:")
        if err := r.Pause(qname); err != nil {
            printError(err)
            os.Exit(1)
        }
    }
    if err := deleteKey(c, "asynq:paused"); err != nil {
        printError(err)
        os.Exit(1)
    }

    if err := deleteKey(c, "asynq:servers"); err != nil {
        printError(err)
        os.Exit(1)
    }
    if err := deleteKey(c, "asynq:workers"); err != nil {
        printError(err)
        os.Exit(1)
    }

    /*** Migrate from 0.13 to 0.14 compatible ***/

    // Move all dead tasks to archived ZSET.
    for _, qname := range allQueues {
        zs, err := c.ZRangeWithScores(deadKeyFunc(qname), 0, -1).Result()
        if err != nil {
            printError(err)
            os.Exit(1)
        }
        for _, z := range zs {
            if err := c.ZAdd(base.ArchivedKey(qname), &z).Err(); err != nil {
                printError(err)
                os.Exit(1)
            }
        }
        if err := deleteKey(c, deadKeyFunc(qname)); err != nil {
            printError(err)
            os.Exit(1)
        }
    }
}
func backupKey(key string) string {
    return fmt.Sprintf("%s:backup", key)
}

func createBackup(c *redis.Client, key string) error {
    err := c.Rename(key, backupKey(key)).Err()
func renameKeyAsBackup(c redis.UniversalClient, key string) error {
    if c.Exists(context.Background(), key).Val() == 0 {
        return nil // key doesn't exist; no-op
    }
    return c.Rename(context.Background(), key, backupKey(key)).Err()
}

func failIfError(err error, msg string) {
    if err != nil {
        return fmt.Errorf("could not rename key %q: %v", key, err)
        fmt.Printf("error: %s: %v\n", msg, err)
        fmt.Println("*** Please report this issue at https://github.com/hibiken/asynq/issues ***")
        os.Exit(1)
    }
    return nil
}

func deleteBackup(c *redis.Client, key string) error {
    return deleteKey(c, backupKey(key))
}

func deleteKey(c *redis.Client, key string) error {
    exists := c.Exists(key).Val()
    if exists == 0 {
        // key does not exist
        return nil
    }
    err := c.Del(key).Err()
func logIfError(err error, msg string) {
    if err != nil {
        return fmt.Errorf("could not delete key %q: %v", key, err)
        fmt.Printf("warning: %s: %v\n", msg, err)
    }
    return nil
}

func printError(err error) {
    fmt.Println(err)
    fmt.Println()
    fmt.Println("Migrate command error")
    fmt.Println("Please file an issue on Github at https://github.com/hibiken/asynq/issues/new/choose")
}
func migrate(cmd *cobra.Command, args []string) {
    r := createRDB()
    queues, err := r.AllQueues()
    failIfError(err, "Failed to get queue names")

func partitionZSetMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
    zs, err := c.ZRangeWithScores(key, 0, -1).Result()
    if err != nil {
        return fmt.Errorf("command ZRANGE %s 0 -1 WITHSCORES failed: %v", key, err)
    // ---------------------------------------------
    // Pre-check: Ensure no active servers, tasks.
    // ---------------------------------------------
    srvs, err := r.ListServers()
    failIfError(err, "Failed to get server infos")
    if len(srvs) > 0 {
        fmt.Println("(error): Server(s) still running. Please ensure that no asynq servers are running when running migrate command.")
        os.Exit(1)
    }
    for _, z := range zs {
        s := cast.ToString(z.Member)
        msg, err := base.DecodeMessage(s)
        if err != nil {
            return fmt.Errorf("could not decode message from %q: %v", key, err)
        }
        if err := c.ZAdd(newKeyFunc(msg.Queue), &z).Err(); err != nil {
            return fmt.Errorf("could not add %v to %q: %v", z, newKeyFunc(msg.Queue))
    for _, qname := range queues {
        stats, err := r.CurrentStats(qname)
        failIfError(err, "Failed to get stats")
        if stats.Active > 0 {
            fmt.Printf("(error): %d active tasks found. Please ensure that no active tasks exist when running migrate command.\n", stats.Active)
            os.Exit(1)
        }
    }
    if err := deleteKey(c, key); err != nil {
        return err
    }
    return nil
}

func partitionListMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
    data, err := c.LRange(key, 0, -1).Result()
    if err != nil {
        return fmt.Errorf("command LRANGE %s 0 -1 failed: %v", key, err)
    }
    for _, s := range data {
        msg, err := base.DecodeMessage(s)
        if err != nil {
            return fmt.Errorf("could not decode message from %q: %v", key, err)
    // ---------------------------------------------
    // Rename pending key
    // ---------------------------------------------
    fmt.Print("Renaming pending keys...")
    for _, qname := range queues {
        oldKey := fmt.Sprintf("asynq:{%s}", qname)
        if r.Client().Exists(context.Background(), oldKey).Val() == 0 {
            continue
        }
        if err := c.LPush(newKeyFunc(msg.Queue), s).Err(); err != nil {
            return fmt.Errorf("could not add %v to %q: %v", s, newKeyFunc(msg.Queue))
        newKey := base.PendingKey(qname)
        err := r.Client().Rename(context.Background(), oldKey, newKey).Err()
        failIfError(err, "Failed to rename key")
    }
    fmt.Print("Done\n")

    // ---------------------------------------------
    // Rename keys as backup
    // ---------------------------------------------
    fmt.Print("Renaming keys for backup...")
    for _, qname := range queues {
        keys := []string{
            base.ActiveKey(qname),
            base.PendingKey(qname),
            base.ScheduledKey(qname),
            base.RetryKey(qname),
            base.ArchivedKey(qname),
        }
        for _, key := range keys {
            err := renameKeyAsBackup(r.Client(), key)
            failIfError(err, fmt.Sprintf("Failed to rename key %q for backup", key))
        }
    }
    if err := deleteKey(c, key); err != nil {
        return err
    fmt.Print("Done\n")

    // ---------------------------------------------
    // Update to new schema
    // ---------------------------------------------
    fmt.Print("Updating to new schema...")
    for _, qname := range queues {
        updatePendingMessages(r, qname)
        updateZSetMessages(r.Client(), base.ScheduledKey(qname), "scheduled")
        updateZSetMessages(r.Client(), base.RetryKey(qname), "retry")
        updateZSetMessages(r.Client(), base.ArchivedKey(qname), "archived")
    }
    return nil
    fmt.Print("Done\n")

    // ---------------------------------------------
    // Delete backup keys
    // ---------------------------------------------
    fmt.Print("Deleting backup keys...")
    for _, qname := range queues {
        keys := []string{
            backupKey(base.ActiveKey(qname)),
            backupKey(base.PendingKey(qname)),
            backupKey(base.ScheduledKey(qname)),
            backupKey(base.RetryKey(qname)),
            backupKey(base.ArchivedKey(qname)),
        }
        for _, key := range keys {
            err := r.Client().Del(context.Background(), key).Err()
            failIfError(err, "Failed to delete backup key")
        }
    }
    fmt.Print("Done\n")
}

type oldTaskMessage struct {
    // Unchanged
    Type      string
    Payload   map[string]interface{}
    ID        uuid.UUID
    Queue     string
    Retry     int
    Retried   int
    ErrorMsg  string
    UniqueKey string

    // Following fields have changed.

    // Timeout specifies the timeout duration for the task.
    // The string should be in time.Duration format (e.g. "30m").
    //
    // Duration zero means no timeout.
    Timeout string

    // Deadline specifies the deadline for the task.
    // Task won't be processed if it exceeded its deadline.
    // The string should be in RFC3339 format.
    //
    // time.Time's zero value means no deadline.
    Deadline string
}

var defaultTimeout = 30 * time.Minute

func convertMessage(old *oldTaskMessage) (*base.TaskMessage, error) {
    timeout, err := time.ParseDuration(old.Timeout)
func UnmarshalOldMessage(encoded string) (*base.TaskMessage, error) {
    oldMsg, err := DecodeMessage(encoded)
    if err != nil {
        return nil, fmt.Errorf("could not parse Timeout field of %+v", old)
        return nil, err
    }
    deadline, err := time.Parse(time.RFC3339, old.Deadline)
    payload, err := json.Marshal(oldMsg.Payload)
    if err != nil {
        return nil, fmt.Errorf("could not parse Deadline field of %+v", old)
    }
    if timeout == 0 && deadline.IsZero() {
        timeout = defaultTimeout
    }
    if deadline.IsZero() {
        // Zero value used to be time.Time{},
        // in the new schema zero value is represented by
        // zero in Unix time.
        deadline = time.Unix(0, 0)
        return nil, fmt.Errorf("could not marshal payload: %v", err)
    }
    return &base.TaskMessage{
        Type:      old.Type,
        Payload:   old.Payload,
        ID:        uuid.New(),
        Queue:     old.Queue,
        Retry:     old.Retry,
        Retried:   old.Retried,
        ErrorMsg:  old.ErrorMsg,
        UniqueKey: old.UniqueKey,
        Timeout:   int64(timeout.Seconds()),
        Deadline:  deadline.Unix(),
        Type:         oldMsg.Type,
        Payload:      payload,
        ID:           oldMsg.ID,
        Queue:        oldMsg.Queue,
        Retry:        oldMsg.Retry,
        Retried:      oldMsg.Retried,
        ErrorMsg:     oldMsg.ErrorMsg,
        LastFailedAt: 0,
        Timeout:      oldMsg.Timeout,
        Deadline:     oldMsg.Deadline,
        UniqueKey:    oldMsg.UniqueKey,
    }, nil
}

func deserialize(s string) (*base.TaskMessage, error) {
    // Try deserializing as old message.
// TaskMessage from v0.17
type OldTaskMessage struct {
    // Type indicates the kind of the task to be performed.
    Type string

    // Payload holds data needed to process the task.
    Payload map[string]interface{}

    // ID is a unique identifier for each task.
    ID uuid.UUID

    // Queue is a name this message should be enqueued to.
    Queue string

    // Retry is the max number of retry for this task.
    Retry int

    // Retried is the number of times we've retried this task so far.
    Retried int

    // ErrorMsg holds the error message from the last failure.
    ErrorMsg string

    // Timeout specifies timeout in seconds.
    // If task processing doesn't complete within the timeout, the task will be retried
    // if retry count is remaining. Otherwise it will be moved to the archive.
    //
    // Use zero to indicate no timeout.
    Timeout int64

    // Deadline specifies the deadline for the task in Unix time,
    // the number of seconds elapsed since January 1, 1970 UTC.
    // If task processing doesn't complete before the deadline, the task will be retried
    // if retry count is remaining. Otherwise it will be moved to the archive.
    //
    // Use zero to indicate no deadline.
    Deadline int64

    // UniqueKey holds the redis key used for uniqueness lock for this task.
    //
    // Empty string indicates that no uniqueness lock was used.
    UniqueKey string
}

// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
// Code from v0.17.
func DecodeMessage(s string) (*OldTaskMessage, error) {
    d := json.NewDecoder(strings.NewReader(s))
    d.UseNumber()
    var old *oldTaskMessage
    if err := d.Decode(&old); err != nil {
        // Try deserializing as new message.
        d = json.NewDecoder(strings.NewReader(s))
        d.UseNumber()
        var msg *base.TaskMessage
        if err := d.Decode(&msg); err != nil {
            return nil, fmt.Errorf("could not deserialize %s into task message: %v", s, err)
        }
        return msg, nil
    var msg OldTaskMessage
    if err := d.Decode(&msg); err != nil {
        return nil, err
    }
    return convertMessage(old)
    return &msg, nil
}

func migrateZSet(c *redis.Client, key string) error {
    if c.Exists(key).Val() == 0 {
        // skip if key doesn't exist.
        return nil
func updatePendingMessages(r *rdb.RDB, qname string) {
    data, err := r.Client().LRange(context.Background(), backupKey(base.PendingKey(qname)), 0, -1).Result()
    failIfError(err, "Failed to read backup pending key")

    for _, s := range data {
        msg, err := UnmarshalOldMessage(s)
        failIfError(err, "Failed to unmarshal message")

        if msg.UniqueKey != "" {
            ttl, err := r.Client().TTL(context.Background(), msg.UniqueKey).Result()
            failIfError(err, "Failed to get ttl")

            if ttl > 0 {
                err = r.Client().Del(context.Background(), msg.UniqueKey).Err()
                logIfError(err, "Failed to delete unique key")
            }

            // Regenerate unique key.
            msg.UniqueKey = base.UniqueKey(msg.Queue, msg.Type, msg.Payload)
            if ttl > 0 {
                err = r.EnqueueUnique(msg, ttl)
            } else {
                err = r.Enqueue(msg)
            }
            failIfError(err, "Failed to enqueue message")

        } else {
            err := r.Enqueue(msg)
            failIfError(err, "Failed to enqueue message")
        }
    }
    res, err := c.ZRangeWithScores(key, 0, -1).Result()
}

// KEYS[1] -> asynq:{<qname>}:t:<task_id>
// KEYS[2] -> asynq:{<qname>}:scheduled
// ARGV[1] -> task message data
// ARGV[2] -> zset score
// ARGV[3] -> task ID
// ARGV[4] -> task timeout in seconds (0 if not timeout)
// ARGV[5] -> task deadline in unix time (0 if no deadline)
// ARGV[6] -> task state (e.g. "retry", "archived")
var taskZAddCmd = redis.NewScript(`
redis.call("HSET", KEYS[1],
           "msg", ARGV[1],
           "state", ARGV[6],
           "timeout", ARGV[4],
           "deadline", ARGV[5])
redis.call("ZADD", KEYS[2], ARGV[2], ARGV[3])
return 1
`)

// ZAddTask adds task to zset.
func ZAddTask(c redis.UniversalClient, key string, msg *base.TaskMessage, score float64, state string) error {
    // Special case; LastFailedAt field is new so assign a value inferred from zscore.
    if state == "archived" {
        msg.LastFailedAt = int64(score)
    }

    encoded, err := base.EncodeMessage(msg)
    if err != nil {
        return err
    }
    var msgs []*redis.Z
    for _, z := range res {
        s, err := cast.ToStringE(z.Member)
        if err != nil {
            return fmt.Errorf("could not cast to string: %v", err)
        }
        msg, err := deserialize(s)
        if err != nil {
            return err
        }
        encoded, err := base.EncodeMessage(msg)
        if err != nil {
            return fmt.Errorf("could not encode message from %q: %v", key, err)
        }
        msgs = append(msgs, &redis.Z{Score: z.Score, Member: encoded})
    if err := c.SAdd(context.Background(), base.AllQueues, msg.Queue).Err(); err != nil {
        return err
    }
    if err := c.Rename(key, key+":backup").Err(); err != nil {
        return fmt.Errorf("could not rename key %q: %v", key, err)
    keys := []string{
        base.TaskKey(msg.Queue, msg.ID.String()),
        key,
    }
    if err := c.ZAdd(key, msgs...).Err(); err != nil {
        return fmt.Errorf("could not write new messages to %q: %v", key, err)
    argv := []interface{}{
        encoded,
        score,
        msg.ID.String(),
        msg.Timeout,
        msg.Deadline,
        state,
    }
    if err := c.Del(key + ":backup").Err(); err != nil {
        return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
    return taskZAddCmd.Run(context.Background(), c, keys, argv...).Err()
}

// KEYS[1] -> unique key
// KEYS[2] -> asynq:{<qname>}:t:<task_id>
// KEYS[3] -> zset key (e.g. asynq:{<qname>}:scheduled)
// --
// ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message
// ARGV[5] -> task timeout in seconds (0 if not timeout)
// ARGV[6] -> task deadline in unix time (0 if no deadline)
// ARGV[7] -> task state (oneof "scheduled", "retry", "archived")
var taskZAddUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then
  return 0
end
redis.call("HSET", KEYS[2],
           "msg", ARGV[4],
           "state", ARGV[7],
           "timeout", ARGV[5],
           "deadline", ARGV[6],
           "unique_key", KEYS[1])
redis.call("ZADD", KEYS[3], ARGV[3], ARGV[1])
return 1
`)

// ZAddTaskUnique adds the task to the backlog queue to be processed in the future if the uniqueness lock can be acquired.
// It returns ErrDuplicateTask if the lock cannot be acquired.
func ZAddTaskUnique(c redis.UniversalClient, key string, msg *base.TaskMessage, score float64, state string, ttl time.Duration) error {
    encoded, err := base.EncodeMessage(msg)
    if err != nil {
        return err
    }
    if err := c.SAdd(context.Background(), base.AllQueues, msg.Queue).Err(); err != nil {
        return err
    }
    keys := []string{
        msg.UniqueKey,
        base.TaskKey(msg.Queue, msg.ID.String()),
        key,
    }
    argv := []interface{}{
        msg.ID.String(),
        int(ttl.Seconds()),
        score,
        encoded,
        msg.Timeout,
        msg.Deadline,
        state,
    }
    res, err := taskZAddUniqueCmd.Run(context.Background(), c, keys, argv...).Result()
    if err != nil {
        return err
    }
    n, ok := res.(int64)
    if !ok {
        return errors.E(errors.Internal, fmt.Sprintf("cast error: unexpected return value from Lua script: %v", res))
    }
    if n == 0 {
        return errors.E(errors.AlreadyExists, errors.ErrDuplicateTask)
    }
    return nil
}

func migrateList(c *redis.Client, key string) error {
    if c.Exists(key).Val() == 0 {
        // skip if key doesn't exist.
        return nil
    }
    res, err := c.LRange(key, 0, -1).Result()
    if err != nil {
        return err
    }
    var msgs []interface{}
    for _, s := range res {
        msg, err := deserialize(s)
        if err != nil {
            return err
func updateZSetMessages(c redis.UniversalClient, key, state string) {
    zs, err := c.ZRangeWithScores(context.Background(), backupKey(key), 0, -1).Result()
    failIfError(err, "Failed to read")

    for _, z := range zs {
        msg, err := UnmarshalOldMessage(z.Member.(string))
        failIfError(err, "Failed to unmarshal message")

        if msg.UniqueKey != "" {
            ttl, err := c.TTL(context.Background(), msg.UniqueKey).Result()
            failIfError(err, "Failed to get ttl")

            if ttl > 0 {
                err = c.Del(context.Background(), msg.UniqueKey).Err()
                logIfError(err, "Failed to delete unique key")
            }

            // Regenerate unique key.
            msg.UniqueKey = base.UniqueKey(msg.Queue, msg.Type, msg.Payload)
            if ttl > 0 {
                err = ZAddTaskUnique(c, key, msg, z.Score, state, ttl)
            } else {
                err = ZAddTask(c, key, msg, z.Score, state)
            }
            failIfError(err, "Failed to zadd message")
        } else {
            err := ZAddTask(c, key, msg, z.Score, state)
            failIfError(err, "Failed to enqueue scheduled message")
        }
        encoded, err := base.EncodeMessage(msg)
        if err != nil {
            return fmt.Errorf("could not encode message from %q: %v", key, err)
        }
        msgs = append(msgs, encoded)
    }
    if err := c.Rename(key, key+":backup").Err(); err != nil {
        return fmt.Errorf("could not rename key %q: %v", key, err)
    }
    if err := c.LPush(key, msgs...).Err(); err != nil {
        return fmt.Errorf("could not write new messages to %q: %v", key, err)
    }
    if err := c.Del(key + ":backup").Err(); err != nil {
        return fmt.Errorf("could not delete back up key %q: %v", key+":backup", err)
    }
    return nil
}
@@ -10,8 +10,8 @@ import (
    "os"

    "github.com/fatih/color"
    "github.com/hibiken/asynq/inspeq"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/hibiken/asynq"
    "github.com/hibiken/asynq/internal/errors"
    "github.com/spf13/cobra"
)

@@ -82,7 +82,7 @@ func queueList(cmd *cobra.Command, args []string) {
    type queueInfo struct {
        name string
        keyslot int64
        nodes []inspeq.ClusterNode
        nodes []*asynq.ClusterNode
    }
    inspector := createInspector()
    queues, err := inspector.Queues()
@@ -90,7 +90,7 @@ func queueList(cmd *cobra.Command, args []string) {
        fmt.Printf("error: Could not fetch list of queues: %v\n", err)
        os.Exit(1)
    }
    var qs []queueInfo
    var qs []*queueInfo
    for _, qname := range queues {
        q := queueInfo{name: qname}
        if useRedisCluster {
@@ -107,7 +107,7 @@ func queueList(cmd *cobra.Command, args []string) {
            }
            q.nodes = nodes
        }
        qs = append(qs, q)
        qs = append(qs, &q)
    }
    if useRedisCluster {
        printTable(
@@ -129,43 +129,42 @@ func queueInspect(cmd *cobra.Command, args []string) {
    inspector := createInspector()
    for i, qname := range args {
        if i > 0 {
            fmt.Printf("\n%s\n", separator)
            fmt.Printf("\n%s\n\n", separator)
        }
        fmt.Println()
        stats, err := inspector.CurrentStats(qname)
        info, err := inspector.GetQueueInfo(qname)
        if err != nil {
            fmt.Printf("error: %v\n", err)
            continue
        }
        printQueueStats(stats)
        printQueueInfo(info)
    }
}

func printQueueStats(s *inspeq.QueueStats) {
func printQueueInfo(info *asynq.QueueInfo) {
    bold := color.New(color.Bold)
    bold.Println("Queue Info")
    fmt.Printf("Name: %s\n", s.Queue)
    fmt.Printf("Size: %d\n", s.Size)
    fmt.Printf("Paused: %t\n\n", s.Paused)
    fmt.Printf("Name: %s\n", info.Queue)
    fmt.Printf("Size: %d\n", info.Size)
    fmt.Printf("Paused: %t\n\n", info.Paused)
    bold.Println("Task Count by State")
    printTable(
        []string{"active", "pending", "scheduled", "retry", "archived"},
        func(w io.Writer, tmpl string) {
            fmt.Fprintf(w, tmpl, s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived)
            fmt.Fprintf(w, tmpl, info.Active, info.Pending, info.Scheduled, info.Retry, info.Archived)
        },
    )
    fmt.Println()
    bold.Printf("Daily Stats %s UTC\n", s.Timestamp.UTC().Format("2006-01-02"))
    bold.Printf("Daily Stats %s UTC\n", info.Timestamp.UTC().Format("2006-01-02"))
    printTable(
        []string{"processed", "failed", "error rate"},
        func(w io.Writer, tmpl string) {
            var errRate string
            if s.Processed == 0 {
            if info.Processed == 0 {
                errRate = "N/A"
            } else {
                errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
                errRate = fmt.Sprintf("%.2f%%", float64(info.Failed)/float64(info.Processed)*100)
            }
            fmt.Fprintf(w, tmpl, s.Processed, s.Failed, errRate)
            fmt.Fprintf(w, tmpl, info.Processed, info.Failed, errRate)
        },
    )
}
@@ -179,9 +178,9 @@ func queueHistory(cmd *cobra.Command, args []string) {
    inspector := createInspector()
    for i, qname := range args {
        if i > 0 {
            fmt.Printf("\n%s\n", separator)
            fmt.Printf("\n%s\n\n", separator)
        }
        fmt.Printf("\nQueue: %s\n\n", qname)
        fmt.Printf("Queue: %s\n\n", qname)
        stats, err := inspector.History(qname, days)
        if err != nil {
            fmt.Printf("error: %v\n", err)
@@ -191,7 +190,7 @@ func queueHistory(cmd *cobra.Command, args []string) {
    }
}

func printDailyStats(stats []*inspeq.DailyStats) {
func printDailyStats(stats []*asynq.DailyStats) {
    printTable(
        []string{"date (UTC)", "processed", "failed", "error rate"},
        func(w io.Writer, tmpl string) {
@@ -244,7 +243,7 @@ func queueRemove(cmd *cobra.Command, args []string) {
    for _, qname := range args {
        err = r.RemoveQueue(qname, force)
        if err != nil {
            if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
            if errors.IsQueueNotEmpty(err) {
                fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq queue rm --force %s'\n", err, qname)
                continue
            }
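This hunk is the heart of the CLI migration: the standalone `inspeq` package is gone, and the Inspector now lives in the main `asynq` package, with `GetQueueInfo` replacing the old `CurrentStats` call. A minimal sketch of the new call shape as used above; the Redis address and queue name are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	// GetQueueInfo replaces the old inspeq CurrentStats call.
	info, err := inspector.GetQueueInfo("default")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("queue=%s size=%d paused=%t\n", info.Queue, info.Size, info.Paused)
}
```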
@@ -11,10 +11,11 @@ import (
    "os"
    "strings"
    "text/tabwriter"
    "unicode"
    "unicode/utf8"

    "github.com/go-redis/redis/v7"
    "github.com/go-redis/redis/v8"
    "github.com/hibiken/asynq"
    "github.com/hibiken/asynq/inspeq"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/spf13/cobra"
@@ -136,24 +137,25 @@ func createRDB() *rdb.RDB {
}

// createInspector creates an Inspector instance using flag values and returns it.
func createInspector() *inspeq.Inspector {
    var connOpt asynq.RedisConnOpt
func createInspector() *asynq.Inspector {
    return asynq.NewInspector(getRedisConnOpt())
}

func getRedisConnOpt() asynq.RedisConnOpt {
    if useRedisCluster {
        addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
        connOpt = asynq.RedisClusterClientOpt{
        return asynq.RedisClusterClientOpt{
            Addrs: addrs,
            Password: viper.GetString("password"),
            TLSConfig: getTLSConfig(),
        }
    } else {
        connOpt = asynq.RedisClientOpt{
            Addr: viper.GetString("uri"),
            DB: viper.GetInt("db"),
            Password: viper.GetString("password"),
            TLSConfig: getTLSConfig(),
        }
    }
    return inspeq.New(connOpt)
    return asynq.RedisClientOpt{
        Addr: viper.GetString("uri"),
        DB: viper.GetInt("db"),
        Password: viper.GetString("password"),
        TLSConfig: getTLSConfig(),
    }
}

func getTLSConfig() *tls.Config {
@@ -196,3 +198,28 @@ func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
    printRows(tw, format)
    tw.Flush()
}

// formatPayload returns the string representation of the payload if the data is printable.
// If the data is not printable, it returns a message saying so.
func formatPayload(payload []byte) string {
    if !isPrintable(payload) {
        return "non-printable bytes"
    }
    return string(payload)
}

func isPrintable(data []byte) bool {
    if !utf8.Valid(data) {
        return false
    }
    isAllSpace := true
    for _, r := range string(data) {
        if !unicode.IsPrint(r) {
            return false
        }
        if !unicode.IsSpace(r) {
            isAllSpace = false
        }
    }
    return !isAllSpace
}
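The two helpers above decide how every Payload column is rendered: valid, printable, non-blank UTF-8 is shown verbatim, and anything else collapses to a fixed marker. A quick illustration of their behavior, assuming the helpers above and the file's `fmt` import are in scope:

```go
fmt.Println(formatPayload([]byte(`{"user_id": 42}`)))      // {"user_id": 42}
fmt.Println(formatPayload([]byte{0xde, 0xad, 0xbe, 0xef})) // non-printable bytes (invalid UTF-8)
fmt.Println(formatPayload([]byte("   ")))                  // non-printable bytes (all-whitespace)
```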
@@ -35,11 +35,11 @@ The command shows the following for each server:
* Host and PID of the process in which the server is running
* Number of active workers out of worker pool
* Queue configuration
* State of the worker server ("running" | "quiet")
* State of the worker server ("active" | "stopped")
* Time the server was started

A "running" server is pulling tasks from queues and processing them.
A "quiet" server is no longer pulling new tasks from queues`,
An "active" server is pulling tasks from queues and processing them.
A "stopped" server is no longer pulling new tasks from queues`,
    Run: serverList,
}
@@ -22,7 +22,7 @@ import (
var statsCmd = &cobra.Command{
    Use: "stats",
    Short: "Shows current state of the tasks and queues",
    Long: `Stats (asynqmon stats) will show the overview of tasks and queues at that instant.
    Long: `Stats (asynq stats) will show the overview of tasks and queues at that instant.

Specifically, the command shows the following:
* Number of tasks in each state
@@ -10,7 +10,8 @@ import (
    "os"
    "time"

    "github.com/hibiken/asynq/inspeq"
    "github.com/fatih/color"
    "github.com/hibiken/asynq"
    "github.com/spf13/cobra"
)

@@ -26,23 +27,29 @@ func init() {

    taskCmd.AddCommand(taskCancelCmd)

    taskCmd.AddCommand(taskInspectCmd)
    taskInspectCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
    taskInspectCmd.Flags().StringP("id", "i", "", "id of the task")
    taskInspectCmd.MarkFlagRequired("queue")
    taskInspectCmd.MarkFlagRequired("id")

    taskCmd.AddCommand(taskArchiveCmd)
    taskArchiveCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
    taskArchiveCmd.Flags().StringP("key", "k", "", "key of the task")
    taskArchiveCmd.Flags().StringP("id", "i", "", "id of the task")
    taskArchiveCmd.MarkFlagRequired("queue")
    taskArchiveCmd.MarkFlagRequired("key")
    taskArchiveCmd.MarkFlagRequired("id")

    taskCmd.AddCommand(taskDeleteCmd)
    taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
    taskDeleteCmd.Flags().StringP("key", "k", "", "key of the task")
    taskDeleteCmd.Flags().StringP("id", "i", "", "id of the task")
    taskDeleteCmd.MarkFlagRequired("queue")
    taskDeleteCmd.MarkFlagRequired("key")
    taskDeleteCmd.MarkFlagRequired("id")

    taskCmd.AddCommand(taskRunCmd)
    taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
    taskRunCmd.Flags().StringP("key", "k", "", "key of the task")
    taskRunCmd.Flags().StringP("id", "i", "", "id of the task")
    taskRunCmd.MarkFlagRequired("queue")
    taskRunCmd.MarkFlagRequired("key")
    taskRunCmd.MarkFlagRequired("id")

    taskCmd.AddCommand(taskArchiveAllCmd)
    taskArchiveAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
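Every task subcommand now converges on the same required `--queue`/`--id` flag pair in place of the old `--key`. A minimal cobra sketch of that pattern; the command name and handler below are illustrative, not from the repo:

```go
var exampleCmd = &cobra.Command{
	Use:  "example --queue=QUEUE --id=TASK_ID",
	Args: cobra.NoArgs,
	Run: func(cmd *cobra.Command, args []string) {
		qname, _ := cmd.Flags().GetString("queue")
		id, _ := cmd.Flags().GetString("id")
		fmt.Printf("queue=%s id=%s\n", qname, id)
	},
}

func init() {
	exampleCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
	exampleCmd.Flags().StringP("id", "i", "", "id of the task")
	exampleCmd.MarkFlagRequired("queue")
	exampleCmd.MarkFlagRequired("id")
}
```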
@@ -93,6 +100,13 @@ To list the tasks from the second page, run
    Run: taskList,
}

var taskInspectCmd = &cobra.Command{
    Use: "inspect --queue=QUEUE --id=TASK_ID",
    Short: "Display detailed information on the specified task",
    Args: cobra.NoArgs,
    Run: taskInspect,
}

var taskCancelCmd = &cobra.Command{
    Use: "cancel TASK_ID [TASK_ID...]",
    Short: "Cancel one or more active tasks",
@@ -101,42 +115,42 @@ var taskCancelCmd = &cobra.Command{
}

var taskArchiveCmd = &cobra.Command{
    Use: "archive --queue=QUEUE --key=KEY",
    Short: "Archive a task with the given key",
    Use: "archive --queue=QUEUE --id=TASK_ID",
    Short: "Archive a task with the given id",
    Args: cobra.NoArgs,
    Run: taskArchive,
}

var taskDeleteCmd = &cobra.Command{
    Use: "delete --queue=QUEUE --key=KEY",
    Short: "Delete a task with the given key",
    Use: "delete --queue=QUEUE --id=TASK_ID",
    Short: "Delete a task with the given id",
    Args: cobra.NoArgs,
    Run: taskDelete,
}

var taskRunCmd = &cobra.Command{
    Use: "run --queue=QUEUE --key=KEY",
    Short: "Run a task with the given key",
    Use: "run --queue=QUEUE --id=TASK_ID",
    Short: "Run a task with the given id",
    Args: cobra.NoArgs,
    Run: taskRun,
}

var taskArchiveAllCmd = &cobra.Command{
    Use: "archive-all --queue=QUEUE --state=STATE",
    Use: "archiveall --queue=QUEUE --state=STATE",
    Short: "Archive all tasks in the given state",
    Args: cobra.NoArgs,
    Run: taskArchiveAll,
}

var taskDeleteAllCmd = &cobra.Command{
    Use: "delete-all --queue=QUEUE --key=KEY",
    Use: "deleteall --queue=QUEUE --state=STATE",
    Short: "Delete all tasks in the given state",
    Args: cobra.NoArgs,
    Run: taskDeleteAll,
}

var taskRunAllCmd = &cobra.Command{
    Use: "run-all --queue=QUEUE --key=KEY",
    Use: "runall --queue=QUEUE --state=STATE",
    Short: "Run all tasks in the given state",
    Args: cobra.NoArgs,
    Run: taskRunAll,
@@ -183,7 +197,7 @@ func taskList(cmd *cobra.Command, args []string) {

func listActiveTasks(qname string, pageNum, pageSize int) {
    i := createInspector()
    tasks, err := i.ListActiveTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
    tasks, err := i.ListActiveTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
@@ -196,7 +210,7 @@ func listActiveTasks(qname string, pageNum, pageSize int) {
        []string{"ID", "Type", "Payload"},
        func(w io.Writer, tmpl string) {
            for _, t := range tasks {
                fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
                fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload))
            }
        },
    )
@@ -204,7 +218,7 @@ func listActiveTasks(qname string, pageNum, pageSize int) {

func listPendingTasks(qname string, pageNum, pageSize int) {
    i := createInspector()
    tasks, err := i.ListPendingTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
    tasks, err := i.ListPendingTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
@@ -214,10 +228,10 @@ func listPendingTasks(qname string, pageNum, pageSize int) {
        return
    }
    printTable(
        []string{"Key", "Type", "Payload"},
        []string{"ID", "Type", "Payload"},
        func(w io.Writer, tmpl string) {
            for _, t := range tasks {
                fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload)
                fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload))
            }
        },
    )
@@ -225,7 +239,7 @@ func listPendingTasks(qname string, pageNum, pageSize int) {

func listScheduledTasks(qname string, pageNum, pageSize int) {
    i := createInspector()
    tasks, err := i.ListScheduledTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
    tasks, err := i.ListScheduledTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
@@ -235,20 +249,29 @@ func listScheduledTasks(qname string, pageNum, pageSize int) {
        return
    }
    printTable(
        []string{"Key", "Type", "Payload", "Process In"},
        []string{"ID", "Type", "Payload", "Process In"},
        func(w io.Writer, tmpl string) {
            for _, t := range tasks {
                processIn := fmt.Sprintf("%.0f seconds",
                    t.NextProcessAt.Sub(time.Now()).Seconds())
                fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, processIn)
                fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatProcessAt(t.NextProcessAt))
            }
        },
    )
}

// formatProcessAt formats the next process-at time to a human friendly string.
// If processAt time is in the past, returns "right now".
// If processAt time is in the future, returns "in xxx" where xxx is the duration from now.
func formatProcessAt(processAt time.Time) string {
    d := processAt.Sub(time.Now())
    if d < 0 {
        return "right now"
    }
    return fmt.Sprintf("in %v", d.Round(time.Second))
}
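formatProcessAt replaces the old inline "%.0f seconds" formatting with a rounded, human-friendly duration. For illustration, its behavior under the rules in the comment above (assuming the file's `fmt` and `time` imports are in scope):

```go
fmt.Println(formatProcessAt(time.Now().Add(90 * time.Second))) // in 1m30s
fmt.Println(formatProcessAt(time.Now().Add(-time.Minute)))     // right now
```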
func listRetryTasks(qname string, pageNum, pageSize int) {
    i := createInspector()
    tasks, err := i.ListRetryTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
    tasks, err := i.ListRetryTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
@@ -258,16 +281,11 @@ func listRetryTasks(qname string, pageNum, pageSize int) {
        return
    }
    printTable(
        []string{"Key", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry"},
        []string{"ID", "Type", "Payload", "Next Retry", "Last Error", "Last Failed", "Retried", "Max Retry"},
        func(w io.Writer, tmpl string) {
            for _, t := range tasks {
                var nextRetry string
                if d := t.NextProcessAt.Sub(time.Now()); d > 0 {
                    nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
                } else {
                    nextRetry = "right now"
                }
                fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, nextRetry, t.LastError, t.Retried, t.MaxRetry)
                fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatProcessAt(t.NextProcessAt),
                    t.LastErr, formatLastFailedAt(t.LastFailedAt), t.Retried, t.MaxRetry)
            }
        },
    )
@@ -275,7 +293,7 @@ func listRetryTasks(qname string, pageNum, pageSize int) {

func listArchivedTasks(qname string, pageNum, pageSize int) {
    i := createInspector()
    tasks, err := i.ListArchivedTasks(qname, inspeq.PageSize(pageSize), inspeq.Page(pageNum))
    tasks, err := i.ListArchivedTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
@@ -285,19 +303,18 @@ func listArchivedTasks(qname string, pageNum, pageSize int) {
        return
    }
    printTable(
        []string{"Key", "Type", "Payload", "Last Failed", "Last Error"},
        []string{"ID", "Type", "Payload", "Last Failed", "Last Error"},
        func(w io.Writer, tmpl string) {
            for _, t := range tasks {
                fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, t.LastFailedAt, t.LastError)
                fmt.Fprintf(w, tmpl, t.ID, t.Type, formatPayload(t.Payload), formatLastFailedAt(t.LastFailedAt), t.LastErr)
            }
        })
}
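All five list functions share the same page-based shape, only with `asynq.PageSize`/`asynq.Page` options in place of the `inspeq` ones. A minimal, self-contained sketch of a direct call; the Redis address, queue name, and page values are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	// Fetch the second page of retry tasks, 30 per page.
	tasks, err := inspector.ListRetryTasks("default", asynq.PageSize(30), asynq.Page(2))
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tasks {
		fmt.Println(t.ID, t.Type)
	}
}
```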
func taskCancel(cmd *cobra.Command, args []string) {
    r := createRDB()
    i := createInspector()
    for _, id := range args {
        err := r.PublishCancelation(id)
        if err != nil {
        if err := i.CancelProcessing(id); err != nil {
            fmt.Printf("error: could not send cancelation signal: %v\n", err)
            continue
        }
@@ -305,20 +322,76 @@ func taskCancel(cmd *cobra.Command, args []string) {
    }
}

func taskInspect(cmd *cobra.Command, args []string) {
    qname, err := cmd.Flags().GetString("queue")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }
    id, err := cmd.Flags().GetString("id")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }

    i := createInspector()
    info, err := i.GetTaskInfo(qname, id)
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }
    printTaskInfo(info)
}

func printTaskInfo(info *asynq.TaskInfo) {
    bold := color.New(color.Bold)
    bold.Println("Task Info")
    fmt.Printf("Queue: %s\n", info.Queue)
    fmt.Printf("ID: %s\n", info.ID)
    fmt.Printf("Type: %s\n", info.Type)
    fmt.Printf("State: %v\n", info.State)
    fmt.Printf("Retried: %d/%d\n", info.Retried, info.MaxRetry)
    fmt.Println()
    fmt.Printf("Next process time: %s\n", formatNextProcessAt(info.NextProcessAt))
    if len(info.LastErr) != 0 {
        fmt.Println()
        bold.Println("Last Failure")
        fmt.Printf("Failed at: %s\n", formatLastFailedAt(info.LastFailedAt))
        fmt.Printf("Error message: %s\n", info.LastErr)
    }
}

func formatNextProcessAt(processAt time.Time) string {
    if processAt.IsZero() || processAt.Unix() == 0 {
        return "n/a"
    }
    if processAt.Before(time.Now()) {
        return "now"
    }
    return fmt.Sprintf("%s (in %v)", processAt.Format(time.UnixDate), processAt.Sub(time.Now()).Round(time.Second))
}

func formatLastFailedAt(lastFailedAt time.Time) string {
    if lastFailedAt.IsZero() || lastFailedAt.Unix() == 0 {
        return ""
    }
    return lastFailedAt.Format(time.UnixDate)
}
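taskCancel and the new taskInspect both go through the Inspector now: `CancelProcessing` replaces the RDB-level `PublishCancelation`, and `GetTaskInfo` backs the new `inspect` subcommand. A short sketch of the two calls in the file's own error-handling idiom; the queue name and task id are placeholders, and cancelation is best effort:

```go
i := createInspector()
// Ask the worker currently processing this task to stop (best effort).
if err := i.CancelProcessing("example-task-id"); err != nil {
	fmt.Printf("error: %v\n", err)
	os.Exit(1)
}
// Fetch the task's current state, retry count, and last error.
info, err := i.GetTaskInfo("default", "example-task-id")
if err != nil {
	fmt.Printf("error: %v\n", err)
	os.Exit(1)
}
fmt.Println(info.State, info.Retried, info.LastErr)
```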
func taskArchive(cmd *cobra.Command, args []string) {
    qname, err := cmd.Flags().GetString("queue")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }
    key, err := cmd.Flags().GetString("key")
    id, err := cmd.Flags().GetString("id")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }

    i := createInspector()
    err = i.ArchiveTaskByKey(qname, key)
    err = i.ArchiveTask(qname, id)
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
@@ -332,14 +405,14 @@ func taskDelete(cmd *cobra.Command, args []string) {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }
    key, err := cmd.Flags().GetString("key")
    id, err := cmd.Flags().GetString("id")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }

    i := createInspector()
    err = i.DeleteTaskByKey(qname, key)
    err = i.DeleteTask(qname, id)
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
@@ -353,14 +426,14 @@ func taskRun(cmd *cobra.Command, args []string) {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }
    key, err := cmd.Flags().GetString("key")
    id, err := cmd.Flags().GetString("id")
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
    }

    i := createInspector()
    err = i.RunTaskByKey(qname, key)
    err = i.RunTask(qname, id)
    if err != nil {
        fmt.Printf("error: %v\n", err)
        os.Exit(1)
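The archive/delete/run trio follows the same rename: `XxxTaskByKey(qname, key)` becomes `XxxTask(qname, id)`. A sketch of the new signatures with placeholder arguments, again in the file's own idiom:

```go
i := createInspector()
id := "example-task-id" // placeholder task id
// Each call targets one task by queue name plus task id.
if err := i.ArchiveTask("default", id); err != nil {
	fmt.Printf("error: %v\n", err)
	os.Exit(1)
}
if err := i.RunTask("default", id); err != nil {
	fmt.Printf("error: %v\n", err)
	os.Exit(1)
}
if err := i.DeleteTask("default", id); err != nil {
	fmt.Printf("error: %v\n", err)
	os.Exit(1)
}
```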
13 tools/go.mod
@@ -3,20 +3,13 @@ module github.com/hibiken/asynq/tools
go 1.13

require (
    github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6 // indirect
    github.com/coreos/go-etcd v2.0.0+incompatible // indirect
    github.com/cpuguy83/go-md2man v1.0.10 // indirect
    github.com/fatih/color v1.9.0
    github.com/go-redis/redis/v7 v7.4.0
    github.com/google/uuid v1.1.1
    github.com/hibiken/asynq v0.14.0
    github.com/go-redis/redis/v8 v8.11.2
    github.com/google/uuid v1.2.0
    github.com/hibiken/asynq v0.17.1
    github.com/mitchellh/go-homedir v1.1.0
    github.com/spf13/cast v1.3.1
    github.com/spf13/cobra v1.1.1
    github.com/spf13/viper v1.7.0
    github.com/ugorji/go v1.1.4 // indirect
    github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8 // indirect
    github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77 // indirect
)

replace github.com/hibiken/asynq => ./..
132 tools/go.sum
@@ -18,44 +18,46 @@ github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAE
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v8 v8.11.2 h1:WqlSpAwz8mxDSMCvbyz1Mkiqe0LE5OY4j3lgkvu1Ts0=
github.com/go-redis/redis/v8 v8.11.2/go.mod h1:DLomh7y2e3ggQXQLd1YgmvIfecPJoFl7WU5SOQ/r06M=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
@@ -66,25 +68,34 @@ github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfb
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -110,7 +121,6 @@ github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
@@ -129,7 +139,6 @@ github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORN
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
@@ -154,12 +163,17 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.15.0 h1:1V1NfVQR87RtWAgp1lv9JZJ5Jap+XFGKPi00andXGi4=
github.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.5 h1:7n6FEkpFmfCoo2t+YYqXH0evK+a9ICQz0xcAy9dYcaQ=
github.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
@@ -172,6 +186,7 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -181,7 +196,6 @@ github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
@@ -198,49 +212,41 @@ github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.1 h1:KfztREH0tPxJJ+geloSLaAkaPkr4ki2Er5quFV1TDo4=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -259,6 +265,7 @@ golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -272,11 +279,12 @@ golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -285,6 +293,7 @@ golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -292,7 +301,6 @@ golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -301,14 +309,19 @@ golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091 h1:DMyOG0U+gKfu8JZzg2UQe9MeaC1X+xQWlAKcRnjxjCw=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -322,6 +335,7 @@ golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
@@ -329,9 +343,13 @@ golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -350,17 +368,28 @@ google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
@@ -369,14 +398,15 @@ gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkep
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=