mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 21:26:14 +08:00

Compare commits


81 Commits

Author SHA1 Message Date
Ken Hibino
476862dd7b v0.13.1 2020-11-22 12:26:52 -08:00
Ken Hibino
dcd873fa2a fix: Wait for specified time duration before shutdown 2020-11-22 12:25:27 -08:00
strobus
2604bb2192 add tls support to command line tool 2020-10-14 15:13:05 -07:00
Ken Hibino
942345ee80 v0.13.0 2020-10-13 06:33:47 -07:00
Ken Hibino
1f059eeee1 Update docs for periodic tasks feature 2020-10-13 06:31:47 -07:00
Ken Hibino
4ae73abdaa Minor update to asynq cron command 2020-10-13 06:31:47 -07:00
Ken Hibino
96b2318300 Add EnqueueErrorHandler option to SchedulerOpts 2020-10-13 06:31:47 -07:00
Ken Hibino
8312515e64 Update Option interface
- Added `String()`, `Type()`, and `Value()` methods to the interface to
  aid with debugging and error handling.
2020-10-13 06:31:47 -07:00
Ken Hibino
50e7f38365 Add Scheduler
- Renamed previously called scheduler to forwarder to resolve name
  conflicts
2020-10-13 06:31:47 -07:00
Ken Hibino
fadcae76d6 Add String and MarshalJSON methods to Payload type 2020-09-20 07:33:23 -07:00
Ken Hibino
a2d4ead989 Fix comments in Config 2020-09-14 21:48:05 -07:00
Ken Hibino
82b6828f43 Replace benchcmp with benchstat 2020-09-14 06:59:55 -07:00
Ken Hibino
3114987428 v0.12.0 2020-09-12 13:34:27 -07:00
Ken Hibino
1ee3b10104 Update changelog 2020-09-12 12:59:03 -07:00
Ken Hibino
6d720d6a05 Update demo.gif for CLI demo 2020-09-12 12:59:03 -07:00
Ken Hibino
3e6981170d Use color package to bold fonts in CLI output 2020-09-12 12:59:03 -07:00
Ken Hibino
a9aa480551 Update migrate command 2020-09-12 12:59:03 -07:00
Ken Hibino
9d41de795a Mention about testing using redis cluster in CONTRIBUTING.md 2020-09-12 12:59:03 -07:00
Ken Hibino
c43fb21a0a Minor test updates 2020-09-12 12:59:03 -07:00
Ken Hibino
a293efcdab Add Close to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
69d7ec725a Close redis client after each test run 2020-09-12 12:59:03 -07:00
Ken Hibino
450a9aa1e2 Add MaxRedirects field in RedisClusterClientOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
6e294a7013 Add Username field to RedisConnOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
c26b7469bd Display cluster info in stats command when --cluster flag is passed 2020-09-12 12:59:03 -07:00
Ken Hibino
818c2d6f35 Add GetQueueName helper to extract queue name from context 2020-09-12 12:59:03 -07:00
Ken Hibino
e09870a08a Update package documentation 2020-09-12 12:59:03 -07:00
Ken Hibino
ac3d5b126a Update README 2020-09-12 12:59:03 -07:00
Ken Hibino
29e542e591 Rename Enqueue methods in Inspector to Run 2020-09-12 12:59:03 -07:00
Ken Hibino
a891ce5568 Rename InProgress to Active 2020-09-12 12:59:03 -07:00
Ken Hibino
ebe3c4083f Rename NextEnqueueAt to NextProcessAt 2020-09-12 12:59:03 -07:00
Ken Hibino
c8c47fcbf0 Rename Enqueued to Pending 2020-09-12 12:59:03 -07:00
Ken Hibino
cca680a7fd Change Client.Enqueue to take ProcessAt and ProcessIn as Option 2020-09-12 12:59:03 -07:00
Ken Hibino
8076b5ae50 Use different redis db number for rdb package tests 2020-09-12 12:59:03 -07:00
Ken Hibino
a42c174dae Display cluster keyslot and nodes in queueList command 2020-09-12 12:59:03 -07:00
Ken Hibino
a88325cb96 Add ClusterNodes and ClusterKeySlot in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
eb739a0258 Fix flaky test 2020-09-12 12:59:03 -07:00
Ken Hibino
a9c31553b8 Add redis-cluster support in asynq CLI 2020-09-12 12:59:03 -07:00
Ken Hibino
dab8295883 Validate queue name in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
131ac823fd Return error if queue name is empty when enqueueing 2020-09-12 12:59:03 -07:00
Ken Hibino
4897dba397 Upgrade redis client lib to v7.4.0 2020-09-12 12:59:03 -07:00
Ken Hibino
6b96459881 Add test flags to run tests using redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
572eb338d5 Fix flaky ProcessorRetry test 2020-09-12 12:59:03 -07:00
Ken Hibino
27f4027447 Add RedisClusterClientOpt to connect to redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
ee1afd12f5 Fix done lua script
If UniqueKey is an empty string, do not provide the key to Lua script
because that will cause CROSSSLOT error in redis cluster (since it
doesn't have any hash tag).
2020-09-12 12:59:03 -07:00
Ken Hibino
3ac548e97c Fix dequeue Lua script to use a single hash tag 2020-09-12 12:59:03 -07:00
Ken Hibino
f38f94b947 Restructure CLI commands with subcommands 2020-09-12 12:59:03 -07:00
Ken Hibino
d6f389e63f Add Queues method to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
118ef27bf2 Update RemoveQueue in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
fad0696828 Fix errors in inspector tests 2020-09-12 12:59:03 -07:00
Ken Hibino
4037b41479 Fix client tests 2020-09-12 12:59:03 -07:00
Ken Hibino
96f23d88cd Add more processor tests 2020-09-12 12:59:03 -07:00
Ken Hibino
83bdca5220 Fix test build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2f226dfb84 Update ListServers and ListWorkers methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
3f26122ac0 Fix more build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2a18181501 Fix inspector build error 2020-09-12 12:59:03 -07:00
Ken Hibino
aa2676bb57 Update Broker interface 2020-09-12 12:59:03 -07:00
Ken Hibino
9348a62691 Update Inspector API 2020-09-12 12:59:03 -07:00
Ken Hibino
f59de9ac56 Update all delete methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
996a6c0ead Update all kill methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
47e9ba4eba Update enqueue methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
dbf140a767 Update all list methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
5f82b4b365 Update HistoricalStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
44a3d177f0 Update Pause and Unpause methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
24b13bd865 Update CurrentStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
d25090c669 Add AllQueues method to RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
b5caefd663 Remove stale benchmark test 2020-09-12 12:59:03 -07:00
Ken Hibino
becd26479b Update WriteServerState and ClearServerState in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
4b81b91d3e Minor fix 2020-09-12 12:59:03 -07:00
Ken Hibino
8e23b865e9 Update recoverer 2020-09-12 12:59:03 -07:00
Ken Hibino
a873d488ee Update ListDeadlineExceeded in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
e0a8f1252a Update scheduler to check and enqueue for only the specified queues. 2020-09-12 12:59:03 -07:00
Ken Hibino
650d7fdbe9 Update CheckAndEnqueue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f6d504939e Update Requeue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
74f08795f8 Update Kill method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
35b2b1782e Update Retry method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f63dcce0c0 Update Done method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
565f86ee4f Update Dequeue command in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
94aa878060 Update Enqueue and Schedule commands in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
50b6034bf9 Move unique key generator function to base 2020-09-12 12:59:03 -07:00
Ken Hibino
154113d0d0 Update base package to generate redis keys with hashtag 2020-09-12 12:59:03 -07:00
Ken Hibino
669c7995c4 Run CI builds using go v1.15.x 2020-09-02 06:34:58 -07:00
71 changed files with 7897 additions and 4775 deletions


@@ -2,12 +2,12 @@ language: go
go_import_path: github.com/hibiken/asynq
git:
  depth: 1
-go: [1.13.x, 1.14.x]
+go: [1.13.x, 1.14.x, 1.15.x]
script:
  - go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
-  - go test -run=XXX -bench=. -loglevel=debug ./...
+  - go test -run=^$ -bench=. -loglevel=debug ./...
services:
  - redis-server
after_success:
-  - bash ./.travis/benchcmp.sh
+  - travis_wait 60 bash ./.travis/benchstat.sh
  - bash <(curl -s https://codecov.io/bash)


@@ -2,17 +2,19 @@ if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
  REMOTE_URL="$(git config --get remote.origin.url)";
  cd ${TRAVIS_BUILD_DIR}/.. && \
  git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
+  # turn the detached message off
+  git config --global advice.detachedHead false && \
  cd "${TRAVIS_REPO_SLUG}-bench" && \
  # Benchmark master
  git checkout master && \
-  go test -run=XXX -bench=. ./... > master.txt && \
+  go test -run=^$ -bench=. -count=5 -timeout=60m -benchmem ./... > master.txt && \
  # Benchmark feature branch
  git checkout ${TRAVIS_COMMIT} && \
-  go test -run=XXX -bench=. ./... > feature.txt && \
+  go test -run=^$ -bench=. -count=5 -timeout=60m -benchmem ./... > feature.txt && \
  # compare two benchmarks
-  go get -u golang.org/x/tools/cmd/benchcmp && \
-  benchcmp master.txt feature.txt;
+  go get -u golang.org/x/perf/cmd/benchstat && \
+  benchstat master.txt feature.txt;
fi


@@ -7,6 +7,89 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.13.1] - 2020-11-22
### Fixed
- Fixed processor to wait for the specified time duration before forcefully shutting down workers.
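For example, a minimal sketch of the behavior this fix restores; it assumes the wait duration referenced above is the `ShutdownTimeout` field of `Config`:
```go
package main

import (
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    // Workers now get up to ShutdownTimeout to finish in-flight tasks
    // before they are forcefully shut down.
    srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
        Concurrency:     10,
        ShutdownTimeout: 10 * time.Second,
    })
    _ = srv // register handlers and call srv.Run(...) as usual
}
```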
## [0.13.0] - 2020-10-13
### Added
- `Scheduler` type is added to enable periodic tasks. See the godoc for its APIs and [wiki](https://github.com/hibiken/asynq/wiki/Periodic-Tasks) for the getting-started guide.
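A minimal sketch of registering a periodic task with the new `Scheduler` (constructor and method names are taken from the commits above; the cron spec, task type, and error handling are placeholders):
```go
package main

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    scheduler := asynq.NewScheduler(
        asynq.RedisClientOpt{Addr: "localhost:6379"},
        &asynq.SchedulerOpts{
            // EnqueueErrorHandler (added in the commits above) is called when
            // the scheduler fails to enqueue a registered task.
            EnqueueErrorHandler: func(task *asynq.Task, opts []asynq.Option, err error) {
                log.Printf("could not enqueue %q: %v", task.Type, err)
            },
        },
    )

    // Enqueue the "cleanup" task every five minutes.
    if _, err := scheduler.Register("*/5 * * * *", asynq.NewTask("cleanup", nil)); err != nil {
        log.Fatal(err)
    }
    if err := scheduler.Run(); err != nil {
        log.Fatal(err)
    }
}
```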
### Changed
- The `Option` interface has changed. See the godoc for the new interface.
This change has no impact as long as you are using the exported functions (e.g. `MaxRetry`, `Queue`, etc.)
to create `Option`s.
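A short sketch of what the new interface exposes (the printed values are illustrative):
```go
package main

import (
    "fmt"

    "github.com/hibiken/asynq"
)

func main() {
    // Options are still created through the exported constructors; the
    // returned values now also implement String, Type, and Value.
    opt := asynq.MaxRetry(5)
    fmt.Println(opt.String())                    // "MaxRetry(5)"
    fmt.Println(opt.Type() == asynq.MaxRetryOpt) // true
    fmt.Println(opt.Value())                     // 5
}
```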
### Added
- `Payload.String() string` method is added
- `Payload.MarshalJSON() ([]byte, error)` method is added
## [0.12.0] - 2020-09-12
**IMPORTANT**: If you are upgrading from a previous version, please install the latest version of the CLI (`go get -u github.com/hibiken/asynq/tools/asynq`) and run the `asynq migrate` command. No process should be writing to Redis while you run the migration command.
### The semantics of queues have changed
Previously, we called tasks that are ready to be processed *"Enqueued tasks"*, and other tasks that are scheduled to be processed in the future *"Scheduled tasks"*, etc.
We changed the semantics of *"Enqueue"* slightly: all tasks that the client pushes to Redis are *Enqueued* to a queue. Within a queue, tasks transition from one state to another.
Possible task states are:
- `Pending`: task is ready to be processed (previously called "Enqueued")
- `Active`: task is currently being processed (previously called "InProgress")
- `Scheduled`: task is scheduled to be processed in the future
- `Retry`: task failed to be processed and will be retried in the future
- `Dead`: task has exhausted all of its retries and is stored for manual inspection
**This change in semantics is reflected in the new `Inspector` API and CLI commands.**
---
### Changed
#### `Client`
Use `ProcessIn` or `ProcessAt` option to schedule a task instead of `EnqueueIn` or `EnqueueAt`.
| Previously | v0.12.0 |
|-----------------------------|--------------------------------------------|
| `client.EnqueueAt(t, task)` | `client.Enqueue(task, asynq.ProcessAt(t))` |
| `client.EnqueueIn(d, task)` | `client.Enqueue(task, asynq.ProcessIn(d))` |
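In code, the migration looks roughly like this (addresses and payloads are placeholders):
```go
package main

import (
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
    defer client.Close()

    task := asynq.NewTask("email:deliver", map[string]interface{}{"user_id": 42})

    // Previously: client.EnqueueIn(24*time.Hour, task)
    if _, err := client.Enqueue(task, asynq.ProcessIn(24*time.Hour)); err != nil {
        log.Fatal(err)
    }

    // Previously: client.EnqueueAt(t, task)
    if _, err := client.Enqueue(task, asynq.ProcessAt(time.Now().Add(time.Hour))); err != nil {
        log.Fatal(err)
    }
}
```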
#### `Inspector`
All Inspector methods are scoped to a queue, and the methods take `qname (string)` as the first argument.
`EnqueuedTask` is renamed to `PendingTask`, along with its corresponding methods.
`InProgressTask` is renamed to `ActiveTask`, along with its corresponding methods.
The verb "Enqueue" in method names is replaced by "Run" (e.g. `EnqueueAllScheduledTasks` --> `RunAllScheduledTasks`).
#### `CLI`
CLI commands are restructured to use subcommands. Commands are organized into a few management commands:
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete kill run delete-all kill-all run-all]`
- `asynq server [ls]`
To view details on any command, use `asynq help <command> <subcommand>`.
### Added
#### `RedisConnOpt`
- `RedisClusterClientOpt` is added to connect to Redis Cluster.
- `Username` field is added to all `RedisConnOpt` types to authenticate the connection when Redis ACLs are used.
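For example (addresses and credentials are placeholders; the fields mirror the `RedisClusterClientOpt` definition in the diff further down):
```go
package main

import "github.com/hibiken/asynq"

func main() {
    // Username and Password are only needed when Redis ACLs are enabled.
    connOpt := asynq.RedisClusterClientOpt{
        Addrs:    []string{"localhost:7000", "localhost:7001", "localhost:7002"},
        Username: "asynq-user",
        Password: "secret",
    }

    client := asynq.NewClient(connOpt)
    defer client.Close()

    // Servers accept the same RedisConnOpt value.
    _ = asynq.NewServer(connOpt, asynq.Config{Concurrency: 10})
}
```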
#### `Client`
- `ProcessIn(d time.Duration) Option` and `ProcessAt(t time.Time) Option` are added to replace `EnqueueIn` and `EnqueueAt` functionality.
#### `Inspector`
- `Queues() ([]string, error)` method is added to get all queue names.
- `ClusterKeySlot(qname string) (int64, error)` method is added to get queue's hash slot in Redis cluster.
- `ClusterNodes(qname string) ([]ClusterNode, error)` method is added to get a list of Redis cluster nodes for the given queue.
- `Close() error` method is added to close the connection with Redis.
#### `Handler`
- `GetQueueName(ctx context.Context) (string, bool)` helper is added to extract queue name from a context.
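A sketch of using the helper inside a handler (task type and logging are placeholders):
```go
package main

import (
    "context"
    "log"

    "github.com/hibiken/asynq"
)

func handleEmailDelivery(ctx context.Context, t *asynq.Task) error {
    // GetQueueName reports which queue the task was pulled from.
    if qname, ok := asynq.GetQueueName(ctx); ok {
        log.Printf("processing %q from queue %q", t.Type, qname)
    }
    // ... task handling logic ...
    return nil
}

func main() {
    mux := asynq.NewServeMux()
    mux.HandleFunc("email:deliver", handleEmailDelivery)
}
```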
## [0.11.0] - 2020-07-28
### Added


@@ -45,6 +45,7 @@ Thank you! We'll try to respond as quickly as possible.
6. Create a new pull request
Please try to keep your pull request focused in scope and avoid including unrelated commits.
Please run tests against redis cluster locally with the `--redis_cluster` flag to ensure that the code works with Redis cluster. TODO: Run tests using Redis cluster on CI.
After you have submitted your pull request, we'll try to get back to you as soon as possible. We may suggest some changes or improvements.


@@ -9,7 +9,7 @@
## Overview
-Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
+Asynq is a Go library for queueing tasks and processing them asynchronously with workers. It's backed by Redis and is designed to be scalable yet easy to get started.
Highlevel overview of how Asynq works:
@@ -42,7 +42,9 @@ A system can consist of multiple worker servers and brokers, giving way to high
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
-- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
+- [Periodic Tasks](https://github.com/hibiken/asynq/wiki/Periodic-Tasks)
+- [Support Redis Cluster](https://github.com/hibiken/asynq/wiki/Redis-Cluster) for automatic sharding and high availability
+- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for high availability
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Quickstart
@@ -66,8 +68,8 @@ import (
// A list of task types.
const (
-    EmailDelivery   = "email:deliver"
-    ImageProcessing = "image:process"
+    TypeEmailDelivery = "email:deliver"
+    TypeImageResize   = "image:resize"
)
//----------------------------------------------
@@ -77,12 +79,12 @@ const (
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
    payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
-    return asynq.NewTask(EmailDelivery, payload)
+    return asynq.NewTask(TypeEmailDelivery, payload)
}
-func NewImageProcessingTask(src, dst string) *asynq.Task {
-    payload := map[string]interface{}{"src": src, "dst": dst}
-    return asynq.NewTask(ImageProcessing, payload)
+func NewImageResizeTask(src string) *asynq.Task {
+    payload := map[string]interface{}{"src": src}
+    return asynq.NewTask(TypeImageResize, payload)
}
//---------------------------------------------------------------
@@ -103,7 +105,7 @@ func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
        return err
    }
    fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
-    // Email delivery logic ...
+    // Email delivery code ...
    return nil
}
@@ -117,12 +119,8 @@ func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
    if err != nil {
        return err
    }
-    dst, err := t.Payload.GetString("dst")
-    if err != nil {
-        return err
-    }
-    fmt.Printf("Process image: src = %s, dst = %s\n", src, dst)
-    // Image processing logic ...
+    fmt.Printf("Resize image: src = %s\n", src)
+    // Image resizing code ...
    return nil
}
@@ -131,9 +129,7 @@ func NewImageProcessor() *ImageProcessor {
}
```
-In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
-A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
-Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
+In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
```go
package main
@@ -167,11 +163,11 @@ func main() {
    // ------------------------------------------------------------
    // Example 2: Schedule task to be processed in the future.
-    // Use (*Client).EnqueueIn or (*Client).EnqueueAt.
+    // Use ProcessIn or ProcessAt option.
    // ------------------------------------------------------------
    t = tasks.NewEmailDeliveryTask(42, "other:template:id")
-    res, err = c.EnqueueIn(24*time.Hour, t)
+    res, err = c.Enqueue(t, asynq.ProcessIn(24*time.Hour))
    if err != nil {
        log.Fatal("could not schedule task: %v", err)
    }
@@ -179,13 +175,13 @@ func main() {
    // ----------------------------------------------------------------------------
-    // Example 3: Set options to tune task processing behavior.
+    // Example 3: Set other options to tune task processing behavior.
    // Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
    // ----------------------------------------------------------------------------
-    c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
-    t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
+    c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(3*time.Minute))
+    t = tasks.NewImageResizeTask("some/blobstore/path")
    res, err = c.Enqueue(t)
    if err != nil {
        log.Fatal("could not enqueue task: %v", err)
@@ -197,7 +193,7 @@ func main() {
    // Options passed at enqueue time override default ones, if any.
    // ---------------------------------------------------------------------------
-    t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
+    t = tasks.NewImageResizeTask("some/blobstore/path")
    res, err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
    if err != nil {
        log.Fatal("could not enqueue task: %v", err)
@@ -206,7 +202,7 @@ func main() {
}
```
-Next, create a worker server to process these tasks in the background.
+Next, start a worker server to process these tasks in the background.
To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
@@ -240,8 +236,8 @@ func main() {
    // mux maps a type to a handler
    mux := asynq.NewServeMux()
-    mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
-    mux.Handle(tasks.ImageProcessing, tasks.NewImageProcessor())
+    mux.HandleFunc(tasks.TypeEmailDelivery, tasks.HandleEmailDeliveryTask)
+    mux.Handle(tasks.TypeImageResize, tasks.NewImageProcessor())
    // ...register other handlers...
    if err := srv.Run(mux); err != nil {
@@ -282,7 +278,7 @@ go get -u github.com/hibiken/asynq/tools/asynq
| Dependency                 | Version |
| -------------------------- | ------- |
-| [Redis](https://redis.io/) | v2.8+   |
+| [Redis](https://redis.io/) | v3.0+   |
| [Go](https://golang.org/)  | v1.13+  |
## Contributing


@@ -37,7 +37,9 @@ func NewTask(typename string, payload map[string]interface{}) *Task {
// //
// RedisConnOpt represents a sum of following types: // RedisConnOpt represents a sum of following types:
// //
// RedisClientOpt | *RedisClientOpt | RedisFailoverClientOpt | *RedisFailoverClientOpt // - RedisClientOpt
// - RedisFailoverClientOpt
// - RedisClusterClientOpt
type RedisConnOpt interface{} type RedisConnOpt interface{}
// RedisClientOpt is used to create a redis client that connects // RedisClientOpt is used to create a redis client that connects
@@ -50,7 +52,12 @@ type RedisClientOpt struct {
// Redis server address in "host:port" format. // Redis server address in "host:port" format.
Addr string Addr string
// Redis server password. // Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string Password string
// Redis DB to select after connecting to a server. // Redis DB to select after connecting to a server.
@@ -81,7 +88,12 @@ type RedisFailoverClientOpt struct {
// Redis sentinel password. // Redis sentinel password.
SentinelPassword string SentinelPassword string
// Redis server password. // Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string Password string
// Redis DB to select after connecting to a server. // Redis DB to select after connecting to a server.
@@ -97,6 +109,30 @@ type RedisFailoverClientOpt struct {
TLSConfig *tls.Config TLSConfig *tls.Config
} }
// RedisClusterClientOpt is used to create a redis client that connects to
// redis cluster.
type RedisClusterClientOpt struct {
// A seed list of host:port addresses of cluster nodes.
Addrs []string
// The maximum number of retries before giving up.
// Command is retried on network errors and MOVED/ASK redirects.
// Default is 8 retries.
MaxRedirects int
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid. // ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed. // It returns a non-nil error if uri cannot be parsed.
// //
@@ -173,12 +209,13 @@ func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
// createRedisClient returns a redis client given a redis connection configuration. // createRedisClient returns a redis client given a redis connection configuration.
// //
// Passing an unexpected type as a RedisConnOpt argument will cause panic. // Passing an unexpected type as a RedisConnOpt argument will cause panic.
func createRedisClient(r RedisConnOpt) *redis.Client { func createRedisClient(r RedisConnOpt) redis.UniversalClient {
switch r := r.(type) { switch r := r.(type) {
case *RedisClientOpt: case *RedisClientOpt:
return redis.NewClient(&redis.Options{ return redis.NewClient(&redis.Options{
Network: r.Network, Network: r.Network,
Addr: r.Addr, Addr: r.Addr,
Username: r.Username,
Password: r.Password, Password: r.Password,
DB: r.DB, DB: r.DB,
PoolSize: r.PoolSize, PoolSize: r.PoolSize,
@@ -188,6 +225,7 @@ func createRedisClient(r RedisConnOpt) *redis.Client {
return redis.NewClient(&redis.Options{ return redis.NewClient(&redis.Options{
Network: r.Network, Network: r.Network,
Addr: r.Addr, Addr: r.Addr,
Username: r.Username,
Password: r.Password, Password: r.Password,
DB: r.DB, DB: r.DB,
PoolSize: r.PoolSize, PoolSize: r.PoolSize,
@@ -198,6 +236,7 @@ func createRedisClient(r RedisConnOpt) *redis.Client {
MasterName: r.MasterName, MasterName: r.MasterName,
SentinelAddrs: r.SentinelAddrs, SentinelAddrs: r.SentinelAddrs,
SentinelPassword: r.SentinelPassword, SentinelPassword: r.SentinelPassword,
Username: r.Username,
Password: r.Password, Password: r.Password,
DB: r.DB, DB: r.DB,
PoolSize: r.PoolSize, PoolSize: r.PoolSize,
@@ -208,11 +247,28 @@ func createRedisClient(r RedisConnOpt) *redis.Client {
MasterName: r.MasterName, MasterName: r.MasterName,
SentinelAddrs: r.SentinelAddrs, SentinelAddrs: r.SentinelAddrs,
SentinelPassword: r.SentinelPassword, SentinelPassword: r.SentinelPassword,
Username: r.Username,
Password: r.Password, Password: r.Password,
DB: r.DB, DB: r.DB,
PoolSize: r.PoolSize, PoolSize: r.PoolSize,
TLSConfig: r.TLSConfig, TLSConfig: r.TLSConfig,
}) })
case RedisClusterClientOpt:
return redis.NewClusterClient(&redis.ClusterOptions{
Addrs: r.Addrs,
MaxRedirects: r.MaxRedirects,
Username: r.Username,
Password: r.Password,
TLSConfig: r.TLSConfig,
})
case *RedisClusterClientOpt:
return redis.NewClusterClient(&redis.ClusterOptions{
Addrs: r.Addrs,
MaxRedirects: r.MaxRedirects,
Username: r.Username,
Password: r.Password,
TLSConfig: r.TLSConfig,
})
default: default:
panic(fmt.Sprintf("asynq: unexpected type %T for RedisConnOpt", r)) panic(fmt.Sprintf("asynq: unexpected type %T for RedisConnOpt", r))
} }


@@ -7,6 +7,7 @@ package asynq
import ( import (
"flag" "flag"
"sort" "sort"
"strings"
"testing" "testing"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
@@ -24,6 +25,9 @@ var (
redisAddr string redisAddr string
redisDB int redisDB int
useRedisCluster bool
redisClusterAddrs string // comma-separated list of host:port
testLogLevel = FatalLevel testLogLevel = FatalLevel
) )
@@ -32,23 +36,52 @@ var testLogger *log.Logger
func init() { func init() {
flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing") flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing")
flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing") flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing")
flag.BoolVar(&useRedisCluster, "redis_cluster", false, "use redis cluster as a broker in testing")
flag.StringVar(&redisClusterAddrs, "redis_cluster_addrs", "localhost:7000,localhost:7001,localhost:7002", "comma separated list of redis server addresses")
flag.Var(&testLogLevel, "loglevel", "log level to use in testing") flag.Var(&testLogLevel, "loglevel", "log level to use in testing")
testLogger = log.NewLogger(nil) testLogger = log.NewLogger(nil)
testLogger.SetLevel(toInternalLogLevel(testLogLevel)) testLogger.SetLevel(toInternalLogLevel(testLogLevel))
} }
func setup(tb testing.TB) *redis.Client { func setup(tb testing.TB) (r redis.UniversalClient) {
tb.Helper() tb.Helper()
r := redis.NewClient(&redis.Options{ if useRedisCluster {
Addr: redisAddr, addrs := strings.Split(redisClusterAddrs, ",")
DB: redisDB, if len(addrs) == 0 {
}) tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
r = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
})
} else {
r = redis.NewClient(&redis.Options{
Addr: redisAddr,
DB: redisDB,
})
}
// Start each test with a clean slate. // Start each test with a clean slate.
h.FlushDB(tb, r) h.FlushDB(tb, r)
return r return r
} }
func getRedisConnOpt(tb testing.TB) RedisConnOpt {
tb.Helper()
if useRedisCluster {
addrs := strings.Split(redisClusterAddrs, ",")
if len(addrs) == 0 {
tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
return RedisClusterClientOpt{
Addrs: addrs,
}
}
return RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
}
var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task { var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
out := append([]*Task(nil), in...) // Copy input to avoid mutating it out := append([]*Task(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {


@@ -18,10 +18,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
srv := NewServer(redis, Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
@@ -37,6 +34,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(count) wg.Add(count)
@@ -61,10 +59,7 @@ func BenchmarkEndToEnd(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
srv := NewServer(redis, Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
@@ -82,10 +77,11 @@ func BenchmarkEndToEnd(b *testing.B) {
} }
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil { if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(count * 2) wg.Add(count * 2)
@@ -127,10 +123,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
srv := NewServer(redis, Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
@@ -160,6 +153,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
client.Close()
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(highCount + defaultCount + lowCount) wg.Add(highCount + defaultCount + lowCount)
@@ -185,10 +179,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup b.StopTimer() // begin setup
setup(b) setup(b)
redis := &RedisClientOpt{ redis := getRedisConnOpt(b)
Addr: redisAddr,
DB: redisDB,
}
client := NewClient(redis) client := NewClient(redis)
srv := NewServer(redis, Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
@@ -207,7 +198,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
// Schedule 10,000 tasks. // Schedule 10,000 tasks.
for i := 0; i < count; i++ { for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i}) t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if _, err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil { if _, err := client.Enqueue(t, ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err) b.Fatalf("could not enqueue a task: %v", err)
} }
} }
@@ -233,6 +224,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
srv.Stop() srv.Stop()
client.Close()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }

client.go

@@ -7,7 +7,6 @@ package asynq
import ( import (
"errors" "errors"
"fmt" "fmt"
"sort"
"strings" "strings"
"sync" "sync"
"time" "time"
@@ -29,7 +28,7 @@ type Client struct {
rdb *rdb.RDB rdb *rdb.RDB
} }
// NewClient and returns a new Client given a redis connection option. // NewClient returns a new Client instance given a redis connection option.
func NewClient(r RedisConnOpt) *Client { func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r)) rdb := rdb.NewRDB(createRedisClient(r))
return &Client{ return &Client{
@@ -38,16 +37,39 @@ func NewClient(r RedisConnOpt) *Client {
} }
} }
type OptionType int
const (
MaxRetryOpt OptionType = iota
QueueOpt
TimeoutOpt
DeadlineOpt
UniqueOpt
ProcessAtOpt
ProcessInOpt
)
// Option specifies the task processing behavior. // Option specifies the task processing behavior.
type Option interface{} type Option interface {
// String returns a string representation of the option.
String() string
// Type describes the type of the option.
Type() OptionType
// Value returns a value used to create this option.
Value() interface{}
}
// Internal option representations. // Internal option representations.
type ( type (
retryOption int retryOption int
queueOption string queueOption string
timeoutOption time.Duration timeoutOption time.Duration
deadlineOption time.Time deadlineOption time.Time
uniqueOption time.Duration uniqueOption time.Duration
processAtOption time.Time
processInOption time.Duration
) )
// MaxRetry returns an option to specify the max number of times // MaxRetry returns an option to specify the max number of times
@@ -61,13 +83,21 @@ func MaxRetry(n int) Option {
return retryOption(n) return retryOption(n)
} }
func (n retryOption) String() string { return fmt.Sprintf("MaxRetry(%d)", int(n)) }
func (n retryOption) Type() OptionType { return MaxRetryOpt }
func (n retryOption) Value() interface{} { return n }
// Queue returns an option to specify the queue to enqueue the task into. // Queue returns an option to specify the queue to enqueue the task into.
// //
// Queue name is case-insensitive and the lowercased version is used. // Queue name is case-insensitive and the lowercased version is used.
func Queue(name string) Option { func Queue(qname string) Option {
return queueOption(strings.ToLower(name)) return queueOption(strings.ToLower(qname))
} }
func (qname queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(qname)) }
func (qname queueOption) Type() OptionType { return QueueOpt }
func (qname queueOption) Value() interface{} { return qname }
// Timeout returns an option to specify how long a task may run. // Timeout returns an option to specify how long a task may run.
// If the timeout elapses before the Handler returns, then the task // If the timeout elapses before the Handler returns, then the task
// will be retried. // will be retried.
@@ -80,6 +110,10 @@ func Timeout(d time.Duration) Option {
return timeoutOption(d) return timeoutOption(d)
} }
func (d timeoutOption) String() string { return fmt.Sprintf("Timeout(%v)", time.Duration(d)) }
func (d timeoutOption) Type() OptionType { return TimeoutOpt }
func (d timeoutOption) Value() interface{} { return d }
// Deadline returns an option to specify the deadline for the given task. // Deadline returns an option to specify the deadline for the given task.
// If it reaches the deadline before the Handler returns, then the task // If it reaches the deadline before the Handler returns, then the task
// will be retried. // will be retried.
@@ -90,6 +124,10 @@ func Deadline(t time.Time) Option {
return deadlineOption(t) return deadlineOption(t)
} }
func (t deadlineOption) String() string { return fmt.Sprintf("Deadline(%v)", time.Time(t)) }
func (t deadlineOption) Type() OptionType { return DeadlineOpt }
func (t deadlineOption) Value() interface{} { return t }
// Unique returns an option to enqueue a task only if the given task is unique. // Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl. // Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired, another task with the same uniqueness may be enqueued. // Once the task gets processed successfully or once the TTL has expired, another task with the same uniqueness may be enqueued.
@@ -103,6 +141,32 @@ func Unique(ttl time.Duration) Option {
return uniqueOption(ttl) return uniqueOption(ttl)
} }
func (ttl uniqueOption) String() string { return fmt.Sprintf("Unique(%v)", time.Duration(ttl)) }
func (ttl uniqueOption) Type() OptionType { return UniqueOpt }
func (ttl uniqueOption) Value() interface{} { return ttl }
// ProcessAt returns an option to specify when to process the given task.
//
// If there's a conflicting ProcessIn option, the last option passed to Enqueue overrides the others.
func ProcessAt(t time.Time) Option {
return processAtOption(t)
}
func (t processAtOption) String() string { return fmt.Sprintf("ProcessAt(%v)", time.Time(t)) }
func (t processAtOption) Type() OptionType { return ProcessAtOpt }
func (t processAtOption) Value() interface{} { return t }
// ProcessIn returns an option to specify when to process the given task relative to the current time.
//
// If there's a conflicting ProcessAt option, the last option passed to Enqueue overrides the others.
func ProcessIn(d time.Duration) Option {
return processInOption(d)
}
func (d processInOption) String() string { return fmt.Sprintf("ProcessIn(%v)", time.Duration(d)) }
func (d processInOption) Type() OptionType { return ProcessInOpt }
func (d processInOption) Value() interface{} { return d }
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task. // ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
// //
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option. // ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
@@ -114,65 +178,53 @@ type option struct {
timeout time.Duration timeout time.Duration
deadline time.Time deadline time.Time
uniqueTTL time.Duration uniqueTTL time.Duration
processAt time.Time
} }
func composeOptions(opts ...Option) option { // composeOptions merges user provided options into the default options
// and returns the composed option.
// It also validates the user provided options and returns an error if any of
// the user provided options fail the validations.
func composeOptions(opts ...Option) (option, error) {
res := option{ res := option{
retry: defaultMaxRetry, retry: defaultMaxRetry,
queue: base.DefaultQueueName, queue: base.DefaultQueueName,
timeout: 0, // do not set to deafultTimeout here timeout: 0, // do not set to deafultTimeout here
deadline: time.Time{}, deadline: time.Time{},
processAt: time.Now(),
} }
for _, opt := range opts { for _, opt := range opts {
switch opt := opt.(type) { switch opt := opt.(type) {
case retryOption: case retryOption:
res.retry = int(opt) res.retry = int(opt)
case queueOption: case queueOption:
res.queue = string(opt) trimmed := strings.TrimSpace(string(opt))
if err := validateQueueName(trimmed); err != nil {
return option{}, err
}
res.queue = trimmed
case timeoutOption: case timeoutOption:
res.timeout = time.Duration(opt) res.timeout = time.Duration(opt)
case deadlineOption: case deadlineOption:
res.deadline = time.Time(opt) res.deadline = time.Time(opt)
case uniqueOption: case uniqueOption:
res.uniqueTTL = time.Duration(opt) res.uniqueTTL = time.Duration(opt)
case processAtOption:
res.processAt = time.Time(opt)
case processInOption:
res.processAt = time.Now().Add(time.Duration(opt))
default: default:
// ignore unexpected option // ignore unexpected option
} }
} }
return res return res, nil
} }
// uniqueKey computes the redis key used for the given task. func validateQueueName(qname string) error {
// It returns an empty string if ttl is zero. if len(qname) == 0 {
func uniqueKey(t *Task, ttl time.Duration, qname string) string { return fmt.Errorf("queue name must contain one or more characters")
if ttl == 0 {
return ""
} }
return fmt.Sprintf("%s:%s:%s", t.Type, serializePayload(t.Payload.data), qname) return nil
}
func serializePayload(payload map[string]interface{}) string {
if payload == nil {
return "nil"
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
} }
const ( const (
@@ -205,6 +257,12 @@ type Result struct {
// ID is a unique identifier for the task. // ID is a unique identifier for the task.
ID string ID string
// EnqueuedAt is the time the task was enqueued in UTC.
EnqueuedAt time.Time
// ProcessAt indicates when the task should be processed.
ProcessAt time.Time
// Retry is the maximum number of retry for the task. // Retry is the maximum number of retry for the task.
Retry int Retry int
@@ -229,51 +287,29 @@ type Result struct {
Deadline time.Time Deadline time.Time
} }
// EnqueueAt schedules task to be enqueued at the specified time. // Close closes the connection with redis.
// func (c *Client) Close() error {
// EnqueueAt returns nil if the task is scheduled successfully, otherwise returns a non-nil error. return c.rdb.Close()
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// By deafult, max retry is set to 25 and timeout is set to 30 minutes.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(t, task, opts...)
} }
// Enqueue enqueues task to be processed immediately. // Enqueue enqueues the given task to be processed asynchronously.
// //
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error. // Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
// //
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
// By deafult, max retry is set to 25 and timeout is set to 30 minutes. // By deafult, max retry is set to 25 and timeout is set to 30 minutes.
// If no ProcessAt or ProcessIn options are passed, the task will be processed immediately.
func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) { func (c *Client) Enqueue(task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now(), task, opts...)
}
// EnqueueIn schedules task to be enqueued after the specified delay.
//
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// By deafult, max retry is set to 25 and timeout is set to 30 minutes.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) (*Result, error) {
return c.enqueueAt(time.Now().Add(d), task, opts...)
}
// Close closes the connection with redis server.
func (c *Client) Close() error {
return c.rdb.Close()
}
func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, error) {
c.mu.Lock() c.mu.Lock()
defer c.mu.Unlock()
if defaults, ok := c.opts[task.Type]; ok { if defaults, ok := c.opts[task.Type]; ok {
opts = append(defaults, opts...) opts = append(defaults, opts...)
} }
opt := composeOptions(opts...) c.mu.Unlock()
opt, err := composeOptions(opts...)
if err != nil {
return nil, err
}
deadline := noDeadline deadline := noDeadline
if !opt.deadline.IsZero() { if !opt.deadline.IsZero() {
deadline = opt.deadline deadline = opt.deadline
@@ -286,6 +322,10 @@ func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, er
// If neither deadline nor timeout are set, use default timeout. // If neither deadline nor timeout are set, use default timeout.
timeout = defaultTimeout timeout = defaultTimeout
} }
var uniqueKey string
if opt.uniqueTTL > 0 {
uniqueKey = base.UniqueKey(opt.queue, task.Type, task.Payload.data)
}
msg := &base.TaskMessage{ msg := &base.TaskMessage{
ID: uuid.New(), ID: uuid.New(),
Type: task.Type, Type: task.Type,
@@ -294,14 +334,14 @@ func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, er
Retry: opt.retry, Retry: opt.retry,
Deadline: deadline.Unix(), Deadline: deadline.Unix(),
Timeout: int64(timeout.Seconds()), Timeout: int64(timeout.Seconds()),
UniqueKey: uniqueKey(task, opt.uniqueTTL, opt.queue), UniqueKey: uniqueKey,
} }
var err error
now := time.Now() now := time.Now()
if t.Before(now) || t.Equal(now) { if opt.processAt.Before(now) || opt.processAt.Equal(now) {
opt.processAt = now
err = c.enqueue(msg, opt.uniqueTTL) err = c.enqueue(msg, opt.uniqueTTL)
} else { } else {
err = c.schedule(msg, t, opt.uniqueTTL) err = c.schedule(msg, opt.processAt, opt.uniqueTTL)
} }
switch { switch {
case err == rdb.ErrDuplicateTask: case err == rdb.ErrDuplicateTask:
@@ -310,11 +350,13 @@ func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) (*Result, er
return nil, err return nil, err
} }
return &Result{ return &Result{
ID: msg.ID.String(), ID: msg.ID.String(),
Queue: msg.Queue, EnqueuedAt: time.Now().UTC(),
Retry: msg.Retry, ProcessAt: opt.processAt,
Timeout: timeout, Queue: msg.Queue,
Deadline: deadline, Retry: msg.Retry,
Timeout: timeout,
Deadline: deadline,
}, nil }, nil
} }


@@ -15,12 +15,10 @@ import (
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
func TestClientEnqueueAt(t *testing.T) { func TestClientEnqueueWithProcessAtOption(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
@@ -32,11 +30,11 @@ func TestClientEnqueueAt(t *testing.T) {
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
processAt time.Time processAt time.Time // value for ProcessAt option
opts []Option opts []Option // other options
wantRes *Result wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantPending map[string][]*base.TaskMessage
wantScheduled []base.Z wantScheduled map[string][]base.Z
}{ }{
{ {
desc: "Process task immediately", desc: "Process task immediately",
@@ -44,12 +42,14 @@ func TestClientEnqueueAt(t *testing.T) {
processAt: now, processAt: now,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantRes: &Result{
Queue: "default", EnqueuedAt: now.UTC(),
Retry: defaultMaxRetry, ProcessAt: now,
Timeout: defaultTimeout, Queue: "default",
Deadline: noDeadline, Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -61,7 +61,9 @@ func TestClientEnqueueAt(t *testing.T) {
}, },
}, },
}, },
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil wantScheduled: map[string][]base.Z{
"default": {},
},
}, },
{ {
desc: "Schedule task to be processed in the future", desc: "Schedule task to be processed in the future",
@@ -69,23 +71,29 @@ func TestClientEnqueueAt(t *testing.T) {
processAt: oneHourLater, processAt: oneHourLater,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantRes: &Result{
Queue: "default", EnqueuedAt: now.UTC(),
Retry: defaultMaxRetry, ProcessAt: oneHourLater,
Timeout: defaultTimeout, Queue: "default",
Deadline: noDeadline, Retry: defaultMaxRetry,
Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantPending: map[string][]*base.TaskMessage{
wantScheduled: []base.Z{ "default": {},
{ },
Message: &base.TaskMessage{ wantScheduled: map[string][]base.Z{
Type: task.Type, "default": {
Payload: task.Payload.data, {
Retry: defaultMaxRetry, Message: &base.TaskMessage{
Queue: "default", Type: task.Type,
Timeout: int64(defaultTimeout.Seconds()), Payload: task.Payload.data,
Deadline: noDeadline.Unix(), Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
Score: oneHourLater.Unix(),
}, },
Score: oneHourLater.Unix(),
}, },
}, },
}, },
@@ -94,45 +102,50 @@ func TestClientEnqueueAt(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
gotRes, err := client.EnqueueAt(tc.processAt, tc.task, tc.opts...) opts := append(tc.opts, ProcessAt(tc.processAt))
gotRes, err := client.Enqueue(tc.task, opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" { cmpOptions := []cmp.Option{
t.Errorf("%s;\nEnqueueAt(processAt, task) returned %v, want %v; (-want,+got)\n%s", cmpopts.IgnoreFields(Result{}, "ID"),
tc.desc, gotRes, tc.wantRes, diff) cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, ProcessAt(%v)) returned %v, want %v; (-want,+got)\n%s",
tc.desc, tc.processAt, gotRes, tc.wantRes, diff)
} }
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
} }
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledEntries(t, r) gotScheduled := h.GetScheduledEntries(t, r, qname)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotScheduled, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledKey(qname), diff)
}
} }
} }
} }
func TestClientEnqueue(t *testing.T) { func TestClientEnqueue(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
now := time.Now()
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
opts []Option opts []Option
wantRes *Result wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantPending map[string][]*base.TaskMessage
}{ }{
{ {
desc: "Process task immediately with a custom retry count", desc: "Process task immediately with a custom retry count",
@@ -141,12 +154,13 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(3), MaxRetry(3),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: 3, Queue: "default",
Timeout: defaultTimeout, Retry: 3,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -166,12 +180,13 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(-2), MaxRetry(-2),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: 0, Queue: "default",
Timeout: defaultTimeout, Retry: 0,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -192,12 +207,13 @@ func TestClientEnqueue(t *testing.T) {
MaxRetry(10), MaxRetry(10),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: 10, Queue: "default",
Timeout: defaultTimeout, Retry: 10,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -217,12 +233,13 @@ func TestClientEnqueue(t *testing.T) {
Queue("custom"), Queue("custom"),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "custom", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "custom",
Timeout: defaultTimeout, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"custom": { "custom": {
{ {
Type: task.Type, Type: task.Type,
@@ -242,12 +259,13 @@ func TestClientEnqueue(t *testing.T) {
Queue("HIGH"), Queue("HIGH"),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "high", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "high",
Timeout: defaultTimeout, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"high": { "high": {
{ {
Type: task.Type, Type: task.Type,
@@ -267,12 +285,13 @@ func TestClientEnqueue(t *testing.T) {
Timeout(20 * time.Second), Timeout(20 * time.Second),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "default",
Timeout: 20 * time.Second, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: 20 * time.Second,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -292,12 +311,13 @@ func TestClientEnqueue(t *testing.T) {
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)), Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "default",
Timeout: noTimeout, Retry: defaultMaxRetry,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC), Timeout: noTimeout,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -318,12 +338,13 @@ func TestClientEnqueue(t *testing.T) {
Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)), Deadline(time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC)),
}, },
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "default",
Timeout: 20 * time.Second, Retry: defaultMaxRetry,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC), Timeout: 20 * time.Second,
Deadline: time.Date(2020, time.June, 24, 0, 0, 0, 0, time.UTC),
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -346,13 +367,17 @@ func TestClientEnqueue(t *testing.T) {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" { cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s", t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff) tc.desc, gotRes, tc.wantRes, diff)
} }
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
got := h.GetEnqueuedMessages(t, r, qname) got := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
@@ -360,47 +385,51 @@ func TestClientEnqueue(t *testing.T) {
} }
} }
func TestClientEnqueueIn(t *testing.T) { func TestClientEnqueueWithProcessInOption(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer client.Close()
DB: redisDB,
})
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
now := time.Now()
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
delay time.Duration delay time.Duration // value for ProcessIn option
opts []Option opts []Option // other options
wantRes *Result wantRes *Result
wantEnqueued map[string][]*base.TaskMessage wantPending map[string][]*base.TaskMessage
wantScheduled []base.Z wantScheduled map[string][]base.Z
}{ }{
{ {
desc: "schedule a task to be enqueued in one hour", desc: "schedule a task to be processed in one hour",
task: task, task: task,
delay: time.Hour, delay: 1 * time.Hour,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now.Add(1 * time.Hour),
Retry: defaultMaxRetry, Queue: "default",
Timeout: defaultTimeout, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: nil, // db is flushed in setup so list does not exist hence nil wantPending: map[string][]*base.TaskMessage{
wantScheduled: []base.Z{ "default": {},
{ },
Message: &base.TaskMessage{ wantScheduled: map[string][]base.Z{
Type: task.Type, "default": {
Payload: task.Payload.data, {
Retry: defaultMaxRetry, Message: &base.TaskMessage{
Queue: "default", Type: task.Type,
Timeout: int64(defaultTimeout.Seconds()), Payload: task.Payload.data,
Deadline: noDeadline.Unix(), Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
},
Score: time.Now().Add(time.Hour).Unix(),
}, },
Score: time.Now().Add(time.Hour).Unix(),
}, },
}, },
}, },
@@ -410,12 +439,13 @@ func TestClientEnqueueIn(t *testing.T) {
delay: 0, delay: 0,
opts: []Option{}, opts: []Option{},
wantRes: &Result{ wantRes: &Result{
Queue: "default", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "default",
Timeout: defaultTimeout, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
wantEnqueued: map[string][]*base.TaskMessage{ wantPending: map[string][]*base.TaskMessage{
"default": { "default": {
{ {
Type: task.Type, Type: task.Type,
@@ -427,33 +457,72 @@ func TestClientEnqueueIn(t *testing.T) {
}, },
}, },
}, },
wantScheduled: nil, // db is flushed in setup so zset does not exist hence nil wantScheduled: map[string][]base.Z{
"default": {},
},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
gotRes, err := client.EnqueueIn(tc.delay, tc.task, tc.opts...) opts := append(tc.opts, ProcessIn(tc.delay))
gotRes, err := client.Enqueue(tc.task, opts...)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
continue continue
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" { cmpOptions := []cmp.Option{
t.Errorf("%s;\nEnqueueIn(delay, task) returned %v, want %v; (-want,+got)\n%s", cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
tc.desc, gotRes, tc.wantRes, diff) cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, ProcessIn(%v)) returned %v, want %v; (-want,+got)\n%s",
tc.desc, tc.delay, gotRes, tc.wantRes, diff)
} }
for qname, want := range tc.wantEnqueued { for qname, want := range tc.wantPending {
gotEnqueued := h.GetEnqueuedMessages(t, r, qname) gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotEnqueued, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(want, gotPending, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff) t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.QueueKey(qname), diff)
} }
} }
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledEntries(t, r, qname)
if diff := cmp.Diff(want, gotScheduled, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledKey(qname), diff)
}
}
}
}
gotScheduled := h.GetScheduledEntries(t, r) func TestClientEnqueueError(t *testing.T) {
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.IgnoreIDOpt); diff != "" { r := setup(t)
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledQueue, diff) client := NewClient(getRedisConnOpt(t))
defer client.Close()
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
tests := []struct {
desc string
task *Task
opts []Option
}{
{
desc: "With empty queue name",
task: task,
opts: []Option{
Queue(""),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
_, err := client.Enqueue(tc.task, tc.opts...)
if err == nil {
t.Errorf("%s; client.Enqueue(task, opts...) did not return non-nil error", tc.desc)
} }
} }
} }
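For illustration, a minimal caller-side sketch of the behavior this test pins down: Enqueue is expected to return a non-nil error when the Queue option is given an empty name, so the error should be checked rather than ignored. The Redis address and payload below are illustrative assumptions.

package main

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}) // address is illustrative
    defer client.Close()

    t := asynq.NewTask("send_email", map[string]interface{}{"user_id": 42})

    // An empty queue name is rejected at enqueue time, so handle the error.
    if _, err := client.Enqueue(t, asynq.Queue("")); err != nil {
        log.Printf("enqueue failed: %v", err)
    }
}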
@@ -461,6 +530,8 @@ func TestClientEnqueueIn(t *testing.T) {
func TestClientDefaultOptions(t *testing.T) { func TestClientDefaultOptions(t *testing.T) {
r := setup(t) r := setup(t)
now := time.Now()
tests := []struct { tests := []struct {
desc string desc string
defaultOpts []Option // options set at the client level. defaultOpts []Option // options set at the client level.
@@ -476,10 +547,11 @@ func TestClientDefaultOptions(t *testing.T) {
opts: []Option{}, opts: []Option{},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantRes: &Result{
Queue: "feed", ProcessAt: now,
Retry: defaultMaxRetry, Queue: "feed",
Timeout: defaultTimeout, Retry: defaultMaxRetry,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
queue: "feed", queue: "feed",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -497,10 +569,11 @@ func TestClientDefaultOptions(t *testing.T) {
opts: []Option{}, opts: []Option{},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantRes: &Result{
Queue: "feed", ProcessAt: now,
Retry: 5, Queue: "feed",
Timeout: defaultTimeout, Retry: 5,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
queue: "feed", queue: "feed",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -518,10 +591,11 @@ func TestClientDefaultOptions(t *testing.T) {
opts: []Option{Queue("critical")}, opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil), task: NewTask("feed:import", nil),
wantRes: &Result{ wantRes: &Result{
Queue: "critical", ProcessAt: now,
Retry: 5, Queue: "critical",
Timeout: defaultTimeout, Retry: 5,
Deadline: noDeadline, Timeout: defaultTimeout,
Deadline: noDeadline,
}, },
queue: "critical", queue: "critical",
want: &base.TaskMessage{ want: &base.TaskMessage{
@@ -537,102 +611,39 @@ func TestClientDefaultOptions(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB}) c := NewClient(getRedisConnOpt(t))
defer c.Close()
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...) c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
gotRes, err := c.Enqueue(tc.task, tc.opts...) gotRes, err := c.Enqueue(tc.task, tc.opts...)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
if diff := cmp.Diff(tc.wantRes, gotRes, cmpopts.IgnoreFields(Result{}, "ID")); diff != "" { cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(Result{}, "ID", "EnqueuedAt"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantRes, gotRes, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s", t.Errorf("%s;\nEnqueue(task, opts...) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotRes, tc.wantRes, diff) tc.desc, gotRes, tc.wantRes, diff)
} }
enqueued := h.GetEnqueuedMessages(t, r, tc.queue) pending := h.GetPendingMessages(t, r, tc.queue)
if len(enqueued) != 1 { if len(pending) != 1 {
t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.", t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
tc.desc, tc.queue, len(enqueued)) tc.desc, tc.queue, len(pending))
continue continue
} }
got := enqueued[0] got := pending[0]
if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in enqueued task message; (-want,+got)\n%s", t.Errorf("%s;\nmismatch found in pending task message; (-want,+got)\n%s",
tc.desc, diff) tc.desc, diff)
} }
} }
} }
func TestUniqueKey(t *testing.T) { func TestClientEnqueueUnique(t *testing.T) {
tests := []struct {
desc string
task *Task
ttl time.Duration
qname string
want string
}{
{
"with zero TTL",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
0,
"default",
"",
},
{
"with primitive types",
NewTask("email:send", map[string]interface{}{"a": 123, "b": "hello", "c": true}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with unsorted keys",
NewTask("email:send", map[string]interface{}{"b": "hello", "c": true, "a": 123}),
10 * time.Minute,
"default",
"email:send:a=123,b=hello,c=true:default",
},
{
"with composite types",
NewTask("email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}}),
10 * time.Minute,
"default",
"email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]:default",
},
{
"with complex types",
NewTask("email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour}),
10 * time.Minute,
"default",
"email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC:default",
},
{
"with nil payload",
NewTask("reindex", nil),
10 * time.Minute,
"default",
"reindex:nil:default",
},
}
for _, tc := range tests {
got := uniqueKey(tc.task, tc.ttl, tc.qname)
if got != tc.want {
t.Errorf("%s: uniqueKey(%v, %v, %q) = %q, want %q", tc.desc, tc.task, tc.ttl, tc.qname, got, tc.want)
}
}
}
func TestEnqueueUnique(t *testing.T) {
r := setup(t) r := setup(t)
c := NewClient(RedisClientOpt{ c := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer c.Close()
DB: redisDB,
})
tests := []struct { tests := []struct {
task *Task task *Task
@@ -653,7 +664,7 @@ func TestEnqueueUnique(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(tc.ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl) t.Errorf("TTL = %v, want %v", gotTTL, tc.ttl)
continue continue
@@ -672,12 +683,10 @@ func TestEnqueueUnique(t *testing.T) {
} }
} }
func TestEnqueueInUnique(t *testing.T) { func TestClientEnqueueUniqueWithProcessInOption(t *testing.T) {
r := setup(t) r := setup(t)
c := NewClient(RedisClientOpt{ c := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer c.Close()
DB: redisDB,
})
tests := []struct { tests := []struct {
task *Task task *Task
@@ -695,12 +704,12 @@ func TestEnqueueInUnique(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed. // Enqueue the task first. It should succeed.
_, err := c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl)) _, err := c.Enqueue(tc.task, ProcessIn(tc.d), Unique(tc.ttl))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second wantTTL := time.Duration(tc.ttl.Seconds()+tc.d.Seconds()) * time.Second
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL) t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
@@ -708,7 +717,7 @@ func TestEnqueueInUnique(t *testing.T) {
} }
// Enqueue the task again. It should fail. // Enqueue the task again. It should fail.
_, err = c.EnqueueIn(tc.d, tc.task, Unique(tc.ttl)) _, err = c.Enqueue(tc.task, ProcessIn(tc.d), Unique(tc.ttl))
if err == nil { if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task) t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue continue
@@ -720,12 +729,10 @@ func TestEnqueueInUnique(t *testing.T) {
} }
} }
func TestEnqueueAtUnique(t *testing.T) { func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
r := setup(t) r := setup(t)
c := NewClient(RedisClientOpt{ c := NewClient(getRedisConnOpt(t))
Addr: redisAddr, defer c.Close()
DB: redisDB,
})
tests := []struct { tests := []struct {
task *Task task *Task
@@ -743,12 +750,12 @@ func TestEnqueueAtUnique(t *testing.T) {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
// Enqueue the task first. It should succeed. // Enqueue the task first. It should succeed.
_, err := c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl)) _, err := c.Enqueue(tc.task, ProcessAt(tc.at), Unique(tc.ttl))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
gotTTL := r.TTL(uniqueKey(tc.task, tc.ttl, base.DefaultQueueName)).Val() gotTTL := r.TTL(base.UniqueKey(base.DefaultQueueName, tc.task.Type, tc.task.Payload.data)).Val()
wantTTL := tc.at.Add(tc.ttl).Sub(time.Now()) wantTTL := tc.at.Add(tc.ttl).Sub(time.Now())
if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL = %v, want %v", gotTTL, wantTTL) t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
@@ -756,7 +763,7 @@ func TestEnqueueAtUnique(t *testing.T) {
} }
// Enqueue the task again. It should fail. // Enqueue the task again. It should fail.
_, err = c.EnqueueAt(tc.at, tc.task, Unique(tc.ttl)) _, err = c.Enqueue(tc.task, ProcessAt(tc.at), Unique(tc.ttl))
if err == nil { if err == nil {
t.Errorf("Enqueueing %+v did not return an error", tc.task) t.Errorf("Enqueueing %+v did not return an error", tc.task)
continue continue
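For illustration, a minimal sketch of the uniqueness behavior exercised above: Unique(ttl) combined with ProcessIn holds a uniqueness lock for roughly delay plus TTL, so enqueueing the same task again within that window returns an error. The task type and Redis address are illustrative assumptions.

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}) // address is illustrative
    defer client.Close()

    t := asynq.NewTask("reindex", nil)

    // First enqueue succeeds; the uniqueness lock is held for roughly delay + TTL.
    if _, err := client.Enqueue(t, asynq.ProcessIn(time.Hour), asynq.Unique(10*time.Minute)); err != nil {
        log.Fatal(err)
    }

    // A second enqueue of the same task while the lock is held returns an error.
    if _, err := client.Enqueue(t, asynq.ProcessIn(time.Hour), asynq.Unique(10*time.Minute)); err != nil {
        fmt.Println("duplicate rejected:", err)
    }
}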


@@ -16,6 +16,7 @@ type taskMetadata struct {
id string id string
maxRetry int maxRetry int
retryCount int retryCount int
qname string
} }
// ctxKey type is unexported to prevent collisions with context keys defined in // ctxKey type is unexported to prevent collisions with context keys defined in
@@ -32,6 +33,7 @@ func createContext(msg *base.TaskMessage, deadline time.Time) (context.Context,
id: msg.ID.String(), id: msg.ID.String(),
maxRetry: msg.Retry, maxRetry: msg.Retry,
retryCount: msg.Retried, retryCount: msg.Retried,
qname: msg.Queue,
} }
ctx := context.WithValue(context.Background(), metadataCtxKey, metadata) ctx := context.WithValue(context.Background(), metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline) return context.WithDeadline(ctx, deadline)
@@ -72,3 +74,14 @@ func GetMaxRetry(ctx context.Context) (n int, ok bool) {
} }
return metadata.maxRetry, true return metadata.maxRetry, true
} }
// GetQueueName extracts queue name from a context, if any.
//
// Return value qname indicates which queue the task was pulled from.
func GetQueueName(ctx context.Context) (qname string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.qname, true
}
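For illustration, a minimal handler sketch using the new GetQueueName helper. The ServeMux registration, the Config fields, and the Redis address are assumptions for the sake of a runnable example, not part of this change.

package main

import (
    "context"
    "log"

    "github.com/hibiken/asynq"
)

func handleSendEmail(ctx context.Context, t *asynq.Task) error {
    // The queue the task was pulled from is available via the context.
    if qname, ok := asynq.GetQueueName(ctx); ok && qname == "critical" {
        log.Printf("processing %q from the critical queue", t.Type)
    }
    // ... do the actual work ...
    return nil
}

func main() {
    srv := asynq.NewServer(
        asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, // address is illustrative
        asynq.Config{Concurrency: 10},
    )
    mux := asynq.NewServeMux()
    mux.HandleFunc("send_email", handleSendEmail)
    if err := srv.Run(mux); err != nil {
        log.Fatal(err)
    }
}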


@@ -92,8 +92,9 @@ func TestGetTaskMetadataFromContext(t *testing.T) {
desc string desc string
msg *base.TaskMessage msg *base.TaskMessage
}{ }{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800}}, {"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "default"}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 10, Retried: 5, Timeout: 1800}}, {"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 10, Retried: 5, Timeout: 1800, Queue: "default"}},
{"with custom queue name", &base.TaskMessage{Type: "something", ID: uuid.New(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "custom"}},
} }
for _, tc := range tests { for _, tc := range tests {
@@ -123,6 +124,14 @@ func TestGetTaskMetadataFromContext(t *testing.T) {
if ok && maxRetry != tc.msg.Retry { if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry) t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
} }
qname, ok := GetQueueName(ctx)
if !ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == false", tc.desc)
}
if ok && qname != tc.msg.Queue {
t.Errorf("%s: GetQueueName(ctx) returned qname == %q, want %q", tc.desc, qname, tc.msg.Queue)
}
} }
} }
@@ -144,5 +153,8 @@ func TestGetTaskMetadataFromContextError(t *testing.T) {
if _, ok := GetMaxRetry(tc.ctx); ok { if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc) t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
} }
if _, ok := GetQueueName(tc.ctx); ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == true", tc.desc)
}
} }
} }

doc.go

@@ -3,23 +3,23 @@
// that can be found in the LICENSE file. // that can be found in the LICENSE file.
/* /*
Package asynq provides a framework for asynchronous task processing. Package asynq provides a framework for Redis based distrubted task queue.
Asynq uses Redis as a message broker. To connect to redis server, Asynq uses Redis as a message broker. To connect to redis,
specify the options using one of RedisConnOpt types. specify the connection using one of RedisConnOpt types.
redis = &asynq.RedisClientOpt{ redisConnOpt = asynq.RedisClientOpt{
Addr: "127.0.0.1:6379", Addr: "127.0.0.1:6379",
Password: "xxxxx", Password: "xxxxx",
DB: 3, DB: 3,
} }
The Client is used to enqueue a task to be processed at the specified time. The Client is used to enqueue a task.
Task is created with two parameters: its type and payload.
client := asynq.NewClient(redis) client := asynq.NewClient(redisConnOpt)
// Task is created with two parameters: its type and payload.
t := asynq.NewTask( t := asynq.NewTask(
"send_email", "send_email",
map[string]interface{}{"user_id": 42}) map[string]interface{}{"user_id": 42})
@@ -28,15 +28,17 @@ Task is created with two parameters: its type and payload.
res, err := client.Enqueue(t) res, err := client.Enqueue(t)
// Schedule the task to be processed after one minute. // Schedule the task to be processed after one minute.
res, err = client.EnqueueIn(time.Minute, t) res, err = client.Enqueue(t, asynq.ProcessIn(1*time.Minute))
The Server is used to run the background task processing with a given The Server is used to run the task processing workers with a given
handler. handler.
srv := asynq.NewServer(redis, asynq.Config{ srv := asynq.NewServer(redisConnOpt, asynq.Config{
Concurrency: 10, Concurrency: 10,
}) })
srv.Run(handler) if err := srv.Run(handler); err != nil {
log.Fatal(err)
}
Handler is an interface type with a method which Handler is an interface type with a method which
takes a task and returns an error. Handler should return nil if takes a task and returns an error. Handler should return nil if

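For illustration, a minimal sketch of the option-based scheduling described in the doc comment above, where ProcessIn and ProcessAt replace the old EnqueueIn and EnqueueAt calls. The payload, queue name, retry limit, and Redis address are illustrative.

package main

import (
    "log"
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
    defer client.Close()

    t := asynq.NewTask("send_email", map[string]interface{}{"user_id": 42})

    // Process as soon as a worker is available.
    if _, err := client.Enqueue(t); err != nil {
        log.Fatal(err)
    }

    // Process roughly 24 hours from now.
    if _, err := client.Enqueue(t, asynq.ProcessIn(24*time.Hour)); err != nil {
        log.Fatal(err)
    }

    // Process at a specific time, on a custom queue, with a lower retry limit.
    res, err := client.Enqueue(t,
        asynq.ProcessAt(time.Date(2020, time.December, 25, 9, 0, 0, 0, time.UTC)),
        asynq.Queue("critical"),
        asynq.MaxRetry(3),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("enqueued task %v; scheduled to run at %v", res.ID, res.ProcessAt)
}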
docs/assets/cluster.png (new binary image, 60 KiB; not shown)
(existing binary image updated: 983 KiB before, 329 KiB after; not shown)


@@ -9,6 +9,7 @@ import (
"log" "log"
"os" "os"
"os/signal" "os/signal"
"time"
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
@@ -78,6 +79,25 @@ func ExampleServer_Quiet() {
srv.Stop() srv.Stop()
} }
func ExampleScheduler() {
scheduler := asynq.NewScheduler(
asynq.RedisClientOpt{Addr: ":6379"},
&asynq.SchedulerOpts{Location: time.Local},
)
if _, err := scheduler.Register("* * * * *", asynq.NewTask("task1", nil)); err != nil {
log.Fatal(err)
}
if _, err := scheduler.Register("@every 30s", asynq.NewTask("task2", nil)); err != nil {
log.Fatal(err)
}
// Run blocks and waits for os signal to terminate the program.
if err := scheduler.Run(); err != nil {
log.Fatal(err)
}
}
func ExampleParseRedisURI() { func ExampleParseRedisURI() {
rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10") rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
if err != nil { if err != nil {

forwarder.go (new file)

@@ -0,0 +1,75 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// A forwarder is responsible for moving scheduled and retry tasks to pending state
// so that the tasks get processed by the workers.
type forwarder struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "forwarder" goroutine.
done chan struct{}
// list of queue names to check and enqueue.
queues []string
// poll interval on average
avgInterval time.Duration
}
type forwarderParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
}
func newForwarder(params forwarderParams) *forwarder {
return &forwarder{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
avgInterval: params.interval,
}
}
func (f *forwarder) terminate() {
f.logger.Debug("Forwarder shutting down...")
// Signal the forwarder goroutine to stop polling.
f.done <- struct{}{}
}
// start starts the "forwarder" goroutine.
func (f *forwarder) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-f.done:
f.logger.Debug("Forwarder done")
return
case <-time.After(f.avgInterval):
f.exec()
}
}
}()
}
func (f *forwarder) exec() {
if err := f.broker.CheckAndEnqueue(f.queues...); err != nil {
f.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
}
}

forwarder_test.go (new file)

@@ -0,0 +1,137 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
)
func TestForwarder(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newForwarder(forwarderParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "critical"},
interval: pollInterval,
})
t1 := h.NewTaskMessageWithQueue("gen_thumbnail", nil, "default")
t2 := h.NewTaskMessageWithQueue("send_email", nil, "critical")
t3 := h.NewTaskMessageWithQueue("reindex", nil, "default")
t4 := h.NewTaskMessageWithQueue("sync", nil, "critical")
now := time.Now()
tests := []struct {
initScheduled map[string][]base.Z // scheduled queue initial state
initRetry map[string][]base.Z // retry queue initial state
initPending map[string][]*base.TaskMessage // default queue initial state
wait time.Duration // wait duration before checking for final state
wantScheduled map[string][]*base.TaskMessage // schedule queue final state
wantRetry map[string][]*base.TaskMessage // retry queue final state
wantPending map[string][]*base.TaskMessage // default queue final state
}{
{
initScheduled: map[string][]base.Z{
"default": {{Message: t1, Score: now.Add(time.Hour).Unix()}},
"critical": {{Message: t2, Score: now.Add(-2 * time.Second).Unix()}},
},
initRetry: map[string][]base.Z{
"default": {{Message: t3, Score: time.Now().Add(-500 * time.Millisecond).Unix()}},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {t1},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t3},
"critical": {t2, t4},
},
},
{
initScheduled: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Unix()},
{Message: t3, Score: now.Add(-500 * time.Millisecond).Unix()},
},
"critical": {
{Message: t2, Score: now.Add(-2 * time.Second).Unix()},
},
},
initRetry: map[string][]base.Z{
"default": {},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t1, t3},
"critical": {t2, t4},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedAllScheduledQueues(t, r, tc.initScheduled) // initialize scheduled queue
h.SeedAllRetryQueues(t, r, tc.initRetry) // initialize retry queue
h.SeedAllPendingQueues(t, r, tc.initPending) // initialize default queue
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.terminate()
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledMessages(t, r, qname)
if diff := cmp.Diff(want, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.ScheduledKey(qname), diff)
}
}
for qname, want := range tc.wantRetry {
gotRetry := h.GetRetryMessages(t, r, qname)
if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.RetryKey(qname), diff)
}
}
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.QueueKey(qname), diff)
}
}
}
}

go.mod

@@ -3,9 +3,10 @@ module github.com/hibiken/asynq
go 1.13 go 1.13
require ( require (
github.com/go-redis/redis/v7 v7.2.0 github.com/go-redis/redis/v7 v7.4.0
github.com/google/go-cmp v0.4.0 github.com/google/go-cmp v0.4.0
github.com/google/uuid v1.1.1 github.com/google/uuid v1.1.1
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cast v1.3.1 github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0 go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e

go.sum

@@ -4,6 +4,8 @@ github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs= github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg= github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs= github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -25,6 +27,8 @@ github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w= github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=


@@ -15,6 +15,7 @@ import (
func TestHealthChecker(t *testing.T) { func TestHealthChecker(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
var ( var (
@@ -62,6 +63,7 @@ func TestHealthCheckerWhenRedisDown(t *testing.T) {
} }
}() }()
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
var ( var (
// mu guards called and e variables. // mu guards called and e variables.


@@ -19,6 +19,7 @@ import (
func TestHeartbeater(t *testing.T) { func TestHeartbeater(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
tests := []struct { tests := []struct {
@@ -28,7 +29,7 @@ func TestHeartbeater(t *testing.T) {
queues map[string]int queues map[string]int
concurrency int concurrency int
}{ }{
{time.Second, "localhost", 45678, map[string]int{"default": 1}, 10}, {2 * time.Second, "localhost", 45678, map[string]int{"default": 1}, 10},
} }
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond) timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
@@ -68,7 +69,7 @@ func TestHeartbeater(t *testing.T) {
} }
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval)
ss, err := rdbClient.ListServers() ss, err := rdbClient.ListServers()
if err != nil { if err != nil {
@@ -128,6 +129,7 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
} }
}() }()
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
hb := newHeartbeater(heartbeaterParams{ hb := newHeartbeater(heartbeaterParams{
logger: testLogger, logger: testLogger,


@@ -27,72 +27,95 @@ func NewInspector(r RedisConnOpt) *Inspector {
} }
} }
// Stats represents a state of queues at a certain time. // Close closes the connection with redis.
type Stats struct { func (i *Inspector) Close() error {
Enqueued int return i.rdb.Close()
InProgress int
Scheduled int
Retry int
Dead int
Processed int
Failed int
Queues []*QueueInfo
Timestamp time.Time
} }
// QueueInfo holds information about a queue. // Queues returns a list of all queue names.
type QueueInfo struct { func (i *Inspector) Queues() ([]string, error) {
// Name of the queue (e.g. "default", "critical"). return i.rdb.AllQueues()
// Note: It doesn't include the prefix "asynq:queues:". }
Name string
// Paused indicates whether the queue is paused. // QueueStats represents a state of queues at a certain time.
// If true, tasks in the queue should not be processed. type QueueStats struct {
Paused bool // Name of the queue.
Queue string
// Size is the number of tasks in the queue. // Size is the total number of tasks in the queue.
// The value is the sum of Pending, Active, Scheduled, Retry, and Dead.
Size int Size int
// Number of pending tasks.
Pending int
// Number of active tasks.
Active int
// Number of scheduled tasks.
Scheduled int
// Number of retry tasks.
Retry int
// Number of dead tasks.
Dead int
// Total number of tasks being processed during the given date.
// The number includes both succeeded and failed tasks.
Processed int
// Total number of tasks failed to be processed during the given date.
Failed int
// Paused indicates whether the queue is paused.
// If true, tasks in the queue will not be processed.
Paused bool
// Time when this stats was taken.
Timestamp time.Time
} }
// CurrentStats returns a current stats of the queues. // CurrentStats returns a current stats of the given queue.
func (i *Inspector) CurrentStats() (*Stats, error) { func (i *Inspector) CurrentStats(qname string) (*QueueStats, error) {
stats, err := i.rdb.CurrentStats() if err := validateQueueName(qname); err != nil {
return nil, err
}
stats, err := i.rdb.CurrentStats(qname)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var qs []*QueueInfo return &QueueStats{
for _, q := range stats.Queues { Queue: stats.Queue,
qs = append(qs, (*QueueInfo)(q)) Size: stats.Size,
} Pending: stats.Pending,
return &Stats{ Active: stats.Active,
Enqueued: stats.Enqueued, Scheduled: stats.Scheduled,
InProgress: stats.InProgress, Retry: stats.Retry,
Scheduled: stats.Scheduled, Dead: stats.Dead,
Retry: stats.Retry, Processed: stats.Processed,
Dead: stats.Dead, Failed: stats.Failed,
Processed: stats.Processed, Paused: stats.Paused,
Failed: stats.Failed, Timestamp: stats.Timestamp,
Queues: qs,
Timestamp: stats.Timestamp,
}, nil }, nil
} }
// DailyStats holds aggregate data for a given day. // DailyStats holds aggregate data for a given day for a given queue.
type DailyStats struct { type DailyStats struct {
// Name of the queue.
Queue string
// Total number of tasks being processed during the given date.
// The number includes both succeeded and failed tasks.
Processed int Processed int
Failed int // Total number of tasks failed to be processed during the given date.
Date time.Time Failed int
// Date this stats was taken.
Date time.Time
} }
// History returns a list of stats from the last n days. // History returns a list of stats from the last n days.
func (i *Inspector) History(n int) ([]*DailyStats, error) { func (i *Inspector) History(qname string, n int) ([]*DailyStats, error) {
stats, err := i.rdb.HistoricalStats(n) if err := validateQueueName(qname); err != nil {
return nil, err
}
stats, err := i.rdb.HistoricalStats(qname, n)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var res []*DailyStats var res []*DailyStats
for _, s := range stats { for _, s := range stats {
res = append(res, &DailyStats{ res = append(res, &DailyStats{
Queue: s.Queue,
Processed: s.Processed, Processed: s.Processed,
Failed: s.Failed, Failed: s.Failed,
Date: s.Time, Date: s.Time,
@@ -101,17 +124,18 @@ func (i *Inspector) History(n int) ([]*DailyStats, error) {
return res, nil return res, nil
} }
// EnqueuedTask is a task in a queue and is ready to be processed. // PendingTask is a task in a queue and is ready to be processed.
type EnqueuedTask struct { type PendingTask struct {
*Task *Task
ID string ID string
Queue string Queue string
} }
// InProgressTask is a task that's currently being processed. // ActiveTask is a task that's currently being processed.
type InProgressTask struct { type ActiveTask struct {
*Task *Task
ID string ID string
Queue string
} }
// ScheduledTask is a task scheduled to be processed in the future. // ScheduledTask is a task scheduled to be processed in the future.
@@ -119,7 +143,7 @@ type ScheduledTask struct {
*Task *Task
ID string ID string
Queue string Queue string
NextEnqueueAt time.Time NextProcessAt time.Time
score int64 score int64
} }
@@ -129,7 +153,7 @@ type RetryTask struct {
*Task *Task
ID string ID string
Queue string Queue string
NextEnqueueAt time.Time NextProcessAt time.Time
MaxRetry int MaxRetry int
Retried int Retried int
ErrorMsg string ErrorMsg string
@@ -152,24 +176,24 @@ type DeadTask struct {
score int64 score int64
} }
// Key returns a key used to delete, enqueue, and kill the task. // Key returns a key used to delete, run, and kill the task.
func (t *ScheduledTask) Key() string { func (t *ScheduledTask) Key() string {
return fmt.Sprintf("s:%v:%v", t.ID, t.score) return fmt.Sprintf("s:%v:%v", t.ID, t.score)
} }
// Key returns a key used to delete, enqueue, and kill the task. // Key returns a key used to delete, run, and kill the task.
func (t *RetryTask) Key() string { func (t *RetryTask) Key() string {
return fmt.Sprintf("r:%v:%v", t.ID, t.score) return fmt.Sprintf("r:%v:%v", t.ID, t.score)
} }
// Key returns a key used to delete, enqueue, and kill the task. // Key returns a key used to delete, run, and kill the task.
func (t *DeadTask) Key() string { func (t *DeadTask) Key() string {
return fmt.Sprintf("d:%v:%v", t.ID, t.score) return fmt.Sprintf("d:%v:%v", t.ID, t.score)
} }
// parseTaskKey parses a key string and returns each part of key with proper // parseTaskKey parses a key string and returns each part of key with proper
// type if valid, otherwise it reports an error. // type if valid, otherwise it reports an error.
func parseTaskKey(key string) (id uuid.UUID, score int64, qtype string, err error) { func parseTaskKey(key string) (id uuid.UUID, score int64, state string, err error) {
parts := strings.Split(key, ":") parts := strings.Split(key, ":")
if len(parts) != 3 { if len(parts) != 3 {
return uuid.Nil, 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
@@ -182,11 +206,11 @@ func parseTaskKey(key string) (id uuid.UUID, score int64, qtype string, err erro
if err != nil { if err != nil {
return uuid.Nil, 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
qtype = parts[0] state = parts[0]
if len(qtype) != 1 || !strings.Contains("srd", qtype) { if len(state) != 1 || !strings.Contains("srd", state) {
return uuid.Nil, 0, "", fmt.Errorf("invalid id") return uuid.Nil, 0, "", fmt.Errorf("invalid id")
} }
return id, score, qtype, nil return id, score, state, nil
} }
// ListOption specifies behavior of list operation. // ListOption specifies behavior of list operation.
@@ -250,19 +274,22 @@ func Page(n int) ListOption {
return pageNumOpt(n) return pageNumOpt(n)
} }
// ListScheduledTasks retrieves tasks in the specified queue. // ListPendingTasks retrieves pending tasks from the specified queue.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListEnqueuedTasks(qname string, opts ...ListOption) ([]*EnqueuedTask, error) { func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*PendingTask, error) {
if err := validateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListEnqueued(qname, pgn) msgs, err := i.rdb.ListPending(qname, pgn)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var tasks []*EnqueuedTask var tasks []*PendingTask
for _, m := range msgs { for _, m := range msgs {
tasks = append(tasks, &EnqueuedTask{ tasks = append(tasks, &PendingTask{
Task: NewTask(m.Type, m.Payload), Task: NewTask(m.Type, m.Payload),
ID: m.ID.String(), ID: m.ID.String(),
Queue: m.Queue, Queue: m.Queue,
@@ -271,72 +298,82 @@ func (i *Inspector) ListEnqueuedTasks(qname string, opts ...ListOption) ([]*Enqu
return tasks, err return tasks, err
} }
// ListScheduledTasks retrieves tasks currently being processed. // ListActiveTasks retrieves active tasks from the specified queue.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListInProgressTasks(opts ...ListOption) ([]*InProgressTask, error) { func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*ActiveTask, error) {
if err := validateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
msgs, err := i.rdb.ListInProgress(pgn) msgs, err := i.rdb.ListActive(qname, pgn)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var tasks []*InProgressTask var tasks []*ActiveTask
for _, m := range msgs { for _, m := range msgs {
tasks = append(tasks, &InProgressTask{ tasks = append(tasks, &ActiveTask{
Task: NewTask(m.Type, m.Payload), Task: NewTask(m.Type, m.Payload),
ID: m.ID.String(), ID: m.ID.String(),
Queue: m.Queue,
}) })
} }
return tasks, err return tasks, err
} }
// ListScheduledTasks retrieves tasks in scheduled state. // ListScheduledTasks retrieves scheduled tasks from the specified queue.
// Tasks are sorted by NextEnqueueAt field in ascending order. // Tasks are sorted by NextProcessAt field in ascending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListScheduledTasks(opts ...ListOption) ([]*ScheduledTask, error) { func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*ScheduledTask, error) {
if err := validateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListScheduled(pgn) zs, err := i.rdb.ListScheduled(qname, pgn)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var tasks []*ScheduledTask var tasks []*ScheduledTask
for _, z := range zs { for _, z := range zs {
enqueueAt := time.Unix(z.Score, 0) processAt := time.Unix(z.Score, 0)
t := NewTask(z.Message.Type, z.Message.Payload) t := NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &ScheduledTask{ tasks = append(tasks, &ScheduledTask{
Task: t, Task: t,
ID: z.Message.ID.String(), ID: z.Message.ID.String(),
Queue: z.Message.Queue, Queue: z.Message.Queue,
NextEnqueueAt: enqueueAt, NextProcessAt: processAt,
score: z.Score, score: z.Score,
}) })
} }
return tasks, nil return tasks, nil
} }
// ListScheduledTasks retrieves tasks in retry state. // ListRetryTasks retrieves retry tasks from the specified queue.
// Tasks are sorted by NextEnqueueAt field in ascending order. // Tasks are sorted by NextProcessAt field in ascending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListRetryTasks(opts ...ListOption) ([]*RetryTask, error) { func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*RetryTask, error) {
if err := validateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListRetry(pgn) zs, err := i.rdb.ListRetry(qname, pgn)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var tasks []*RetryTask var tasks []*RetryTask
for _, z := range zs { for _, z := range zs {
enqueueAt := time.Unix(z.Score, 0) processAt := time.Unix(z.Score, 0)
t := NewTask(z.Message.Type, z.Message.Payload) t := NewTask(z.Message.Type, z.Message.Payload)
tasks = append(tasks, &RetryTask{ tasks = append(tasks, &RetryTask{
Task: t, Task: t,
ID: z.Message.ID.String(), ID: z.Message.ID.String(),
Queue: z.Message.Queue, Queue: z.Message.Queue,
NextEnqueueAt: enqueueAt, NextProcessAt: processAt,
MaxRetry: z.Message.Retry, MaxRetry: z.Message.Retry,
Retried: z.Message.Retried, Retried: z.Message.Retried,
// TODO: LastFailedAt: z.Message.LastFailedAt // TODO: LastFailedAt: z.Message.LastFailedAt
@@ -347,14 +384,17 @@ func (i *Inspector) ListRetryTasks(opts ...ListOption) ([]*RetryTask, error) {
return tasks, nil return tasks, nil
} }
// ListScheduledTasks retrieves tasks in retry state. // ListDeadTasks retrieves dead tasks from the specified queue.
// Tasks are sorted by LastFailedAt field in descending order. // Tasks are sorted by LastFailedAt field in descending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListDeadTasks(opts ...ListOption) ([]*DeadTask, error) { func (i *Inspector) ListDeadTasks(qname string, opts ...ListOption) ([]*DeadTask, error) {
if err := validateQueueName(qname); err != nil {
return nil, err
}
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
zs, err := i.rdb.ListDead(pgn) zs, err := i.rdb.ListDead(qname, pgn)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -377,109 +417,142 @@ func (i *Inspector) ListDeadTasks(opts ...ListOption) ([]*DeadTask, error) {
return nil, nil return nil, nil
} }
// DeleteAllScheduledTasks deletes all tasks in scheduled state, // DeleteAllScheduledTasks deletes all scheduled tasks from the specified queue,
// and reports the number tasks deleted. // and reports the number tasks deleted.
func (i *Inspector) DeleteAllScheduledTasks() (int, error) { func (i *Inspector) DeleteAllScheduledTasks(qname string) (int, error) {
n, err := i.rdb.DeleteAllScheduledTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllScheduledTasks(qname)
return int(n), err return int(n), err
} }
// DeleteAllRetryTasks deletes all tasks in retry state, // DeleteAllRetryTasks deletes all retry tasks from the specified queue,
// and reports the number tasks deleted. // and reports the number tasks deleted.
func (i *Inspector) DeleteAllRetryTasks() (int, error) { func (i *Inspector) DeleteAllRetryTasks(qname string) (int, error) {
n, err := i.rdb.DeleteAllRetryTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllRetryTasks(qname)
return int(n), err return int(n), err
} }
// DeleteAllDeadTasks deletes all tasks in dead state, // DeleteAllDeadTasks deletes all dead tasks from the specified queue,
// and reports the number tasks deleted. // and reports the number tasks deleted.
func (i *Inspector) DeleteAllDeadTasks() (int, error) { func (i *Inspector) DeleteAllDeadTasks(qname string) (int, error) {
n, err := i.rdb.DeleteAllDeadTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.DeleteAllDeadTasks(qname)
return int(n), err return int(n), err
} }
// DeleteTaskByKey deletes a task with the given key. // DeleteTaskByKey deletes a task with the given key from the given queue.
func (i *Inspector) DeleteTaskByKey(key string) error { func (i *Inspector) DeleteTaskByKey(qname, key string) error {
id, score, qtype, err := parseTaskKey(key) if err := validateQueueName(qname); err != nil {
return err
}
id, score, state, err := parseTaskKey(key)
if err != nil { if err != nil {
return err return err
} }
switch qtype { switch state {
case "s": case "s":
return i.rdb.DeleteScheduledTask(id, score) return i.rdb.DeleteScheduledTask(qname, id, score)
case "r": case "r":
return i.rdb.DeleteRetryTask(id, score) return i.rdb.DeleteRetryTask(qname, id, score)
case "d": case "d":
return i.rdb.DeleteDeadTask(id, score) return i.rdb.DeleteDeadTask(qname, id, score)
default: default:
return fmt.Errorf("invalid key") return fmt.Errorf("invalid key")
} }
} }
// EnqueueAllScheduledTasks enqueues all tasks in the scheduled state, // RunAllScheduledTasks transition all scheduled tasks to pending state within the given queue,
// and reports the number of tasks enqueued. // and reports the number of tasks transitioned.
func (i *Inspector) EnqueueAllScheduledTasks() (int, error) { func (i *Inspector) RunAllScheduledTasks(qname string) (int, error) {
n, err := i.rdb.EnqueueAllScheduledTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllScheduledTasks(qname)
return int(n), err return int(n), err
} }
// EnqueueAllRetryTasks enqueues all tasks in the retry state, // RunAllRetryTasks transition all retry tasks to pending state within the given queue,
// and reports the number of tasks enqueued. // and reports the number of tasks transitioned.
func (i *Inspector) EnqueueAllRetryTasks() (int, error) { func (i *Inspector) RunAllRetryTasks(qname string) (int, error) {
n, err := i.rdb.EnqueueAllRetryTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllRetryTasks(qname)
return int(n), err return int(n), err
} }
// EnqueueAllDeadTasks enqueues all tasks in the dead state, // RunAllDeadTasks transition all dead tasks to pending state within the given queue,
// and reports the number of tasks enqueued. // and reports the number of tasks transitioned.
func (i *Inspector) EnqueueAllDeadTasks() (int, error) { func (i *Inspector) RunAllDeadTasks(qname string) (int, error) {
n, err := i.rdb.EnqueueAllDeadTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.RunAllDeadTasks(qname)
return int(n), err return int(n), err
} }
// EnqueueTaskByKey enqueues a task with the given key. // RunTaskByKey transition a task to pending state given task key and queue name.
func (i *Inspector) EnqueueTaskByKey(key string) error { func (i *Inspector) RunTaskByKey(qname, key string) error {
id, score, qtype, err := parseTaskKey(key) if err := validateQueueName(qname); err != nil {
return err
}
id, score, state, err := parseTaskKey(key)
if err != nil { if err != nil {
return err return err
} }
switch qtype { switch state {
case "s": case "s":
return i.rdb.EnqueueScheduledTask(id, score) return i.rdb.RunScheduledTask(qname, id, score)
case "r": case "r":
return i.rdb.EnqueueRetryTask(id, score) return i.rdb.RunRetryTask(qname, id, score)
case "d": case "d":
return i.rdb.EnqueueDeadTask(id, score) return i.rdb.RunDeadTask(qname, id, score)
default: default:
return fmt.Errorf("invalid key") return fmt.Errorf("invalid key")
} }
} }
// KillAllScheduledTasks kills all tasks in scheduled state, // KillAllScheduledTasks kills all scheduled tasks within the given queue,
// and reports the number of tasks killed. // and reports the number of tasks killed.
func (i *Inspector) KillAllScheduledTasks() (int, error) { func (i *Inspector) KillAllScheduledTasks(qname string) (int, error) {
n, err := i.rdb.KillAllScheduledTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.KillAllScheduledTasks(qname)
return int(n), err return int(n), err
} }
// KillAllRetryTasks kills all tasks in retry state, // KillAllRetryTasks kills all retry tasks within the given queue,
// and reports the number of tasks killed. // and reports the number of tasks killed.
func (i *Inspector) KillAllRetryTasks() (int, error) { func (i *Inspector) KillAllRetryTasks(qname string) (int, error) {
n, err := i.rdb.KillAllRetryTasks() if err := validateQueueName(qname); err != nil {
return 0, err
}
n, err := i.rdb.KillAllRetryTasks(qname)
return int(n), err return int(n), err
} }
// KillTaskByKey kills a task with the given key. // KillTaskByKey kills a task with the given key in the given queue.
func (i *Inspector) KillTaskByKey(key string) error { func (i *Inspector) KillTaskByKey(qname, key string) error {
id, score, qtype, err := parseTaskKey(key) if err := validateQueueName(qname); err != nil {
return err
}
id, score, state, err := parseTaskKey(key)
if err != nil { if err != nil {
return err return err
} }
switch qtype { switch state {
case "s": case "s":
return i.rdb.KillScheduledTask(id, score) return i.rdb.KillScheduledTask(qname, id, score)
case "r": case "r":
return i.rdb.KillRetryTask(id, score) return i.rdb.KillRetryTask(qname, id, score)
case "d": case "d":
return fmt.Errorf("task already dead") return fmt.Errorf("task already dead")
default: default:
@@ -490,11 +563,44 @@ func (i *Inspector) KillTaskByKey(key string) error {
// PauseQueue pauses task processing on the specified queue. // PauseQueue pauses task processing on the specified queue.
// If the queue is already paused, it will return a non-nil error. // If the queue is already paused, it will return a non-nil error.
func (i *Inspector) PauseQueue(qname string) error { func (i *Inspector) PauseQueue(qname string) error {
if err := validateQueueName(qname); err != nil {
return err
}
return i.rdb.Pause(qname) return i.rdb.Pause(qname)
} }
// UnpauseQueue resumes task processing on the specified queue. // UnpauseQueue resumes task processing on the specified queue.
// If the queue is not paused, it will return a non-nil error. // If the queue is not paused, it will return a non-nil error.
func (i *Inspector) UnpauseQueue(qname string) error { func (i *Inspector) UnpauseQueue(qname string) error {
if err := validateQueueName(qname); err != nil {
return err
}
return i.rdb.Unpause(qname) return i.rdb.Unpause(qname)
} }
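A usage sketch for the pause controls, assuming a local Redis and a hypothetical "low" queue; pausing stops workers from picking up new tasks from the queue while enqueueing into it continues to succeed:

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	i := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	if err := i.PauseQueue("low"); err != nil { // non-nil error if already paused
		log.Fatal(err)
	}
	time.Sleep(10 * time.Minute) // e.g. wait out a downstream outage
	if err := i.UnpauseQueue("low"); err != nil { // non-nil error if not paused
		log.Fatal(err)
	}
}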
// ClusterKeySlot returns an integer identifying the hash slot the given queue hashes to.
func (i *Inspector) ClusterKeySlot(qname string) (int64, error) {
return i.rdb.ClusterKeySlot(qname)
}
// ClusterNode describes a node in redis cluster.
type ClusterNode struct {
// Node ID in the cluster.
ID string
// Address of the node.
Addr string
}
// ClusterNodes returns a list of nodes the given queue belongs to.
func (i *Inspector) ClusterNodes(qname string) ([]ClusterNode, error) {
nodes, err := i.rdb.ClusterNodes(qname)
if err != nil {
return nil, err
}
var res []ClusterNode
for _, node := range nodes {
res = append(res, ClusterNode{ID: node.ID, Addr: node.Addr})
}
return res, nil
}
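A sketch of inspecting cluster placement for a queue; the cluster addresses are placeholders and the calls are only meaningful against a Redis Cluster deployment:

package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	i := asynq.NewInspector(asynq.RedisClusterClientOpt{
		Addrs: []string{":7000", ":7001", ":7002"}, // placeholder cluster addresses
	})
	slot, err := i.ClusterKeySlot("default")
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := i.ClusterNodes("default")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("queue %q hashes to slot %d, served by %d node(s)\n", "default", slot, len(nodes))
	for _, n := range nodes {
		fmt.Printf("  %s %s\n", n.ID, n.Addr)
	}
}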

File diff suppressed because it is too large


@@ -7,6 +7,7 @@ package asynqtest
import ( import (
"encoding/json" "encoding/json"
"math"
"sort" "sort"
"testing" "testing"
@@ -17,6 +18,14 @@ import (
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
// EquateInt64Approx returns a Comparer option that treats int64 values
// to be equal if they are within the given margin.
func EquateInt64Approx(margin int64) cmp.Option {
return cmp.Comparer(func(a, b int64) bool {
return math.Abs(float64(a-b)) <= float64(margin)
})
}
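This is handy when comparing sorted-set scores that are Unix timestamps taken at slightly different moments; a sketch of how it might be used from a test inside this repository (asynqtest is an internal package):

package rdb

import (
	"testing"
	"time"

	"github.com/google/go-cmp/cmp"
	h "github.com/hibiken/asynq/internal/asynqtest"
	"github.com/hibiken/asynq/internal/base"
)

func TestApproxScores(t *testing.T) {
	msg := h.NewTaskMessage("send_email", nil)
	want := []base.Z{{Message: msg, Score: time.Now().Add(time.Hour).Unix()}}
	got := []base.Z{{Message: msg, Score: time.Now().Add(time.Hour).Unix() + 1}}
	// Allow the timestamp scores to differ by up to two seconds.
	if diff := cmp.Diff(want, got, h.EquateInt64Approx(2)); diff != "" {
		t.Errorf("mismatch (-want +got):\n%s", diff)
	}
}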
// SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages. // SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages.
var SortMsgOpt = cmp.Transformer("SortTaskMessages", func(in []*base.TaskMessage) []*base.TaskMessage { var SortMsgOpt = cmp.Transformer("SortTaskMessages", func(in []*base.TaskMessage) []*base.TaskMessage {
out := append([]*base.TaskMessage(nil), in...) // Copy input to avoid mutating it out := append([]*base.TaskMessage(nil), in...) // Copy input to avoid mutating it
@@ -56,6 +65,24 @@ var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.Worker
return out return out
}) })
// SortSchedulerEntryOpt is a cmp.Option to sort base.SchedulerEntry for comparing slice of entries.
var SortSchedulerEntryOpt = cmp.Transformer("SortSchedulerEntry", func(in []*base.SchedulerEntry) []*base.SchedulerEntry {
out := append([]*base.SchedulerEntry(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Spec < out[j].Spec
})
return out
})
// SortSchedulerEnqueueEventOpt is a cmp.Option to sort base.SchedulerEnqueueEvent for comparing slice of events.
var SortSchedulerEnqueueEventOpt = cmp.Transformer("SortSchedulerEnqueueEvent", func(in []*base.SchedulerEnqueueEvent) []*base.SchedulerEnqueueEvent {
out := append([]*base.SchedulerEnqueueEvent(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].EnqueuedAt.Unix() < out[j].EnqueuedAt.Unix()
})
return out
})
// SortStringSliceOpt is a cmp.Option to sort string slice. // SortStringSliceOpt is a cmp.Option to sort string slice.
var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string { var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string {
out := append([]string(nil), in...) out := append([]string(nil), in...)
@@ -68,26 +95,20 @@ var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload. // NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage { func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
return &base.TaskMessage{ return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName)
ID: uuid.New(),
Type: taskType,
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
}
} }
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a // NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name. // task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage { func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{ return &base.TaskMessage{
ID: uuid.New(), ID: uuid.New(),
Type: taskType, Type: taskType,
Queue: qname, Queue: qname,
Retry: 25, Retry: 25,
Payload: payload, Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
} }
} }
@@ -151,66 +172,113 @@ func MustUnmarshalSlice(tb testing.TB, data []string) []*base.TaskMessage {
} }
// FlushDB deletes all the keys of the currently selected DB. // FlushDB deletes all the keys of the currently selected DB.
func FlushDB(tb testing.TB, r *redis.Client) { func FlushDB(tb testing.TB, r redis.UniversalClient) {
tb.Helper() tb.Helper()
if err := r.FlushDB().Err(); err != nil { switch r := r.(type) {
tb.Fatal(err) case *redis.Client:
if err := r.FlushDB().Err(); err != nil {
tb.Fatal(err)
}
case *redis.ClusterClient:
err := r.ForEachMaster(func(c *redis.Client) error {
if err := c.FlushAll().Err(); err != nil {
return err
}
return nil
})
if err != nil {
tb.Fatal(err)
}
} }
} }
// SeedEnqueuedQueue initializes the specified queue with the given messages. // SeedPendingQueue initializes the specified queue with the given messages.
// func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
// If queue name option is not passed, it defaults to the default queue.
func SeedEnqueuedQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage, queueOpt ...string) {
tb.Helper() tb.Helper()
queue := base.DefaultQueue r.SAdd(base.AllQueues, qname)
if len(queueOpt) > 0 { seedRedisList(tb, r, base.QueueKey(qname), msgs)
queue = base.QueueKey(queueOpt[0])
}
r.SAdd(base.AllQueues, queue)
seedRedisList(tb, r, queue, msgs)
} }
// SeedAllEnqueuedQueues initializes all of the specified queues with the given messages. // SeedActiveQueue initializes the active queue with the given messages.
// func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
// enqueued maps a queue name to a list of messages.
func SeedAllEnqueuedQueues(tb testing.TB, r *redis.Client, enqueued map[string][]*base.TaskMessage) {
for q, msgs := range enqueued {
SeedEnqueuedQueue(tb, r, msgs, q)
}
}
// SeedInProgressQueue initializes the in-progress queue with the given messages.
func SeedInProgressQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage) {
tb.Helper() tb.Helper()
seedRedisList(tb, r, base.InProgressQueue, msgs) r.SAdd(base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs)
} }
// SeedScheduledQueue initializes the scheduled queue with the given messages. // SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r *redis.Client, entries []base.Z) { func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.ScheduledQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries)
} }
// SeedRetryQueue initializes the retry queue with the given messages. // SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r *redis.Client, entries []base.Z) { func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.RetryQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries)
} }
// SeedDeadQueue initializes the dead queue with the given messages. // SeedDeadQueue initializes the dead queue with the given messages.
func SeedDeadQueue(tb testing.TB, r *redis.Client, entries []base.Z) { func SeedDeadQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.DeadQueue, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadKey(qname), entries)
} }
// SeedDeadlines initializes the deadlines set with the given entries. // SeedDeadlines initializes the deadlines set with the given entries.
func SeedDeadlines(tb testing.TB, r *redis.Client, entries []base.Z) { func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper() tb.Helper()
seedRedisZSet(tb, r, base.KeyDeadlines, entries) r.SAdd(base.AllQueues, qname)
seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries)
} }
func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) { // SeedAllPendingQueues initializes all of the specified queues with the given messages.
//
// pending maps a queue name to a list of messages.
func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) {
for q, msgs := range pending {
SeedPendingQueue(tb, r, msgs, q)
}
}
// SeedAllActiveQueues initializes all of the specified active queues with the given messages.
func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) {
for q, msgs := range active {
SeedActiveQueue(tb, r, msgs, q)
}
}
// SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries.
func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) {
for q, entries := range scheduled {
SeedScheduledQueue(tb, r, entries, q)
}
}
// SeedAllRetryQueues initializes all of the specified retry queues with the given entries.
func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) {
for q, entries := range retry {
SeedRetryQueue(tb, r, entries, q)
}
}
// SeedAllDeadQueues initializes all of the specified dead queues with the given entries.
func SeedAllDeadQueues(tb testing.TB, r redis.UniversalClient, dead map[string][]base.Z) {
for q, entries := range dead {
SeedDeadQueue(tb, r, entries, q)
}
}
// SeedAllDeadlines initializes all of the deadlines with the given entries.
func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) {
for q, entries := range deadlines {
SeedDeadlines(tb, r, entries, q)
}
}
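A sketch of seeding and reading back per-queue data with the new helpers, written as a test inside this repository; the Redis address and DB number are placeholders:

package rdb

import (
	"testing"

	"github.com/go-redis/redis/v7"
	h "github.com/hibiken/asynq/internal/asynqtest"
	"github.com/hibiken/asynq/internal/base"
)

func TestSeedHelpers(t *testing.T) {
	r := redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 14}) // placeholder test DB
	defer r.Close()
	h.FlushDB(t, r)

	// Seed two queues in one call; each queue name is also added to asynq:queues.
	pending := map[string][]*base.TaskMessage{
		"default":  {h.NewTaskMessage("send_email", nil)},
		"critical": {h.NewTaskMessageWithQueue("generate_csv", nil, "critical")},
	}
	h.SeedAllPendingQueues(t, r, pending)

	if got := h.GetPendingMessages(t, r, "critical"); len(got) != 1 {
		t.Errorf("got %d pending messages in critical queue, want 1", len(got))
	}
}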
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs) data := MustMarshalSlice(tb, msgs)
for _, s := range data { for _, s := range data {
if err := c.LPush(key, s).Err(); err != nil { if err := c.LPush(key, s).Err(); err != nil {
@@ -219,7 +287,7 @@ func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.Task
} }
} }
func seedRedisZSet(tb testing.TB, c *redis.Client, key string, items []base.Z) { func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string, items []base.Z) {
for _, item := range items { for _, item := range items {
z := &redis.Z{Member: MustMarshal(tb, item.Message), Score: float64(item.Score)} z := &redis.Z{Member: MustMarshal(tb, item.Message), Score: float64(item.Score)}
if err := c.ZAdd(key, z).Err(); err != nil { if err := c.ZAdd(key, z).Err(); err != nil {
@@ -228,77 +296,71 @@ func seedRedisZSet(tb testing.TB, c *redis.Client, key string, items []base.Z) {
} }
} }
// GetEnqueuedMessages returns all task messages in the specified queue. // GetPendingMessages returns all pending messages in the given queue.
// func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
// If queue name option is not passed, it defaults to the default queue.
func GetEnqueuedMessages(tb testing.TB, r *redis.Client, queueOpt ...string) []*base.TaskMessage {
tb.Helper() tb.Helper()
queue := base.DefaultQueue return getListMessages(tb, r, base.QueueKey(qname))
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
return getListMessages(tb, r, queue)
} }
// GetInProgressMessages returns all task messages in the in-progress queue. // GetActiveMessages returns all active messages in the given queue.
func GetInProgressMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getListMessages(tb, r, base.InProgressQueue) return getListMessages(tb, r, base.ActiveKey(qname))
} }
// GetScheduledMessages returns all task messages in the scheduled queue. // GetScheduledMessages returns all scheduled task messages in the given queue.
func GetScheduledMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.ScheduledQueue) return getZSetMessages(tb, r, base.ScheduledKey(qname))
} }
// GetRetryMessages returns all task messages in the retry queue. // GetRetryMessages returns all retry messages in the given queue.
func GetRetryMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.RetryQueue) return getZSetMessages(tb, r, base.RetryKey(qname))
} }
// GetDeadMessages returns all task messages in the dead queue. // GetDeadMessages returns all dead messages in the given queue.
func GetDeadMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage { func GetDeadMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper() tb.Helper()
return getZSetMessages(tb, r, base.DeadQueue) return getZSetMessages(tb, r, base.DeadKey(qname))
} }
// GetScheduledEntries returns all task messages and its score in the scheduled queue. // GetScheduledEntries returns all scheduled messages and its score in the given queue.
func GetScheduledEntries(tb testing.TB, r *redis.Client) []base.Z { func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.ScheduledQueue) return getZSetEntries(tb, r, base.ScheduledKey(qname))
} }
// GetRetryEntries returns all task messages and its score in the retry queue. // GetRetryEntries returns all retry messages and its score in the given queue.
func GetRetryEntries(tb testing.TB, r *redis.Client) []base.Z { func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.RetryQueue) return getZSetEntries(tb, r, base.RetryKey(qname))
} }
// GetDeadEntries returns all task messages and its score in the dead queue. // GetDeadEntries returns all dead messages and its score in the given queue.
func GetDeadEntries(tb testing.TB, r *redis.Client) []base.Z { func GetDeadEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.DeadQueue) return getZSetEntries(tb, r, base.DeadKey(qname))
} }
// GetDeadlinesEntries returns all task messages and its score in the deadlines set. // GetDeadlinesEntries returns all task messages and its score in the deadlines set for the given queue.
func GetDeadlinesEntries(tb testing.TB, r *redis.Client) []base.Z { func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper() tb.Helper()
return getZSetEntries(tb, r, base.KeyDeadlines) return getZSetEntries(tb, r, base.DeadlinesKey(qname))
} }
func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage { func getListMessages(tb testing.TB, r redis.UniversalClient, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val() data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data) return MustUnmarshalSlice(tb, data)
} }
func getZSetMessages(tb testing.TB, r *redis.Client, zset string) []*base.TaskMessage { func getZSetMessages(tb testing.TB, r redis.UniversalClient, zset string) []*base.TaskMessage {
data := r.ZRange(zset, 0, -1).Val() data := r.ZRange(zset, 0, -1).Val()
return MustUnmarshalSlice(tb, data) return MustUnmarshalSlice(tb, data)
} }
func getZSetEntries(tb testing.TB, r *redis.Client, zset string) []base.Z { func getZSetEntries(tb testing.TB, r redis.UniversalClient, zset string) []base.Z {
data := r.ZRangeWithScores(zset, 0, -1).Val() data := r.ZRangeWithScores(zset, 0, -1).Val()
var entries []base.Z var entries []base.Z
for _, z := range data { for _, z := range data {


@@ -9,6 +9,7 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"sort"
"strings" "strings"
"sync" "sync"
"time" "time"
@@ -18,54 +19,115 @@ import (
) )
// Version of asynq library and CLI. // Version of asynq library and CLI.
const Version = "0.10.0" const Version = "0.13.0"
// DefaultQueueName is the queue name used if none are specified by user. // DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default" const DefaultQueueName = "default"
// Redis keys // DefaultQueue is the redis key for the default queue.
var DefaultQueue = QueueKey(DefaultQueueName)
// Global Redis keys.
const ( const (
AllServers = "asynq:servers" // ZSET AllServers = "asynq:servers" // ZSET
serversPrefix = "asynq:servers:" // STRING - asynq:ps:<host>:<pid>:<serverid> AllWorkers = "asynq:workers" // ZSET
AllWorkers = "asynq:workers" // ZSET AllSchedulers = "asynq:schedulers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid>:<serverid> AllQueues = "asynq:queues" // SET
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd> CancelChannel = "asynq:cancel" // PubSub channel
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
AllQueues = "asynq:queues" // SET
DefaultQueue = QueuePrefix + DefaultQueueName // LIST
ScheduledQueue = "asynq:scheduled" // ZSET
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
KeyDeadlines = "asynq:deadlines" // ZSET
PausedQueues = "asynq:paused" // SET
CancelChannel = "asynq:cancel" // PubSub channel
) )
// QueueKey returns a redis key for the given queue name. // QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string { func QueueKey(qname string) string {
return QueuePrefix + strings.ToLower(qname) return fmt.Sprintf("asynq:{%s}", qname)
} }
// ProcessedKey returns a redis key for processed count for the given day. // ActiveKey returns a redis key for the active tasks.
func ProcessedKey(t time.Time) string { func ActiveKey(qname string) string {
return processedPrefix + t.UTC().Format("2006-01-02") return fmt.Sprintf("asynq:{%s}:active", qname)
} }
// FailureKey returns a redis key for failure count for the given day. // ScheduledKey returns a redis key for the scheduled tasks.
func FailureKey(t time.Time) string { func ScheduledKey(qname string) string {
return failurePrefix + t.UTC().Format("2006-01-02") return fmt.Sprintf("asynq:{%s}:scheduled", qname)
}
// RetryKey returns a redis key for the retry tasks.
func RetryKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:retry", qname)
}
// DeadKey returns a redis key for the dead tasks.
func DeadKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:dead", qname)
}
// DeadlinesKey returns a redis key for the deadlines.
func DeadlinesKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:deadlines", qname)
}
// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
return fmt.Sprintf("asynq:{%s}:paused", qname)
}
// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:processed:%s", qname, t.UTC().Format("2006-01-02"))
}
// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
return fmt.Sprintf("asynq:{%s}:failed:%s", qname, t.UTC().Format("2006-01-02"))
} }
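All per-queue keys now wrap the queue name in a Redis hash tag ({qname}), so every key belonging to one queue hashes to the same cluster slot and the multi-key Lua scripts used elsewhere in this diff keep working on Redis Cluster. A small sketch (only compilable from within this repository, since base is an internal package):

package main

import (
	"fmt"

	"github.com/hibiken/asynq/internal/base"
)

func main() {
	// Everything for the "critical" queue shares the "{critical}" hash tag.
	fmt.Println(base.QueueKey("critical"))     // asynq:{critical}
	fmt.Println(base.ActiveKey("critical"))    // asynq:{critical}:active
	fmt.Println(base.ScheduledKey("critical")) // asynq:{critical}:scheduled
	fmt.Println(base.RetryKey("critical"))     // asynq:{critical}:retry
	fmt.Println(base.DeadKey("critical"))      // asynq:{critical}:dead
	fmt.Println(base.PausedKey("critical"))    // asynq:{critical}:paused
}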
// ServerInfoKey returns a redis key for process info. // ServerInfoKey returns a redis key for process info.
func ServerInfoKey(hostname string, pid int, sid string) string { func ServerInfoKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid) return fmt.Sprintf("asynq:servers:{%s:%d:%s}", hostname, pid, serverID)
} }
// WorkersKey returns a redis key for the workers given hostname, pid, and server ID. // WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int, sid string) string { func WorkersKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid) return fmt.Sprintf("asynq:workers:{%s:%d:%s}", hostname, pid, serverID)
}
// SchedulerEntriesKey returns a redis key for the scheduler entries given scheduler ID.
func SchedulerEntriesKey(schedulerID string) string {
return fmt.Sprintf("asynq:schedulers:{%s}", schedulerID)
}
// SchedulerHistoryKey returns a redis key for the scheduler's history for the given entry.
func SchedulerHistoryKey(entryID string) string {
return fmt.Sprintf("asynq:scheduler_history:%s", entryID)
}
// UniqueKey returns a redis key with the given type, payload, and queue name.
func UniqueKey(qname, tasktype string, payload map[string]interface{}) string {
return fmt.Sprintf("asynq:{%s}:unique:%s:%s", qname, tasktype, serializePayload(payload))
}
func serializePayload(payload map[string]interface{}) string {
if payload == nil {
return "nil"
}
type entry struct {
k string
v interface{}
}
var es []entry
for k, v := range payload {
es = append(es, entry{k, v})
}
// sort entries by key
sort.Slice(es, func(i, j int) bool { return es[i].k < es[j].k })
var b strings.Builder
for _, e := range es {
if b.Len() > 0 {
b.WriteString(",")
}
b.WriteString(fmt.Sprintf("%s=%v", e.k, e.v))
}
return b.String()
} }
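Because serializePayload sorts entries by key, the unique key is deterministic regardless of map iteration order, and the embedded queue name scopes uniqueness per queue. A small sketch (again only compilable inside this repository):

package main

import (
	"fmt"

	"github.com/hibiken/asynq/internal/base"
)

func main() {
	p1 := map[string]interface{}{"user_id": 42, "tmpl": "welcome"}
	p2 := map[string]interface{}{"tmpl": "welcome", "user_id": 42}

	// Same payload in any insertion order yields the same key.
	fmt.Println(base.UniqueKey("default", "email:send", p1) == base.UniqueKey("default", "email:send", p2)) // true

	// The queue name is part of the key, so uniqueness is scoped per queue.
	fmt.Println(base.UniqueKey("default", "email:send", p1))  // asynq:{default}:unique:email:send:tmpl=welcome,user_id=42
	fmt.Println(base.UniqueKey("critical", "email:send", p1)) // asynq:{critical}:unique:email:send:tmpl=welcome,user_id=42
}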
// TaskMessage is the internal representation of a task with additional metadata fields. // TaskMessage is the internal representation of a task with additional metadata fields.
@@ -157,10 +219,10 @@ const (
// StatusIdle indicates the server is in idle state. // StatusIdle indicates the server is in idle state.
StatusIdle ServerStatusValue = iota StatusIdle ServerStatusValue = iota
// StatusRunning indicates the servier is up and processing tasks. // StatusRunning indicates the server is up and active.
StatusRunning StatusRunning
// StatusQuiet indicates the server is up but not processing new tasks. // StatusQuiet indicates the server is up but not active.
StatusQuiet StatusQuiet
// StatusStopped indicates the server has been stopped. // StatusStopped indicates the server has been stopped.
@@ -222,7 +284,41 @@ type WorkerInfo struct {
Started time.Time Started time.Time
} }
// Cancelations is a collection that holds cancel functions for all in-progress tasks. // SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
// Identifier of this entry.
ID string
// Spec describes the schedule of this entry.
Spec string
// Type is the task type of the periodic task.
Type string
// Payload is the payload of the periodic task.
Payload map[string]interface{}
// Opts is the options for the periodic task.
Opts []string
// Next shows the next time the task will be enqueued.
Next time.Time
// Prev shows the last time the task was enqueued.
// Zero time if task was never enqueued.
Prev time.Time
}
// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
// ID of the task that was enqueued.
TaskID string
// Time the task was enqueued.
EnqueuedAt time.Time
}
// Cancelations is a collection that holds cancel functions for all active tasks.
// //
// Cancelations are safe for concurrent use by multiple goroutines. // Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct { type Cancelations struct {
@@ -273,8 +369,8 @@ type Broker interface {
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Kill(msg *TaskMessage, errMsg string) error Kill(msg *TaskMessage, errMsg string) error
CheckAndEnqueue() error CheckAndEnqueue(qnames ...string) error
ListDeadlineExceeded(deadline time.Time) ([]*TaskMessage, error) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*TaskMessage, error)
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error ClearServerState(host string, pid int, serverID string) error
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers


@@ -20,7 +20,8 @@ func TestQueueKey(t *testing.T) {
qname string qname string
want string want string
}{ }{
{"custom", "asynq:queues:custom"}, {"default", "asynq:{default}"},
{"custom", "asynq:{custom}"},
} }
for _, tc := range tests { for _, tc := range tests {
@@ -31,36 +32,140 @@ func TestQueueKey(t *testing.T) {
} }
} }
func TestProcessedKey(t *testing.T) { func TestActiveKey(t *testing.T) {
tests := []struct { tests := []struct {
input time.Time qname string
want string want string
}{ }{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:processed:2019-11-14"}, {"default", "asynq:{default}:active"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:processed:2020-12-01"}, {"custom", "asynq:{custom}:active"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:processed:2020-01-06"},
} }
for _, tc := range tests { for _, tc := range tests {
got := ProcessedKey(tc.input) got := ActiveKey(tc.qname)
if got != tc.want {
t.Errorf("ActiveKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestDeadlinesKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:deadlines"},
{"custom", "asynq:{custom}:deadlines"},
}
for _, tc := range tests {
got := DeadlinesKey(tc.qname)
if got != tc.want {
t.Errorf("DeadlinesKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestScheduledKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:scheduled"},
{"custom", "asynq:{custom}:scheduled"},
}
for _, tc := range tests {
got := ScheduledKey(tc.qname)
if got != tc.want {
t.Errorf("ScheduledKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestRetryKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:retry"},
{"custom", "asynq:{custom}:retry"},
}
for _, tc := range tests {
got := RetryKey(tc.qname)
if got != tc.want {
t.Errorf("RetryKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestDeadKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:dead"},
{"custom", "asynq:{custom}:dead"},
}
for _, tc := range tests {
got := DeadKey(tc.qname)
if got != tc.want {
t.Errorf("DeadKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestPausedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:paused"},
{"custom", "asynq:{custom}:paused"},
}
for _, tc := range tests {
got := PausedKey(tc.qname)
if got != tc.want {
t.Errorf("PausedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) {
tests := []struct {
qname string
input time.Time
want string
}{
{"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:processed:2019-11-14"},
{"critical", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{critical}:processed:2020-12-01"},
{"default", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{default}:processed:2020-01-06"},
}
for _, tc := range tests {
got := ProcessedKey(tc.qname, tc.input)
if got != tc.want { if got != tc.want {
t.Errorf("ProcessedKey(%v) = %q, want %q", tc.input, got, tc.want) t.Errorf("ProcessedKey(%v) = %q, want %q", tc.input, got, tc.want)
} }
} }
} }
func TestFailureKey(t *testing.T) { func TestFailedKey(t *testing.T) {
tests := []struct { tests := []struct {
qname string
input time.Time input time.Time
want string want string
}{ }{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:failure:2019-11-14"}, {"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:failed:2019-11-14"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:failure:2020-12-01"}, {"custom", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{custom}:failed:2020-12-01"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:failure:2020-01-06"}, {"low", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{low}:failed:2020-01-06"},
} }
for _, tc := range tests { for _, tc := range tests {
got := FailureKey(tc.input) got := FailedKey(tc.qname, tc.input)
if got != tc.want { if got != tc.want {
t.Errorf("FailureKey(%v) = %q, want %q", tc.input, got, tc.want) t.Errorf("FailureKey(%v) = %q, want %q", tc.input, got, tc.want)
} }
@@ -74,8 +179,8 @@ func TestServerInfoKey(t *testing.T) {
sid string sid string
want string want string
}{ }{
{"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"}, {"localhost", 9876, "server123", "asynq:servers:{localhost:9876:server123}"},
{"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"}, {"127.0.0.1", 1234, "server987", "asynq:servers:{127.0.0.1:1234:server987}"},
} }
for _, tc := range tests { for _, tc := range tests {
@@ -94,8 +199,8 @@ func TestWorkersKey(t *testing.T) {
sid string sid string
want string want string
}{ }{
{"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"}, {"localhost", 9876, "server1", "asynq:workers:{localhost:9876:server1}"},
{"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"}, {"127.0.0.1", 1234, "server2", "asynq:workers:{127.0.0.1:1234:server2}"},
} }
for _, tc := range tests { for _, tc := range tests {
@@ -107,6 +212,98 @@ func TestWorkersKey(t *testing.T) {
} }
} }
func TestSchedulerEntriesKey(t *testing.T) {
tests := []struct {
schedulerID string
want string
}{
{"localhost:9876:scheduler123", "asynq:schedulers:{localhost:9876:scheduler123}"},
{"127.0.0.1:1234:scheduler987", "asynq:schedulers:{127.0.0.1:1234:scheduler987}"},
}
for _, tc := range tests {
got := SchedulerEntriesKey(tc.schedulerID)
if got != tc.want {
t.Errorf("SchedulerEntriesKey(%q) = %q, want %q", tc.schedulerID, got, tc.want)
}
}
}
func TestSchedulerHistoryKey(t *testing.T) {
tests := []struct {
entryID string
want string
}{
{"entry876", "asynq:scheduler_history:entry876"},
{"entry345", "asynq:scheduler_history:entry345"},
}
for _, tc := range tests {
got := SchedulerHistoryKey(tc.entryID)
if got != tc.want {
t.Errorf("SchedulerHistoryKey(%q) = %q, want %q",
tc.entryID, got, tc.want)
}
}
}
func TestUniqueKey(t *testing.T) {
tests := []struct {
desc string
qname string
tasktype string
payload map[string]interface{}
want string
}{
{
"with primitive types",
"default",
"email:send",
map[string]interface{}{"a": 123, "b": "hello", "c": true},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
},
{
"with unsorted keys",
"default",
"email:send",
map[string]interface{}{"b": "hello", "c": true, "a": 123},
"asynq:{default}:unique:email:send:a=123,b=hello,c=true",
},
{
"with composite types",
"default",
"email:send",
map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}},
"asynq:{default}:unique:email:send:address=map[city:Boston line:123 Main St state:MA],names=[bob mike rob]",
},
{
"with complex types",
"default",
"email:send",
map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour},
"asynq:{default}:unique:email:send:duration=1h0m0s,time=2020-07-28 00:00:00 +0000 UTC",
},
{
"with nil payload",
"default",
"reindex",
nil,
"asynq:{default}:unique:reindex:nil",
},
}
for _, tc := range tests {
got := UniqueKey(tc.qname, tc.tasktype, tc.payload)
if got != tc.want {
t.Errorf("%s: UniqueKey(%q, %q, %v) = %q, want %q", tc.desc, tc.qname, tc.tasktype, tc.payload, got, tc.want)
}
}
}
func TestMessageEncoding(t *testing.T) { func TestMessageEncoding(t *testing.T) {
id := uuid.New() id := uuid.New()
tests := []struct { tests := []struct {


@@ -1,41 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package rdb
import (
"testing"
"github.com/go-redis/redis/v7"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
func BenchmarkDone(b *testing.B) {
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
DB: 8,
})
h.FlushDB(b, r)
// populate in-progress queue with messages
var inProgress []*base.TaskMessage
for i := 0; i < 40; i++ {
inProgress = append(inProgress,
h.NewTaskMessage("send_email", map[string]interface{}{"subject": "hello", "recipient_id": 123}))
}
h.SeedInProgressQueue(b, r, inProgress)
rdb := NewRDB(r)
b.ResetTimer()
for n := 0; n < b.N; n++ {
b.StopTimer()
msg := h.NewTaskMessage("reindex", map[string]interface{}{"config": "path/to/config/file"})
r.LPush(base.InProgressQueue, h.MustMarshal(b, msg))
b.StartTimer()
rdb.Done(msg)
}
}


@@ -7,7 +7,6 @@ package rdb
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"sort"
"strings" "strings"
"time" "time"
@@ -17,54 +16,60 @@ import (
"github.com/spf13/cast" "github.com/spf13/cast"
) )
// Stats represents a state of queues at a certain time. // AllQueues returns a list of all queue names.
type Stats struct { func (r *RDB) AllQueues() ([]string, error) {
Enqueued int return r.client.SMembers(base.AllQueues).Result()
InProgress int
Scheduled int
Retry int
Dead int
Processed int
Failed int
Queues []*Queue
Timestamp time.Time
} }
// Queue represents a task queue. // Stats represents a state of queues at a certain time.
type Queue struct { type Stats struct {
// Name of the queue (e.g. "default", "critical"). // Name of the queue (e.g. "default", "critical").
// Note: It doesn't include the prefix "asynq:queues:". Queue string
Name string
// Paused indicates whether the queue is paused. // Paused indicates whether the queue is paused.
// If true, tasks in the queue should not be processed. // If true, tasks in the queue should not be processed.
Paused bool Paused bool
// Size is the total number of tasks in the queue.
// Size is the number of tasks in the queue.
Size int Size int
// Number of tasks in each state.
Pending int
Active int
Scheduled int
Retry int
Dead int
// Total number of tasks processed during the current date.
// The number includes both succeeded and failed tasks.
Processed int
// Total number of tasks failed during the current date.
Failed int
// Time this stats was taken.
Timestamp time.Time
} }
// DailyStats holds aggregate data for a given day. // DailyStats holds aggregate data for a given day.
type DailyStats struct { type DailyStats struct {
// Name of the queue (e.g. "default", "critical").
Queue string
// Total number of tasks processed during the given day.
// The number includes both succeeded and failed tasks.
Processed int Processed int
Failed int // Total number of tasks failed during the given day.
Time time.Time Failed int
// Date this stats was taken.
Time time.Time
} }
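Stats and DailyStats are now reported per queue; a sketch of reading them through the internal rdb client defined further down in this diff, assuming a local Redis and that the "default" queue has been used at least once (otherwise ErrQueueNotFound is returned):

package main

import (
	"fmt"
	"log"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/rdb"
)

func main() {
	r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379"}))
	stats, err := r.CurrentStats("default")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("queue=%s paused=%t size=%d pending=%d active=%d scheduled=%d retry=%d dead=%d\n",
		stats.Queue, stats.Paused, stats.Size,
		stats.Pending, stats.Active, stats.Scheduled, stats.Retry, stats.Dead)

	// Daily processed/failed counters for the last 7 days.
	daily, err := r.HistoricalStats("default", 7)
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range daily {
		fmt.Printf("%s processed=%d failed=%d\n", d.Time.Format("2006-01-02"), d.Processed, d.Failed)
	}
}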
// KEYS[1] -> asynq:queues // KEYS[1] -> asynq:<qname>
// KEYS[2] -> asynq:in_progress // KEYS[2] -> asynq:<qname>:active
// KEYS[3] -> asynq:scheduled // KEYS[3] -> asynq:<qname>:scheduled
// KEYS[4] -> asynq:retry // KEYS[4] -> asynq:<qname>:retry
// KEYS[5] -> asynq:dead // KEYS[5] -> asynq:<qname>:dead
// KEYS[6] -> asynq:processed:<yyyy-mm-dd> // KEYS[6] -> asynq:<qname>:processed:<yyyy-mm-dd>
// KEYS[7] -> asynq:failure:<yyyy-mm-dd> // KEYS[7] -> asynq:<qname>:failed:<yyyy-mm-dd>
// KEYS[8] -> asynq:<qname>:paused
var currentStatsCmd = redis.NewScript(` var currentStatsCmd = redis.NewScript(`
local res = {} local res = {}
local queues = redis.call("SMEMBERS", KEYS[1]) table.insert(res, KEYS[1])
for _, qkey in ipairs(queues) do table.insert(res, redis.call("LLEN", KEYS[1]))
table.insert(res, qkey)
table.insert(res, redis.call("LLEN", qkey))
end
table.insert(res, KEYS[2]) table.insert(res, KEYS[2])
table.insert(res, redis.call("LLEN", KEYS[2])) table.insert(res, redis.call("LLEN", KEYS[2]))
table.insert(res, KEYS[3]) table.insert(res, KEYS[3])
@@ -78,28 +83,38 @@ local p = redis.call("GET", KEYS[6])
if p then if p then
pcount = tonumber(p) pcount = tonumber(p)
end end
table.insert(res, "processed") table.insert(res, KEYS[6])
table.insert(res, pcount) table.insert(res, pcount)
local fcount = 0 local fcount = 0
local f = redis.call("GET", KEYS[7]) local f = redis.call("GET", KEYS[7])
if f then if f then
fcount = tonumber(f) fcount = tonumber(f)
end end
table.insert(res, "failed") table.insert(res, KEYS[7])
table.insert(res, fcount) table.insert(res, fcount)
table.insert(res, KEYS[8])
table.insert(res, redis.call("EXISTS", KEYS[8]))
return res`) return res`)
// CurrentStats returns a current state of the queues. // CurrentStats returns a current state of the queues.
func (r *RDB) CurrentStats() (*Stats, error) { func (r *RDB) CurrentStats(qname string) (*Stats, error) {
exists, err := r.client.SIsMember(base.AllQueues, qname).Result()
if err != nil {
return nil, err
}
if !exists {
return nil, &ErrQueueNotFound{qname}
}
now := time.Now() now := time.Now()
res, err := currentStatsCmd.Run(r.client, []string{ res, err := currentStatsCmd.Run(r.client, []string{
base.AllQueues, base.QueueKey(qname),
base.InProgressQueue, base.ActiveKey(qname),
base.ScheduledQueue, base.ScheduledKey(qname),
base.RetryQueue, base.RetryKey(qname),
base.DeadQueue, base.DeadKey(qname),
base.ProcessedKey(now), base.ProcessedKey(qname, now),
base.FailureKey(now), base.FailedKey(qname, now),
base.PausedKey(qname),
}).Result() }).Result()
if err != nil { if err != nil {
return nil, err return nil, err
@@ -108,46 +123,43 @@ func (r *RDB) CurrentStats() (*Stats, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
paused, err := r.client.SMembersMap(base.PausedQueues).Result()
if err != nil {
return nil, err
}
stats := &Stats{ stats := &Stats{
Queues: make([]*Queue, 0), Queue: qname,
Timestamp: now, Timestamp: now,
} }
size := 0
for i := 0; i < len(data); i += 2 { for i := 0; i < len(data); i += 2 {
key := cast.ToString(data[i]) key := cast.ToString(data[i])
val := cast.ToInt(data[i+1]) val := cast.ToInt(data[i+1])
switch key {
switch { case base.QueueKey(qname):
case strings.HasPrefix(key, base.QueuePrefix): stats.Pending = val
stats.Enqueued += val size += val
q := Queue{ case base.ActiveKey(qname):
Name: strings.TrimPrefix(key, base.QueuePrefix), stats.Active = val
Size: val, size += val
} case base.ScheduledKey(qname):
if _, exist := paused[key]; exist {
q.Paused = true
}
stats.Queues = append(stats.Queues, &q)
case key == base.InProgressQueue:
stats.InProgress = val
case key == base.ScheduledQueue:
stats.Scheduled = val stats.Scheduled = val
case key == base.RetryQueue: size += val
case base.RetryKey(qname):
stats.Retry = val stats.Retry = val
case key == base.DeadQueue: size += val
case base.DeadKey(qname):
stats.Dead = val stats.Dead = val
case key == "processed": size += val
case base.ProcessedKey(qname, now):
stats.Processed = val stats.Processed = val
case key == "failed": case base.FailedKey(qname, now):
stats.Failed = val stats.Failed = val
case base.PausedKey(qname):
if val == 0 {
stats.Paused = false
} else {
stats.Paused = true
}
} }
} }
sort.Slice(stats.Queues, func(i, j int) bool { stats.Size = size
return stats.Queues[i].Name < stats.Queues[j].Name
})
return stats, nil return stats, nil
} }
@@ -156,16 +168,23 @@ local res = {}
for _, key in ipairs(KEYS) do for _, key in ipairs(KEYS) do
local n = redis.call("GET", key) local n = redis.call("GET", key)
if not n then if not n then
n = 0 n = 0
end end
table.insert(res, tonumber(n)) table.insert(res, tonumber(n))
end end
return res`) return res`)
// HistoricalStats returns a list of stats from the last n days. // HistoricalStats returns a list of stats from the last n days for the given queue.
func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) { func (r *RDB) HistoricalStats(qname string, n int) ([]*DailyStats, error) {
if n < 1 { if n < 1 {
return []*DailyStats{}, nil return nil, fmt.Errorf("the number of days must be positive")
}
exists, err := r.client.SIsMember(base.AllQueues, qname).Result()
if err != nil {
return nil, err
}
if !exists {
return nil, &ErrQueueNotFound{qname}
} }
const day = 24 * time.Hour const day = 24 * time.Hour
now := time.Now().UTC() now := time.Now().UTC()
@@ -174,10 +193,10 @@ func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
for i := 0; i < n; i++ { for i := 0; i < n; i++ {
ts := now.Add(-time.Duration(i) * day) ts := now.Add(-time.Duration(i) * day)
days = append(days, ts) days = append(days, ts)
keys = append(keys, base.ProcessedKey(ts)) keys = append(keys, base.ProcessedKey(qname, ts))
keys = append(keys, base.FailureKey(ts)) keys = append(keys, base.FailedKey(qname, ts))
} }
res, err := historicalStatsCmd.Run(r.client, keys, len(keys)).Result() res, err := historicalStatsCmd.Run(r.client, keys).Result()
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -188,6 +207,7 @@ func (r *RDB) HistoricalStats(n int) ([]*DailyStats, error) {
var stats []*DailyStats var stats []*DailyStats
for i := 0; i < len(data); i += 2 { for i := 0; i < len(data); i += 2 {
stats = append(stats, &DailyStats{ stats = append(stats, &DailyStats{
Queue: qname,
Processed: data[i], Processed: data[i],
Failed: data[i+1], Failed: data[i+1],
Time: days[i/2], Time: days[i/2],
@@ -202,8 +222,21 @@ func (r *RDB) RedisInfo() (map[string]string, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
return parseInfo(res)
}
// RedisClusterInfo returns a map of redis cluster info.
func (r *RDB) RedisClusterInfo() (map[string]string, error) {
res, err := r.client.ClusterInfo().Result()
if err != nil {
return nil, err
}
return parseInfo(res)
}
func parseInfo(infoStr string) (map[string]string, error) {
info := make(map[string]string) info := make(map[string]string)
lines := strings.Split(res, "\r\n") lines := strings.Split(infoStr, "\r\n")
for _, l := range lines { for _, l := range lines {
kv := strings.Split(l, ":") kv := strings.Split(l, ":")
if len(kv) == 2 { if len(kv) == 2 {
@@ -238,18 +271,20 @@ func (p Pagination) stop() int64 {
return int64(p.Size*p.Page + p.Size - 1) return int64(p.Size*p.Page + p.Size - 1)
} }
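stop() above gives the zero-based index of the last element of a page; a tiny worked example, assuming the companion start() is Size*Page (not shown in this hunk):

package main

import "fmt"

func main() {
	const size, page = 20, 2
	start := int64(size * page)         // assumed first index of page 2
	stop := int64(size*page + size - 1) // from the hunk above: last index of page 2
	fmt.Println(start, stop)            // 40 59, e.g. LRANGE asynq:{default} 40 59
}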
// ListEnqueued returns enqueued tasks that are ready to be processed. // ListPending returns pending tasks that are ready to be processed.
func (r *RDB) ListEnqueued(qname string, pgn Pagination) ([]*base.TaskMessage, error) { func (r *RDB) ListPending(qname string, pgn Pagination) ([]*base.TaskMessage, error) {
qkey := base.QueueKey(qname) if !r.client.SIsMember(base.AllQueues, qname).Val() {
if !r.client.SIsMember(base.AllQueues, qkey).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname) return nil, fmt.Errorf("queue %q does not exist", qname)
} }
return r.listMessages(qkey, pgn) return r.listMessages(base.QueueKey(qname), pgn)
} }
// ListInProgress returns all tasks that are currently being processed. // ListActive returns all tasks that are currently being processed for the given queue.
func (r *RDB) ListInProgress(pgn Pagination) ([]*base.TaskMessage, error) { func (r *RDB) ListActive(qname string, pgn Pagination) ([]*base.TaskMessage, error) {
return r.listMessages(base.InProgressQueue, pgn) if !r.client.SIsMember(base.AllQueues, qname).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
return r.listMessages(base.ActiveKey(qname), pgn)
} }
// listMessages returns a list of TaskMessage in Redis list with the given key. // listMessages returns a list of TaskMessage in Redis list with the given key.
@@ -275,21 +310,30 @@ func (r *RDB) listMessages(key string, pgn Pagination) ([]*base.TaskMessage, err
} }
// ListScheduled returns all tasks that are scheduled to be processed // ListScheduled returns all tasks from the given queue that are scheduled
// in the future. // to be processed in the future.
func (r *RDB) ListScheduled(pgn Pagination) ([]base.Z, error) { func (r *RDB) ListScheduled(qname string, pgn Pagination) ([]base.Z, error) {
return r.listZSetEntries(base.ScheduledQueue, pgn) if !r.client.SIsMember(base.AllQueues, qname).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
return r.listZSetEntries(base.ScheduledKey(qname), pgn)
} }
// ListRetry returns all tasks that have failed before and will be retried // ListRetry returns all tasks from the given queue that have failed before
// in the future. // and will be retried in the future.
func (r *RDB) ListRetry(pgn Pagination) ([]base.Z, error) { func (r *RDB) ListRetry(qname string, pgn Pagination) ([]base.Z, error) {
return r.listZSetEntries(base.RetryQueue, pgn) if !r.client.SIsMember(base.AllQueues, qname).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
return r.listZSetEntries(base.RetryKey(qname), pgn)
} }
// ListDead returns all tasks that have exhausted its retry limit. // ListDead returns all tasks from the given queue that have exhausted its retry limit.
func (r *RDB) ListDead(pgn Pagination) ([]base.Z, error) { func (r *RDB) ListDead(qname string, pgn Pagination) ([]base.Z, error) {
return r.listZSetEntries(base.DeadQueue, pgn) if !r.client.SIsMember(base.AllQueues, qname).Val() {
return nil, fmt.Errorf("queue %q does not exist", qname)
}
return r.listZSetEntries(base.DeadKey(qname), pgn)
} }
// listZSetEntries returns a list of message and score pairs in Redis sorted-set // listZSetEntries returns a list of message and score pairs in Redis sorted-set
@@ -314,11 +358,11 @@ func (r *RDB) listZSetEntries(key string, pgn Pagination) ([]base.Z, error) {
return res, nil return res, nil
} }
// EnqueueDeadTask finds a task that matches the given id and score from dead queue // RunDeadTask finds a dead task that matches the given id and score from
// and enqueues it for processing. If a task that matches the id and score // the given queue and enqueues it for processing.
// does not exist, it returns ErrTaskNotFound. // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueDeadTask(id uuid.UUID, score int64) error { func (r *RDB) RunDeadTask(qname string, id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.DeadQueue, id.String(), float64(score)) n, err := r.removeAndRun(base.DeadKey(qname), base.QueueKey(qname), id.String(), float64(score))
if err != nil { if err != nil {
return err return err
} }
@@ -328,11 +372,11 @@ func (r *RDB) EnqueueDeadTask(id uuid.UUID, score int64) error {
return nil return nil
} }
// EnqueueRetryTask finds a task that matches the given id and score from retry queue // RunRetryTask finds a retry task that matches the given id and score from
// and enqueues it for processing. If a task that matches the id and score // the given queue and enqueues it for processing.
// does not exist, it returns ErrTaskNotFound. // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueRetryTask(id uuid.UUID, score int64) error { func (r *RDB) RunRetryTask(qname string, id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.RetryQueue, id.String(), float64(score)) n, err := r.removeAndRun(base.RetryKey(qname), base.QueueKey(qname), id.String(), float64(score))
if err != nil { if err != nil {
return err return err
} }
@@ -342,11 +386,11 @@ func (r *RDB) EnqueueRetryTask(id uuid.UUID, score int64) error {
return nil return nil
} }
// EnqueueScheduledTask finds a task that matches the given id and score from scheduled queue // RunScheduledTask finds a scheduled task that matches the given id and score from
// and enqueues it for processing. If a task that matches the id and score does not // the given queue and enqueues it for processing.
// exist, it returns ErrTaskNotFound. // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
func (r *RDB) EnqueueScheduledTask(id uuid.UUID, score int64) error { func (r *RDB) RunScheduledTask(qname string, id uuid.UUID, score int64) error {
n, err := r.removeAndEnqueue(base.ScheduledQueue, id.String(), float64(score)) n, err := r.removeAndRun(base.ScheduledKey(qname), base.QueueKey(qname), id.String(), float64(score))
if err != nil { if err != nil {
return err return err
} }
@@ -356,39 +400,38 @@ func (r *RDB) EnqueueScheduledTask(id uuid.UUID, score int64) error {
return nil return nil
} }
// EnqueueAllScheduledTasks enqueues all tasks from scheduled queue // RunAllScheduledTasks enqueues all scheduled tasks from the given queue
// and returns the number of tasks enqueued. // and returns the number of tasks enqueued.
func (r *RDB) EnqueueAllScheduledTasks() (int64, error) { func (r *RDB) RunAllScheduledTasks(qname string) (int64, error) {
return r.removeAndEnqueueAll(base.ScheduledQueue) return r.removeAndRunAll(base.ScheduledKey(qname), base.QueueKey(qname))
} }
// EnqueueAllRetryTasks enqueues all tasks from retry queue // RunAllRetryTasks enqueues all retry tasks from the given queue
// and returns the number of tasks enqueued. // and returns the number of tasks enqueued.
func (r *RDB) EnqueueAllRetryTasks() (int64, error) { func (r *RDB) RunAllRetryTasks(qname string) (int64, error) {
return r.removeAndEnqueueAll(base.RetryQueue) return r.removeAndRunAll(base.RetryKey(qname), base.QueueKey(qname))
} }
// EnqueueAllDeadTasks enqueues all tasks from dead queue // RunAllDeadTasks enqueues all tasks from dead queue
// and returns the number of tasks enqueued. // and returns the number of tasks enqueued.
func (r *RDB) EnqueueAllDeadTasks() (int64, error) { func (r *RDB) RunAllDeadTasks(qname string) (int64, error) {
return r.removeAndEnqueueAll(base.DeadQueue) return r.removeAndRunAll(base.DeadKey(qname), base.QueueKey(qname))
} }
var removeAndEnqueueCmd = redis.NewScript(` var removeAndRunCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1]) local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], ARGV[1], ARGV[1])
for _, msg in ipairs(msgs) do for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg) local decoded = cjson.decode(msg)
if decoded["ID"] == ARGV[2] then if decoded["ID"] == ARGV[2] then
local qkey = ARGV[3] .. decoded["Queue"] redis.call("LPUSH", KEYS[2], msg)
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg) redis.call("ZREM", KEYS[1], msg)
return 1 return 1
end end
end end
return 0`) return 0`)
func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) { func (r *RDB) removeAndRun(zset, qkey, id string, score float64) (int64, error) {
res, err := removeAndEnqueueCmd.Run(r.client, []string{zset}, score, id, base.QueuePrefix).Result() res, err := removeAndRunCmd.Run(r.client, []string{zset, qkey}, score, id).Result()
if err != nil { if err != nil {
return 0, err return 0, err
} }
@@ -399,18 +442,16 @@ func (r *RDB) removeAndEnqueue(zset, id string, score float64) (int64, error) {
return n, nil return n, nil
} }
var removeAndEnqueueAllCmd = redis.NewScript(` var removeAndRunAllCmd = redis.NewScript(`
local msgs = redis.call("ZRANGE", KEYS[1], 0, -1) local msgs = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, msg in ipairs(msgs) do for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg) redis.call("LPUSH", KEYS[2], msg)
local qkey = ARGV[1] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg) redis.call("ZREM", KEYS[1], msg)
end end
return table.getn(msgs)`) return table.getn(msgs)`)
func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) { func (r *RDB) removeAndRunAll(zset, qkey string) (int64, error) {
res, err := removeAndEnqueueAllCmd.Run(r.client, []string{zset}, base.QueuePrefix).Result() res, err := removeAndRunAllCmd.Run(r.client, []string{zset, qkey}).Result()
if err != nil { if err != nil {
return 0, err return 0, err
} }
@@ -421,11 +462,10 @@ func (r *RDB) removeAndEnqueueAll(zset string) (int64, error) {
return n, nil return n, nil
} }
// KillRetryTask finds a task that matches the given id and score from retry queue // KillRetryTask finds a retry task that matches the given id and score from the given queue
// and moves it to dead queue. If a task that matches the id and score does not exist, // and kills it. If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
// it returns ErrTaskNotFound. func (r *RDB) KillRetryTask(qname string, id uuid.UUID, score int64) error {
func (r *RDB) KillRetryTask(id uuid.UUID, score int64) error { n, err := r.removeAndKill(base.RetryKey(qname), base.DeadKey(qname), id.String(), float64(score))
n, err := r.removeAndKill(base.RetryQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
} }
@@ -435,11 +475,10 @@ func (r *RDB) KillRetryTask(id uuid.UUID, score int64) error {
return nil return nil
} }
// KillScheduledTask finds a task that matches the given id and score from scheduled queue // KillScheduledTask finds a scheduled task that matches the given id and score from the given queue
// and moves it to dead queue. If a task that matches the id and score does not exist, // and kills it. If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
// it returns ErrTaskNotFound. func (r *RDB) KillScheduledTask(qname string, id uuid.UUID, score int64) error {
func (r *RDB) KillScheduledTask(id uuid.UUID, score int64) error { n, err := r.removeAndKill(base.ScheduledKey(qname), base.DeadKey(qname), id.String(), float64(score))
n, err := r.removeAndKill(base.ScheduledQueue, id.String(), float64(score))
if err != nil { if err != nil {
return err return err
} }
@@ -449,20 +488,20 @@ func (r *RDB) KillScheduledTask(id uuid.UUID, score int64) error {
return nil return nil
} }
// KillAllRetryTasks moves all tasks from retry queue to dead queue and // KillAllRetryTasks kills all retry tasks from the given queue and
// returns the number of tasks that were moved. // returns the number of tasks that were moved.
func (r *RDB) KillAllRetryTasks() (int64, error) { func (r *RDB) KillAllRetryTasks(qname string) (int64, error) {
return r.removeAndKillAll(base.RetryQueue) return r.removeAndKillAll(base.RetryKey(qname), base.DeadKey(qname))
} }
// KillAllScheduledTasks moves all tasks from scheduled queue to dead queue and // KillAllScheduledTasks kills all scheduled tasks from the given queue and
// returns the number of tasks that were moved. // returns the number of tasks that were moved.
func (r *RDB) KillAllScheduledTasks() (int64, error) { func (r *RDB) KillAllScheduledTasks(qname string) (int64, error) {
return r.removeAndKillAll(base.ScheduledQueue) return r.removeAndKillAll(base.ScheduledKey(qname), base.DeadKey(qname))
} }
// KEYS[1] -> ZSET to move task from (e.g., retry queue) // KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead // KEYS[2] -> asynq:{<qname>}:dead
// ARGV[1] -> score of the task to kill // ARGV[1] -> score of the task to kill
// ARGV[2] -> id of the task to kill // ARGV[2] -> id of the task to kill
// ARGV[3] -> current timestamp // ARGV[3] -> current timestamp
@@ -482,11 +521,11 @@ for _, msg in ipairs(msgs) do
end end
return 0`) return 0`)
func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) { func (r *RDB) removeAndKill(src, dst, id string, score float64) (int64, error) {
now := time.Now() now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := removeAndKillCmd.Run(r.client, res, err := removeAndKillCmd.Run(r.client,
[]string{zset, base.DeadQueue}, []string{src, dst},
score, id, now.Unix(), limit, maxDeadTasks).Result() score, id, now.Unix(), limit, maxDeadTasks).Result()
if err != nil { if err != nil {
return 0, err return 0, err
@@ -499,7 +538,7 @@ func (r *RDB) removeAndKill(zset, id string, score float64) (int64, error) {
} }
// KEYS[1] -> ZSET to move task from (e.g., retry queue) // KEYS[1] -> ZSET to move task from (e.g., retry queue)
// KEYS[2] -> asynq:dead // KEYS[2] -> asynq:{<qname>}:dead
// ARGV[1] -> current timestamp // ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago) // ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in dead queue (e.g., 100) // ARGV[3] -> max number of tasks in dead queue (e.g., 100)
@@ -513,10 +552,10 @@ for _, msg in ipairs(msgs) do
end end
return table.getn(msgs)`) return table.getn(msgs)`)
func (r *RDB) removeAndKillAll(zset string) (int64, error) { func (r *RDB) removeAndKillAll(src, dst string) (int64, error) {
now := time.Now() now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
res, err := removeAndKillAllCmd.Run(r.client, []string{zset, base.DeadQueue}, res, err := removeAndKillAllCmd.Run(r.client, []string{src, dst},
now.Unix(), limit, maxDeadTasks).Result() now.Unix(), limit, maxDeadTasks).Result()
if err != nil { if err != nil {
return 0, err return 0, err
@@ -528,25 +567,22 @@ func (r *RDB) removeAndKillAll(zset string) (int64, error) {
return n, nil return n, nil
} }
// DeleteDeadTask finds a task that matches the given id and score from dead queue // DeleteDeadTask deletes a dead task that matches the given id and score from the given queue.
// and deletes it. If a task that matches the id and score does not exist, // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
// it returns ErrTaskNotFound. func (r *RDB) DeleteDeadTask(qname string, id uuid.UUID, score int64) error {
func (r *RDB) DeleteDeadTask(id uuid.UUID, score int64) error { return r.deleteTask(base.DeadKey(qname), id.String(), float64(score))
return r.deleteTask(base.DeadQueue, id.String(), float64(score))
} }
// DeleteRetryTask finds a task that matches the given id and score from retry queue // DeleteRetryTask deletes a retry task that matches the given id and score from the given queue.
// and deletes it. If a task that matches the id and score does not exist, // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
// it returns ErrTaskNotFound. func (r *RDB) DeleteRetryTask(qname string, id uuid.UUID, score int64) error {
func (r *RDB) DeleteRetryTask(id uuid.UUID, score int64) error { return r.deleteTask(base.RetryKey(qname), id.String(), float64(score))
return r.deleteTask(base.RetryQueue, id.String(), float64(score))
} }
// DeleteScheduledTask finds a task that matches the given id and score from // DeleteScheduledTask deletes a scheduled task that matches the given id and score from the given queue.
// scheduled queue and deletes it. If a task that matches the id and score // If a task that matches the id and score does not exist, it returns ErrTaskNotFound.
// does not exist, it returns ErrTaskNotFound. func (r *RDB) DeleteScheduledTask(qname string, id uuid.UUID, score int64) error {
func (r *RDB) DeleteScheduledTask(id uuid.UUID, score int64) error { return r.deleteTask(base.ScheduledKey(qname), id.String(), float64(score))
return r.deleteTask(base.ScheduledQueue, id.String(), float64(score))
} }
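As a sketch (not part of the diff), deleting a single dead task now needs the owning queue name alongside the id and score, which would normally come from listing the dead set first; rdb.ErrTaskNotFound is the sentinel mentioned in the comments above:

package main

import (
	"log"

	"github.com/google/uuid"
	"github.com/hibiken/asynq/internal/rdb"
)

func deleteOneDeadTask(r *rdb.RDB, qname string, id uuid.UUID, score int64) {
	err := r.DeleteDeadTask(qname, id, score)
	switch {
	case err == rdb.ErrTaskNotFound:
		log.Printf("no dead task with id=%s score=%d in queue %q", id, score, qname)
	case err != nil:
		log.Printf("delete failed: %v", err)
	default:
		log.Printf("deleted dead task %s from queue %q", id, qname)
	}
}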
var deleteTaskCmd = redis.NewScript(` var deleteTaskCmd = redis.NewScript(`
@@ -560,8 +596,8 @@ for _, msg in ipairs(msgs) do
end end
return 0`) return 0`)
func (r *RDB) deleteTask(zset, id string, score float64) error { func (r *RDB) deleteTask(key, id string, score float64) error {
res, err := deleteTaskCmd.Run(r.client, []string{zset}, score, id).Result() res, err := deleteTaskCmd.Run(r.client, []string{key}, score, id).Result()
if err != nil { if err != nil {
return err return err
} }
@@ -581,22 +617,22 @@ local n = redis.call("ZCARD", KEYS[1])
redis.call("DEL", KEYS[1]) redis.call("DEL", KEYS[1])
return n`) return n`)
// DeleteAllDeadTasks deletes all tasks from the dead queue // DeleteAllDeadTasks deletes all dead tasks from the given queue
// and returns the number of tasks deleted. // and returns the number of tasks deleted.
func (r *RDB) DeleteAllDeadTasks() (int64, error) { func (r *RDB) DeleteAllDeadTasks(qname string) (int64, error) {
return r.deleteAll(base.DeadQueue) return r.deleteAll(base.DeadKey(qname))
} }
// DeleteAllRetryTasks deletes all tasks from the dead queue // DeleteAllRetryTasks deletes all retry tasks from the given queue
// and returns the number of tasks deleted. // and returns the number of tasks deleted.
func (r *RDB) DeleteAllRetryTasks() (int64, error) { func (r *RDB) DeleteAllRetryTasks(qname string) (int64, error) {
return r.deleteAll(base.RetryQueue) return r.deleteAll(base.RetryKey(qname))
} }
// DeleteAllScheduledTasks deletes all tasks from the dead queue // DeleteAllScheduledTasks deletes all scheduled tasks from the given queue
// and returns the number of tasks deleted. // and returns the number of tasks deleted.
func (r *RDB) DeleteAllScheduledTasks() (int64, error) { func (r *RDB) DeleteAllScheduledTasks(qname string) (int64, error) {
return r.deleteAll(base.ScheduledQueue) return r.deleteAll(base.ScheduledKey(qname))
} }
func (r *RDB) deleteAll(key string) (int64, error) { func (r *RDB) deleteAll(key string) (int64, error) {
@@ -629,155 +665,262 @@ func (e *ErrQueueNotEmpty) Error() string {
return fmt.Sprintf("queue %q is not empty", e.qname) return fmt.Sprintf("queue %q is not empty", e.qname)
} }
// Skip checking whether queue is empty before removing. // Only check whether active queue is empty before removing.
// KEYS[1] -> asynq:{<qname>}
// KEYS[2] -> asynq:{<qname>}:active
// KEYS[3] -> asynq:{<qname>}:scheduled
// KEYS[4] -> asynq:{<qname>}:retry
// KEYS[5] -> asynq:{<qname>}:dead
// KEYS[6] -> asynq:{<qname>}:deadlines
var removeQueueForceCmd = redis.NewScript(` var removeQueueForceCmd = redis.NewScript(`
local n = redis.call("SREM", KEYS[1], KEYS[2]) local active = redis.call("LLEN", KEYS[2])
if n == 0 then if active > 0 then
return redis.error_reply("LIST NOT FOUND") return redis.error_reply("Queue has tasks active")
end end
redis.call("DEL", KEYS[1])
redis.call("DEL", KEYS[2]) redis.call("DEL", KEYS[2])
redis.call("DEL", KEYS[3])
redis.call("DEL", KEYS[4])
redis.call("DEL", KEYS[5])
redis.call("DEL", KEYS[6])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Checks whether queue is empty before removing. // Checks whether queue is empty before removing.
// KEYS[1] -> asynq:{<qname>}
// KEYS[2] -> asynq:{<qname>}:active
// KEYS[3] -> asynq:{<qname>}:scheduled
// KEYS[4] -> asynq:{<qname>}:retry
// KEYS[5] -> asynq:{<qname>}:dead
// KEYS[6] -> asynq:{<qname>}:deadlines
var removeQueueCmd = redis.NewScript(` var removeQueueCmd = redis.NewScript(`
local l = redis.call("LLEN", KEYS[2]) if l > 0 then local pending = redis.call("LLEN", KEYS[1])
return redis.error_reply("LIST NOT EMPTY") local active = redis.call("LLEN", KEYS[2])
end local scheduled = redis.call("SCARD", KEYS[3])
local n = redis.call("SREM", KEYS[1], KEYS[2]) local retry = redis.call("SCARD", KEYS[4])
if n == 0 then local dead = redis.call("SCARD", KEYS[5])
return redis.error_reply("LIST NOT FOUND") local total = pending + active + scheduled + retry + dead
if total > 0 then
return redis.error_reply("QUEUE NOT EMPTY")
end end
redis.call("DEL", KEYS[1])
redis.call("DEL", KEYS[2]) redis.call("DEL", KEYS[2])
redis.call("DEL", KEYS[3])
redis.call("DEL", KEYS[4])
redis.call("DEL", KEYS[5])
redis.call("DEL", KEYS[6])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// RemoveQueue removes the specified queue. // RemoveQueue removes the specified queue.
// //
// If force is set to true, it will remove the queue regardless // If force is set to true, it will remove the queue regardless
// of whether the queue is empty. // as long as no tasks are active for the queue.
// If force is set to false, it will only remove the queue if // If force is set to false, it will only remove the queue if
// it is empty. // the queue is empty.
func (r *RDB) RemoveQueue(qname string, force bool) error { func (r *RDB) RemoveQueue(qname string, force bool) error {
exists, err := r.client.SIsMember(base.AllQueues, qname).Result()
if err != nil {
return err
}
if !exists {
return &ErrQueueNotFound{qname}
}
var script *redis.Script var script *redis.Script
if force { if force {
script = removeQueueForceCmd script = removeQueueForceCmd
} else { } else {
script = removeQueueCmd script = removeQueueCmd
} }
err := script.Run(r.client, keys := []string{
[]string{base.AllQueues, base.QueueKey(qname)}, base.QueueKey(qname),
force).Err() base.ActiveKey(qname),
if err != nil { base.ScheduledKey(qname),
switch err.Error() { base.RetryKey(qname),
case "LIST NOT FOUND": base.DeadKey(qname),
return &ErrQueueNotFound{qname} base.DeadlinesKey(qname),
case "LIST NOT EMPTY": }
if err := script.Run(r.client, keys).Err(); err != nil {
if err.Error() == "QUEUE NOT EMPTY" {
return &ErrQueueNotEmpty{qname} return &ErrQueueNotEmpty{qname}
default: } else {
return err return err
} }
} }
return nil
return r.client.SRem(base.AllQueues, qname).Err()
} }
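A sketch (not part of the diff) of calling the reworked RemoveQueue and distinguishing the two error types it can return; the queue name passed in a real call (e.g. "low") is illustrative:

package main

import (
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func removeQueue(r *rdb.RDB, qname string, force bool) {
	err := r.RemoveQueue(qname, force)
	switch err.(type) {
	case nil:
		log.Printf("queue %q removed", qname)
	case *rdb.ErrQueueNotFound:
		log.Printf("queue %q does not exist", qname)
	case *rdb.ErrQueueNotEmpty:
		// force=true drops non-empty queues, but active tasks still block removal.
		log.Printf("queue %q still has tasks; pass force=true to drop them", qname)
	default:
		log.Printf("remove queue failed: %v", err)
	}
}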
// Note: Script also removes stale keys. // Note: Script also removes stale keys.
var listServersCmd = redis.NewScript(` var listServerKeysCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1]) local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf") local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local s = redis.call("GET", key)
if s then
table.insert(res, s)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1) redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`) return keys`)
// ListServers returns the list of server info. // ListServers returns the list of server info.
func (r *RDB) ListServers() ([]*base.ServerInfo, error) { func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
res, err := listServersCmd.Run(r.client, now := time.Now()
[]string{base.AllServers}, time.Now().UTC().Unix()).Result() res, err := listServerKeysCmd.Run(r.client, []string{base.AllServers}, now.Unix()).Result()
if err != nil { if err != nil {
return nil, err return nil, err
} }
data, err := cast.ToStringSliceE(res) keys, err := cast.ToStringSliceE(res)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var servers []*base.ServerInfo var servers []*base.ServerInfo
for _, s := range data { for _, key := range keys {
var info base.ServerInfo data, err := r.client.Get(key).Result()
err := json.Unmarshal([]byte(s), &info)
if err != nil { if err != nil {
continue // skip bad data continue // skip bad data
} }
var info base.ServerInfo
if err := json.Unmarshal([]byte(data), &info); err != nil {
continue // skip bad data
}
servers = append(servers, &info) servers = append(servers, &info)
} }
return servers, nil return servers, nil
} }
// Note: Script also removes stale keys. // Note: Script also removes stale keys.
var listWorkersCmd = redis.NewScript(` var listWorkerKeysCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1]) local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf") local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
local workers = redis.call("HVALS", key)
for _, w in ipairs(workers) do
table.insert(res, w)
end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1) redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`) return keys`)
// ListWorkers returns the list of worker stats. // ListWorkers returns the list of worker stats.
func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) { func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
res, err := listWorkersCmd.Run(r.client, []string{base.AllWorkers}, time.Now().UTC().Unix()).Result() now := time.Now()
res, err := listWorkerKeysCmd.Run(r.client, []string{base.AllWorkers}, now.Unix()).Result()
if err != nil { if err != nil {
return nil, err return nil, err
} }
data, err := cast.ToStringSliceE(res) keys, err := cast.ToStringSliceE(res)
if err != nil { if err != nil {
return nil, err return nil, err
} }
var workers []*base.WorkerInfo var workers []*base.WorkerInfo
for _, s := range data { for _, key := range keys {
var w base.WorkerInfo data, err := r.client.HVals(key).Result()
err := json.Unmarshal([]byte(s), &w)
if err != nil { if err != nil {
continue // skip bad data continue // skip bad data
} }
workers = append(workers, &w) for _, s := range data {
var w base.WorkerInfo
if err := json.Unmarshal([]byte(s), &w); err != nil {
continue // skip bad data
}
workers = append(workers, &w)
}
} }
return workers, nil return workers, nil
} }
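A sketch (not part of the diff) of reading the server and worker snapshots back; only fields that appear elsewhere in this diff (Host, PID, ServerID) are referenced:

package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func printClusterState(r *rdb.RDB) {
	servers, err := r.ListServers()
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range servers {
		fmt.Printf("server %s on host %s (pid %d)\n", s.ServerID, s.Host, s.PID)
	}

	workers, err := r.ListWorkers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d workers currently running tasks\n", len(workers))
}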
// KEYS[1] -> asynq:paused // Note: Script also removes stale keys.
// ARGV[1] -> asynq:queues:<qname> - queue to pause var listSchedulerKeysCmd = redis.NewScript(`
var pauseCmd = redis.NewScript(` local now = tonumber(ARGV[1])
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1]) local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
if ismem == 1 then redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return redis.error_reply("queue is already paused") return keys`)
end
return redis.call("SADD", KEYS[1], ARGV[1])`) // ListSchedulerEntries returns the list of scheduler entries.
func (r *RDB) ListSchedulerEntries() ([]*base.SchedulerEntry, error) {
now := time.Now()
res, err := listSchedulerKeysCmd.Run(r.client, []string{base.AllSchedulers}, now.Unix()).Result()
if err != nil {
return nil, err
}
keys, err := cast.ToStringSliceE(res)
if err != nil {
return nil, err
}
var entries []*base.SchedulerEntry
for _, key := range keys {
data, err := r.client.LRange(key, 0, -1).Result()
if err != nil {
continue // skip bad data
}
for _, s := range data {
var e base.SchedulerEntry
if err := json.Unmarshal([]byte(s), &e); err != nil {
continue // skip bad data
}
entries = append(entries, &e)
}
}
return entries, nil
}
// ListSchedulerEnqueueEvents returns the list of scheduler enqueue events.
func (r *RDB) ListSchedulerEnqueueEvents(entryID string) ([]*base.SchedulerEnqueueEvent, error) {
key := base.SchedulerHistoryKey(entryID)
zs, err := r.client.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return nil, err
}
var events []*base.SchedulerEnqueueEvent
for _, z := range zs {
data, err := cast.ToStringE(z.Member)
if err != nil {
return nil, err
}
var e base.SchedulerEnqueueEvent
if err := json.Unmarshal([]byte(data), &e); err != nil {
return nil, err
}
events = append(events, &e)
}
return events, nil
}
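A sketch (not part of the diff) of walking scheduler entries and their enqueue history; the ID field on base.SchedulerEntry is an assumption made for illustration:

package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func printSchedulerHistory(r *rdb.RDB) {
	entries, err := r.ListSchedulerEntries()
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// e.ID (assumed field) identifies the entry whose history we fetch.
		events, err := r.ListSchedulerEnqueueEvents(e.ID)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("entry %s: %d recorded enqueue events\n", e.ID, len(events))
	}
}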
// Pause pauses processing of tasks from the given queue. // Pause pauses processing of tasks from the given queue.
func (r *RDB) Pause(qname string) error { func (r *RDB) Pause(qname string) error {
qkey := base.QueueKey(qname) key := base.PausedKey(qname)
return pauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err() ok, err := r.client.SetNX(key, time.Now().Unix(), 0).Result()
if err != nil {
return err
}
if !ok {
return fmt.Errorf("queue %q is already paused", qname)
}
return nil
} }
// KEYS[1] -> asynq:paused
// ARGV[1] -> asynq:queues:<qname> - queue to unpause
var unpauseCmd = redis.NewScript(`
local ismem = redis.call("SISMEMBER", KEYS[1], ARGV[1])
if ismem == 0 then
return redis.error_reply("queue is not paused")
end
return redis.call("SREM", KEYS[1], ARGV[1])`)
// Unpause resumes processing of tasks from the given queue. // Unpause resumes processing of tasks from the given queue.
func (r *RDB) Unpause(qname string) error { func (r *RDB) Unpause(qname string) error {
qkey := base.QueueKey(qname) key := base.PausedKey(qname)
return unpauseCmd.Run(r.client, []string{base.PausedQueues}, qkey).Err() deleted, err := r.client.Del(key).Result()
if err != nil {
return err
}
if deleted == 0 {
return fmt.Errorf("queue %q is not paused", qname)
}
return nil
}
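Pause now writes a per-queue key with SETNX instead of adding to one shared set, so pausing an already-paused queue is surfaced as an error; a short usage sketch (not part of the diff):

package main

import (
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func pauseForMaintenance(r *rdb.RDB, qname string) {
	if err := r.Pause(qname); err != nil {
		log.Printf("pause: %v", err) // e.g. queue "default" is already paused
		return
	}
	// ... perform maintenance while consumers skip this queue ...
	if err := r.Unpause(qname); err != nil {
		log.Printf("unpause: %v", err)
	}
}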
// ClusterKeySlot returns an integer identifying the hash slot the given queue hashes to.
func (r *RDB) ClusterKeySlot(qname string) (int64, error) {
key := base.QueueKey(qname)
return r.client.ClusterKeySlot(key).Result()
}
// ClusterNodes returns a list of nodes the given queue belongs to.
func (r *RDB) ClusterNodes(qname string) ([]redis.ClusterNode, error) {
keyslot, err := r.ClusterKeySlot(qname)
if err != nil {
return nil, err
}
clusterSlots, err := r.client.ClusterSlots().Result()
if err != nil {
return nil, err
}
for _, slotRange := range clusterSlots {
if int64(slotRange.Start) <= keyslot && keyslot <= int64(slotRange.End) {
return slotRange.Nodes, nil
}
}
return nil, fmt.Errorf("nodes not found")
} }
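A sketch (not part of the diff) of the new cluster helpers; meaningful results assume the RDB was constructed with a Redis Cluster client:

package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func printQueueLocation(r *rdb.RDB, qname string) {
	slot, err := r.ClusterKeySlot(qname)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := r.ClusterNodes(qname)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("queue %q hashes to slot %d\n", qname, slot)
	for _, n := range nodes {
		fmt.Printf("  node %s at %s\n", n.Id, n.Addr)
	}
}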

File diff suppressed because it is too large


@@ -32,11 +32,11 @@ const statsTTL = 90 * 24 * time.Hour // 90 days
// RDB is a client interface to query and mutate task queues. // RDB is a client interface to query and mutate task queues.
type RDB struct { type RDB struct {
client *redis.Client client redis.UniversalClient
} }
// NewRDB returns a new instance of RDB. // NewRDB returns a new instance of RDB.
func NewRDB(client *redis.Client) *RDB { func NewRDB(client redis.UniversalClient) *RDB {
return &RDB{client} return &RDB{client}
} }
@@ -50,27 +50,21 @@ func (r *RDB) Ping() error {
return r.client.Ping().Err() return r.client.Ping().Err()
} }
// KEYS[1] -> asynq:queues:<qname>
// KEYS[2] -> asynq:queues
// ARGV[1] -> task message data
var enqueueCmd = redis.NewScript(`
redis.call("LPUSH", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1`)
// Enqueue inserts the given task to the tail of the queue. // Enqueue inserts the given task to the tail of the queue.
func (r *RDB) Enqueue(msg *base.TaskMessage) error { func (r *RDB) Enqueue(msg *base.TaskMessage) error {
encoded, err := base.EncodeMessage(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
key := base.QueueKey(msg.Queue) key := base.QueueKey(msg.Queue)
return enqueueCmd.Run(r.client, []string{key, base.AllQueues}, encoded).Err() return r.client.LPush(key, encoded).Err()
} }
// KEYS[1] -> unique key in the form <type>:<payload>:<qname> // KEYS[1] -> unique key
// KEYS[2] -> asynq:queues:<qname> // KEYS[2] -> asynq:{<qname>}
// KEYS[2] -> asynq:queues
// ARGV[1] -> task ID // ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL // ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> task message data // ARGV[3] -> task message data
@@ -80,7 +74,6 @@ if not ok then
return 0 return 0
end end
redis.call("LPUSH", KEYS[2], ARGV[3]) redis.call("LPUSH", KEYS[2], ARGV[3])
redis.call("SADD", KEYS[3], KEYS[2])
return 1 return 1
`) `)
@@ -91,9 +84,11 @@ func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
if err != nil { if err != nil {
return err return err
} }
key := base.QueueKey(msg.Queue) if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
res, err := enqueueUniqueCmd.Run(r.client, res, err := enqueueUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, key, base.AllQueues}, []string{msg.UniqueKey, base.QueueKey(msg.Queue)},
msg.ID.String(), int(ttl.Seconds()), encoded).Result() msg.ID.String(), int(ttl.Seconds()), encoded).Result()
if err != nil { if err != nil {
return err return err
@@ -113,14 +108,7 @@ func (r *RDB) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
// Dequeue skips a queue if the queue is paused. // Dequeue skips a queue if the queue is paused.
// If all queues are empty, ErrNoProcessableTask error is returned. // If all queues are empty, ErrNoProcessableTask error is returned.
func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Time, err error) { func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Time, err error) {
var qkeys []interface{} data, d, err := r.dequeue(qnames...)
for _, q := range qnames {
qkeys = append(qkeys, base.QueueKey(q))
}
data, d, err := r.dequeue(qkeys...)
if err == redis.Nil {
return nil, time.Time{}, ErrNoProcessableTask
}
if err != nil { if err != nil {
return nil, time.Time{}, err return nil, time.Time{}, err
} }
@@ -130,75 +118,76 @@ func (r *RDB) Dequeue(qnames ...string) (msg *base.TaskMessage, deadline time.Ti
return msg, time.Unix(d, 0), nil return msg, time.Unix(d, 0), nil
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}
// KEYS[2] -> asynq:paused // KEYS[2] -> asynq:{<qname>}:paused
// KEYS[3] -> asynq:deadlines // KEYS[3] -> asynq:{<qname>}:active
// KEYS[4] -> asynq:{<qname>}:deadlines
// ARGV[1] -> current time in Unix time // ARGV[1] -> current time in Unix time
// ARGV[2:] -> List of queues to query in order
// //
// dequeueCmd checks whether a queue is paused first, before // dequeueCmd checks whether a queue is paused first, before
// calling RPOPLPUSH to pop a task from the queue. // calling RPOPLPUSH to pop a task from the queue.
// It computes the task deadline by inspecting Timeout and Deadline fields, // It computes the task deadline by inspecting Timeout and Deadline fields,
// and inserts the task with deadlines set. // and inserts the task with deadlines set.
var dequeueCmd = redis.NewScript(` var dequeueCmd = redis.NewScript(`
for i = 2, table.getn(ARGV) do if redis.call("EXISTS", KEYS[2]) == 0 then
local qkey = ARGV[i] local msg = redis.call("RPOPLPUSH", KEYS[1], KEYS[3])
if redis.call("SISMEMBER", KEYS[2], qkey) == 0 then if msg then
local msg = redis.call("RPOPLPUSH", qkey, KEYS[1]) local decoded = cjson.decode(msg)
if msg then local timeout = decoded["Timeout"]
local decoded = cjson.decode(msg) local deadline = decoded["Deadline"]
local timeout = decoded["Timeout"] local score
local deadline = decoded["Deadline"] if timeout ~= 0 and deadline ~= 0 then
local score score = math.min(ARGV[1]+timeout, deadline)
if timeout ~= 0 and deadline ~= 0 then elseif timeout ~= 0 then
score = math.min(ARGV[1]+timeout, deadline) score = ARGV[1] + timeout
elseif timeout ~= 0 then elseif deadline ~= 0 then
score = ARGV[1] + timeout score = deadline
elseif deadline ~= 0 then else
score = deadline return redis.error_reply("asynq internal error: both timeout and deadline are not set")
else
return redis.error_reply("asynq internal error: both timeout and deadline are not set")
end
redis.call("ZADD", KEYS[3], score, msg)
return {msg, score}
end end
redis.call("ZADD", KEYS[4], score, msg)
return {msg, score}
end end
end end
return nil`) return nil`)
func (r *RDB) dequeue(qkeys ...interface{}) (msgjson string, deadline int64, err error) { func (r *RDB) dequeue(qnames ...string) (msgjson string, deadline int64, err error) {
var args []interface{} for _, qname := range qnames {
args = append(args, time.Now().Unix()) keys := []string{
args = append(args, qkeys...) base.QueueKey(qname),
res, err := dequeueCmd.Run(r.client, base.PausedKey(qname),
[]string{base.InProgressQueue, base.PausedQueues, base.KeyDeadlines}, args...).Result() base.ActiveKey(qname),
if err != nil { base.DeadlinesKey(qname),
return "", 0, err }
res, err := dequeueCmd.Run(r.client, keys, time.Now().Unix()).Result()
if err == redis.Nil {
continue
} else if err != nil {
return "", 0, err
}
data, err := cast.ToSliceE(res)
if err != nil {
return "", 0, err
}
if len(data) != 2 {
return "", 0, fmt.Errorf("asynq: internal error: dequeue command returned %d values", len(data))
}
if msgjson, err = cast.ToStringE(data[0]); err != nil {
return "", 0, err
}
if deadline, err = cast.ToInt64E(data[1]); err != nil {
return "", 0, err
}
return msgjson, deadline, nil
} }
data, err := cast.ToSliceE(res) return "", 0, ErrNoProcessableTask
if err != nil {
return "", 0, err
}
if len(data) != 2 {
return "", 0, fmt.Errorf("asynq: internal error: dequeue command returned %d values", len(data))
}
if msgjson, err = cast.ToStringE(data[0]); err != nil {
return "", 0, err
}
if deadline, err = cast.ToInt64E(data[1]); err != nil {
return "", 0, err
}
return msgjson, deadline, nil
} }
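Dequeue now iterates over the given queue names and runs the script once per queue; a consumer-side sketch (not part of the diff), with the queue names as assumptions:

package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq/internal/rdb"
)

func dequeueOne(r *rdb.RDB) {
	msg, deadline, err := r.Dequeue("critical", "default", "low")
	if err == rdb.ErrNoProcessableTask {
		fmt.Println("all queues are empty or paused")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("picked up %q from queue %q; must finish by %v\n", msg.Type, msg.Queue, deadline)
}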
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:deadlines // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:processed:<yyyy-mm-dd> // KEYS[3] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[4] -> unique key in the format <type>:<payload>:<qname>
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp // ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID
// Note: LREM count ZERO means "remove all elements equal to val"
var doneCmd = redis.NewScript(` var doneCmd = redis.NewScript(`
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND") return redis.error_reply("NOT FOUND")
@@ -210,13 +199,34 @@ local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[2]) redis.call("EXPIREAT", KEYS[3], ARGV[2])
end end
if string.len(KEYS[4]) > 0 and redis.call("GET", KEYS[4]) == ARGV[3] then return redis.status_reply("OK")
`)
// KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[4] -> unique key
// ARGV[1] -> base.TaskMessage value
// ARGV[2] -> stats expiration timestamp
// ARGV[3] -> task ID
var doneUniqueCmd = redis.NewScript(`
if redis.call("LREM", KEYS[1], 0, ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
if redis.call("ZREM", KEYS[2], ARGV[1]) == 0 then
return redis.error_reply("NOT FOUND")
end
local n = redis.call("INCR", KEYS[3])
if tonumber(n) == 1 then
redis.call("EXPIREAT", KEYS[3], ARGV[2])
end
if redis.call("GET", KEYS[4]) == ARGV[3] then
redis.call("DEL", KEYS[4]) redis.call("DEL", KEYS[4])
end end
return redis.status_reply("OK") return redis.status_reply("OK")
`) `)
// Done removes the task from in-progress queue to mark the task as done. // Done removes the task from active queue to mark the task as done.
// It removes a uniqueness lock acquired by the task, if any. // It removes a uniqueness lock acquired by the task, if any.
func (r *RDB) Done(msg *base.TaskMessage) error { func (r *RDB) Done(msg *base.TaskMessage) error {
encoded, err := base.EncodeMessage(msg) encoded, err := base.EncodeMessage(msg)
@@ -224,16 +234,24 @@ func (r *RDB) Done(msg *base.TaskMessage) error {
return err return err
} }
now := time.Now() now := time.Now()
processedKey := base.ProcessedKey(now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return doneCmd.Run(r.client, keys := []string{
[]string{base.InProgressQueue, base.KeyDeadlines, processedKey, msg.UniqueKey}, base.ActiveKey(msg.Queue),
encoded, expireAt.Unix(), msg.ID.String()).Err() base.DeadlinesKey(msg.Queue),
base.ProcessedKey(msg.Queue, now),
}
args := []interface{}{encoded, expireAt.Unix()}
if len(msg.UniqueKey) > 0 {
keys = append(keys, msg.UniqueKey)
args = append(args, msg.ID.String())
return doneUniqueCmd.Run(r.client, keys, args...).Err()
}
return doneCmd.Run(r.client, keys, args...).Err()
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:deadlines // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:queues:<qname> // KEYS[3] -> asynq:{<qname>}
// ARGV[1] -> base.TaskMessage value // ARGV[1] -> base.TaskMessage value
// Note: Use RPUSH to push to the head of the queue. // Note: Use RPUSH to push to the head of the queue.
var requeueCmd = redis.NewScript(` var requeueCmd = redis.NewScript(`
@@ -246,56 +264,42 @@ end
redis.call("RPUSH", KEYS[3], ARGV[1]) redis.call("RPUSH", KEYS[3], ARGV[1])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Requeue moves the task from in-progress queue to the specified queue. // Requeue moves the task from active queue to the specified queue.
func (r *RDB) Requeue(msg *base.TaskMessage) error { func (r *RDB) Requeue(msg *base.TaskMessage) error {
encoded, err := base.EncodeMessage(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
return requeueCmd.Run(r.client, return requeueCmd.Run(r.client,
[]string{base.InProgressQueue, base.KeyDeadlines, base.QueueKey(msg.Queue)}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.QueueKey(msg.Queue)},
encoded).Err() encoded).Err()
} }
// KEYS[1] -> asynq:scheduled
// KEYS[2] -> asynq:queues
// ARGV[1] -> score (process_at timestamp)
// ARGV[2] -> task message
// ARGV[3] -> queue key
var scheduleCmd = redis.NewScript(`
redis.call("ZADD", KEYS[1], ARGV[1], ARGV[2])
redis.call("SADD", KEYS[2], ARGV[3])
return 1
`)
// Schedule adds the task to the backlog queue to be processed in the future. // Schedule adds the task to the backlog queue to be processed in the future.
func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error { func (r *RDB) Schedule(msg *base.TaskMessage, processAt time.Time) error {
encoded, err := base.EncodeMessage(msg) encoded, err := base.EncodeMessage(msg)
if err != nil { if err != nil {
return err return err
} }
qkey := base.QueueKey(msg.Queue) if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
score := float64(processAt.Unix()) score := float64(processAt.Unix())
return scheduleCmd.Run(r.client, return r.client.ZAdd(base.ScheduledKey(msg.Queue), &redis.Z{Score: score, Member: encoded}).Err()
[]string{base.ScheduledQueue, base.AllQueues},
score, encoded, qkey).Err()
} }
// KEYS[1] -> unique key in the format <type>:<payload>:<qname> // KEYS[1] -> unique key
// KEYS[2] -> asynq:scheduled // KEYS[2] -> asynq:{<qname>}:scheduled
// KEYS[3] -> asynq:queues
// ARGV[1] -> task ID // ARGV[1] -> task ID
// ARGV[2] -> uniqueness lock TTL // ARGV[2] -> uniqueness lock TTL
// ARGV[3] -> score (process_at timestamp) // ARGV[3] -> score (process_at timestamp)
// ARGV[4] -> task message // ARGV[4] -> task message
// ARGV[5] -> queue key
var scheduleUniqueCmd = redis.NewScript(` var scheduleUniqueCmd = redis.NewScript(`
local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2]) local ok = redis.call("SET", KEYS[1], ARGV[1], "NX", "EX", ARGV[2])
if not ok then if not ok then
return 0 return 0
end end
redis.call("ZADD", KEYS[2], ARGV[3], ARGV[4]) redis.call("ZADD", KEYS[2], ARGV[3], ARGV[4])
redis.call("SADD", KEYS[3], ARGV[5])
return 1 return 1
`) `)
@@ -306,11 +310,13 @@ func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl tim
if err != nil { if err != nil {
return err return err
} }
qkey := base.QueueKey(msg.Queue) if err := r.client.SAdd(base.AllQueues, msg.Queue).Err(); err != nil {
return err
}
score := float64(processAt.Unix()) score := float64(processAt.Unix())
res, err := scheduleUniqueCmd.Run(r.client, res, err := scheduleUniqueCmd.Run(r.client,
[]string{msg.UniqueKey, base.ScheduledQueue, base.AllQueues}, []string{msg.UniqueKey, base.ScheduledKey(msg.Queue)},
msg.ID.String(), int(ttl.Seconds()), score, encoded, qkey).Result() msg.ID.String(), int(ttl.Seconds()), score, encoded).Result()
if err != nil { if err != nil {
return err return err
} }
@@ -324,12 +330,12 @@ func (r *RDB) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl tim
return nil return nil
} }
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:deadlines // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:retry // KEYS[3] -> asynq:{<qname>}:retry
// KEYS[4] -> asynq:processed:<yyyy-mm-dd> // KEYS[4] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq:failure:<yyyy-mm-dd> // KEYS[5] -> asynq:{<qname>}:failed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // ARGV[1] -> base.TaskMessage value to remove from base.ActiveQueue queue
// ARGV[2] -> base.TaskMessage value to add to Retry queue // ARGV[2] -> base.TaskMessage value to add to Retry queue
// ARGV[3] -> retry_at UNIX timestamp // ARGV[3] -> retry_at UNIX timestamp
// ARGV[4] -> stats expiration timestamp // ARGV[4] -> stats expiration timestamp
@@ -351,7 +357,7 @@ if tonumber(m) == 1 then
end end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Retry moves the task from in-progress to retry queue, incrementing retry count // Retry moves the task from active to retry queue, incrementing retry count
// and assigning error message to the task message. // and assigning error message to the task message.
func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error { func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
msgToRemove, err := base.EncodeMessage(msg) msgToRemove, err := base.EncodeMessage(msg)
@@ -366,11 +372,11 @@ func (r *RDB) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) e
return err return err
} }
now := time.Now() now := time.Now()
processedKey := base.ProcessedKey(now) processedKey := base.ProcessedKey(msg.Queue, now)
failureKey := base.FailureKey(now) failedKey := base.FailedKey(msg.Queue, now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return retryCmd.Run(r.client, return retryCmd.Run(r.client,
[]string{base.InProgressQueue, base.KeyDeadlines, base.RetryQueue, processedKey, failureKey}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.RetryKey(msg.Queue), processedKey, failedKey},
msgToRemove, msgToAdd, processAt.Unix(), expireAt.Unix()).Err() msgToRemove, msgToAdd, processAt.Unix(), expireAt.Unix()).Err()
} }
@@ -379,12 +385,12 @@ const (
deadExpirationInDays = 90 deadExpirationInDays = 90
) )
// KEYS[1] -> asynq:in_progress // KEYS[1] -> asynq:{<qname>}:active
// KEYS[2] -> asynq:deadlines // KEYS[2] -> asynq:{<qname>}:deadlines
// KEYS[3] -> asynq:dead // KEYS[3] -> asynq:{<qname>}:dead
// KEYS[4] -> asynq:processed:<yyyy-mm-dd> // KEYS[4] -> asynq:{<qname>}:processed:<yyyy-mm-dd>
// KEYS[5] -> asynq.failure:<yyyy-mm-dd> // KEYS[5] -> asynq:{<qname>}:failed:<yyyy-mm-dd>
// ARGV[1] -> base.TaskMessage value to remove from base.InProgressQueue queue // ARGV[1] -> base.TaskMessage value to remove from base.ActiveQueue queue
// ARGV[2] -> base.TaskMessage value to add to Dead queue // ARGV[2] -> base.TaskMessage value to add to Dead queue
// ARGV[3] -> died_at UNIX timestamp // ARGV[3] -> died_at UNIX timestamp
// ARGV[4] -> cutoff timestamp (e.g., 90 days ago) // ARGV[4] -> cutoff timestamp (e.g., 90 days ago)
@@ -410,7 +416,7 @@ if tonumber(m) == 1 then
end end
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// Kill sends the task to "dead" queue from in-progress queue, assigning // Kill sends the task to "dead" queue from active queue, assigning
// the error message to the task. // the error message to the task.
// It also trims the set by timestamp and set size. // It also trims the set by timestamp and set size.
func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error { func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
@@ -426,96 +432,101 @@ func (r *RDB) Kill(msg *base.TaskMessage, errMsg string) error {
} }
now := time.Now() now := time.Now()
limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago limit := now.AddDate(0, 0, -deadExpirationInDays).Unix() // 90 days ago
processedKey := base.ProcessedKey(now) processedKey := base.ProcessedKey(msg.Queue, now)
failureKey := base.FailureKey(now) failedKey := base.FailedKey(msg.Queue, now)
expireAt := now.Add(statsTTL) expireAt := now.Add(statsTTL)
return killCmd.Run(r.client, return killCmd.Run(r.client,
[]string{base.InProgressQueue, base.KeyDeadlines, base.DeadQueue, processedKey, failureKey}, []string{base.ActiveKey(msg.Queue), base.DeadlinesKey(msg.Queue), base.DeadKey(msg.Queue), processedKey, failedKey},
msgToRemove, msgToAdd, now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err() msgToRemove, msgToAdd, now.Unix(), limit, maxDeadTasks, expireAt.Unix()).Err()
} }
// CheckAndEnqueue checks for all scheduled/retry tasks and enqueues any tasks that // CheckAndEnqueue checks for scheduled/retry tasks for the given queues
// are ready to be processed. // and enqueues any tasks that are ready to be processed.
func (r *RDB) CheckAndEnqueue() (err error) { func (r *RDB) CheckAndEnqueue(qnames ...string) error {
delayed := []string{base.ScheduledQueue, base.RetryQueue} for _, qname := range qnames {
for _, zset := range delayed { if err := r.forwardAll(base.ScheduledKey(qname), base.QueueKey(qname)); err != nil {
n := 1 return err
for n != 0 { }
n, err = r.forward(zset) if err := r.forwardAll(base.RetryKey(qname), base.QueueKey(qname)); err != nil {
if err != nil { return err
return err
}
} }
} }
return nil return nil
} }
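CheckAndEnqueue is now driven per queue; a forwarder-style loop sketch (not part of the diff), with the interval and queue names as assumptions:

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq/internal/rdb"
)

func runForwarder(r *rdb.RDB, done <-chan struct{}) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			// Move due scheduled and retry tasks back onto their queues.
			if err := r.CheckAndEnqueue("critical", "default", "low"); err != nil {
				log.Printf("could not forward tasks: %v", err)
			}
		}
	}
}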
// KEYS[1] -> source queue (e.g. scheduled or retry queue) // KEYS[1] -> source queue (e.g. asynq:{<qname>}:scheduled or asynq:{<qname>}:retry)
// KEYS[2] -> destination queue (e.g. asynq:{<qname>})
// ARGV[1] -> current unix time // ARGV[1] -> current unix time
// ARGV[2] -> queue prefix
// Note: Script moves tasks up to 100 at a time to keep the runtime of script short. // Note: Script moves tasks up to 100 at a time to keep the runtime of script short.
var forwardCmd = redis.NewScript(` var forwardCmd = redis.NewScript(`
local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1], "LIMIT", 0, 100) local msgs = redis.call("ZRANGEBYSCORE", KEYS[1], "-inf", ARGV[1], "LIMIT", 0, 100)
for _, msg in ipairs(msgs) do for _, msg in ipairs(msgs) do
local decoded = cjson.decode(msg) redis.call("LPUSH", KEYS[2], msg)
local qkey = ARGV[2] .. decoded["Queue"]
redis.call("LPUSH", qkey, msg)
redis.call("ZREM", KEYS[1], msg) redis.call("ZREM", KEYS[1], msg)
end end
return table.getn(msgs)`) return table.getn(msgs)`)
// forward moves tasks with a score less than the current unix time // forward moves tasks with a score less than the current unix time
// from the src zset. It returns the number of tasks moved. // from the src zset to the dst list. It returns the number of tasks moved.
func (r *RDB) forward(src string) (int, error) { func (r *RDB) forward(src, dst string) (int, error) {
now := float64(time.Now().Unix()) now := float64(time.Now().Unix())
res, err := forwardCmd.Run(r.client, res, err := forwardCmd.Run(r.client, []string{src, dst}, now).Result()
[]string{src}, now, base.QueuePrefix).Result()
if err != nil { if err != nil {
return 0, err return 0, err
} }
return cast.ToInt(res), nil return cast.ToInt(res), nil
} }
// ListDeadlineExceeded returns a list of task messages that have exceeded the given deadline. // forwardAll moves tasks with a score less than the current unix time from the src zset,
func (r *RDB) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) { // until there's no more tasks.
func (r *RDB) forwardAll(src, dst string) (err error) {
n := 1
for n != 0 {
n, err = r.forward(src, dst)
if err != nil {
return err
}
}
return nil
}
// ListDeadlineExceeded returns a list of task messages that have exceeded the deadline from the given queues.
func (r *RDB) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
var msgs []*base.TaskMessage var msgs []*base.TaskMessage
opt := &redis.ZRangeBy{ opt := &redis.ZRangeBy{
Min: "-inf", Min: "-inf",
Max: strconv.FormatInt(deadline.Unix(), 10), Max: strconv.FormatInt(deadline.Unix(), 10),
} }
res, err := r.client.ZRangeByScore(base.KeyDeadlines, opt).Result() for _, qname := range qnames {
if err != nil { res, err := r.client.ZRangeByScore(base.DeadlinesKey(qname), opt).Result()
return nil, err
}
for _, s := range res {
msg, err := base.DecodeMessage(s)
if err != nil { if err != nil {
return nil, err return nil, err
} }
msgs = append(msgs, msg) for _, s := range res {
msg, err := base.DecodeMessage(s)
if err != nil {
return nil, err
}
msgs = append(msgs, msg)
}
} }
return msgs, nil return msgs, nil
} }
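A recoverer-style sketch (not part of the diff) over the per-queue deadlines sets: tasks whose deadline has passed are put back for an immediate retry; the error string is illustrative:

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq/internal/rdb"
)

func recoverExpiredTasks(r *rdb.RDB, qnames ...string) {
	msgs, err := r.ListDeadlineExceeded(time.Now(), qnames...)
	if err != nil {
		log.Printf("could not list deadline-exceeded tasks: %v", err)
		return
	}
	for _, msg := range msgs {
		if err := r.Retry(msg, time.Now(), "deadline exceeded"); err != nil {
			log.Printf("could not retry task %s: %v", msg.ID, err)
		}
	}
}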
// KEYS[1] -> asynq:servers:<host:pid:sid> // KEYS[1] -> asynq:servers:{<host:pid:sid>}
// KEYS[2] -> asynq:servers // KEYS[2] -> asynq:workers:{<host:pid:sid>}
// KEYS[3] -> asynq:workers<host:pid:sid> // ARGV[1] -> TTL in seconds
// KEYS[4] -> asynq:workers // ARGV[2] -> server info
// ARGV[1] -> expiration time // ARGV[3:] -> alternate key-value pair of (worker id, worker data)
// ARGV[2] -> TTL in seconds
// ARGV[3] -> server info
// ARGV[4:] -> alternate key-value pair of (worker id, worker data)
// Note: Add key to ZSET with expiration time as score. // Note: Add key to ZSET with expiration time as score.
// ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996 // ref: https://github.com/antirez/redis/issues/135#issuecomment-2361996
var writeServerStateCmd = redis.NewScript(` var writeServerStateCmd = redis.NewScript(`
redis.call("SETEX", KEYS[1], ARGV[2], ARGV[3]) redis.call("SETEX", KEYS[1], ARGV[1], ARGV[2])
redis.call("ZADD", KEYS[2], ARGV[1], KEYS[1]) redis.call("DEL", KEYS[2])
redis.call("DEL", KEYS[3]) for i = 3, table.getn(ARGV)-1, 2 do
for i = 4, table.getn(ARGV)-1, 2 do redis.call("HSET", KEYS[2], ARGV[i], ARGV[i+1])
redis.call("HSET", KEYS[3], ARGV[i], ARGV[i+1])
end end
redis.call("EXPIRE", KEYS[3], ARGV[2]) redis.call("EXPIRE", KEYS[2], ARGV[1])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// WriteServerState writes server state data to redis with expiration set to the value ttl. // WriteServerState writes server state data to redis with expiration set to the value ttl.
@@ -525,7 +536,7 @@ func (r *RDB) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo
return err return err
} }
exp := time.Now().Add(ttl).UTC() exp := time.Now().Add(ttl).UTC()
args := []interface{}{float64(exp.Unix()), ttl.Seconds(), bytes} // args to the lua script args := []interface{}{ttl.Seconds(), bytes} // args to the lua script
for _, w := range workers { for _, w := range workers {
bytes, err := json.Marshal(w) bytes, err := json.Marshal(w)
if err != nil { if err != nil {
@@ -535,28 +546,72 @@ func (r *RDB) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo
} }
skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID) skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
wkey := base.WorkersKey(info.Host, info.PID, info.ServerID) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
return writeServerStateCmd.Run(r.client, if err := r.client.ZAdd(base.AllServers, &redis.Z{Score: float64(exp.Unix()), Member: skey}).Err(); err != nil {
[]string{skey, base.AllServers, wkey, base.AllWorkers}, return err
args...).Err() }
if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Score: float64(exp.Unix()), Member: wkey}).Err(); err != nil {
return err
}
return writeServerStateCmd.Run(r.client, []string{skey, wkey}, args...).Err()
} }
// KEYS[1] -> asynq:servers // KEYS[1] -> asynq:servers:{<host:pid:sid>}
// KEYS[2] -> asynq:servers:<host:pid:sid> // KEYS[2] -> asynq:workers:{<host:pid:sid>}
// KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers<host:pid:sid>
var clearServerStateCmd = redis.NewScript(` var clearServerStateCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2]) redis.call("DEL", KEYS[1])
redis.call("DEL", KEYS[2]) redis.call("DEL", KEYS[2])
redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`) return redis.status_reply("OK")`)
// ClearServerState deletes server state data from redis. // ClearServerState deletes server state data from redis.
func (r *RDB) ClearServerState(host string, pid int, serverID string) error { func (r *RDB) ClearServerState(host string, pid int, serverID string) error {
skey := base.ServerInfoKey(host, pid, serverID) skey := base.ServerInfoKey(host, pid, serverID)
wkey := base.WorkersKey(host, pid, serverID) wkey := base.WorkersKey(host, pid, serverID)
return clearServerStateCmd.Run(r.client, if err := r.client.ZRem(base.AllServers, skey).Err(); err != nil {
[]string{base.AllServers, skey, base.AllWorkers, wkey}).Err() return err
}
if err := r.client.ZRem(base.AllWorkers, wkey).Err(); err != nil {
return err
}
return clearServerStateCmd.Run(r.client, []string{skey, wkey}).Err()
}
// KEYS[1] -> asynq:schedulers:{<schedulerID>}
// ARGV[1] -> TTL in seconds
// ARGV[2:] -> scheduler entries
var writeSchedulerEntriesCmd = redis.NewScript(`
redis.call("DEL", KEYS[1])
for i = 2, #ARGV do
redis.call("LPUSH", KEYS[1], ARGV[i])
end
redis.call("EXPIRE", KEYS[1], ARGV[1])
return redis.status_reply("OK")`)
// WriteSchedulerEntries writes scheduler entries data to redis with expiration set to the value ttl.
func (r *RDB) WriteSchedulerEntries(schedulerID string, entries []*base.SchedulerEntry, ttl time.Duration) error {
args := []interface{}{ttl.Seconds()}
for _, e := range entries {
bytes, err := json.Marshal(e)
if err != nil {
continue // skip bad data
}
args = append(args, bytes)
}
exp := time.Now().Add(ttl).UTC()
key := base.SchedulerEntriesKey(schedulerID)
err := r.client.ZAdd(base.AllSchedulers, &redis.Z{Score: float64(exp.Unix()), Member: key}).Err()
if err != nil {
return err
}
return writeSchedulerEntriesCmd.Run(r.client, []string{key}, args...).Err()
}
// ClearSchedulerEntries deletes scheduler entries data from redis.
func (r *RDB) ClearSchedulerEntries(schedulerID string) error {
key := base.SchedulerEntriesKey(schedulerID)
if err := r.client.ZRem(base.AllSchedulers, key).Err(); err != nil {
return err
}
return r.client.Del(key).Err()
} }
// CancelationPubSub returns a pubsub for cancelation messages. // CancelationPubSub returns a pubsub for cancelation messages.
@@ -574,3 +629,26 @@ func (r *RDB) CancelationPubSub() (*redis.PubSub, error) {
func (r *RDB) PublishCancelation(id string) error { func (r *RDB) PublishCancelation(id string) error {
return r.client.Publish(base.CancelChannel, id).Err() return r.client.Publish(base.CancelChannel, id).Err()
} }
// KEYS[1] -> asynq:scheduler_history:<entryID>
// ARGV[1] -> enqueued_at timestamp
// ARGV[2] -> serialized SchedulerEnqueueEvent data
// ARGV[3] -> max number of events to be persisted
var recordSchedulerEnqueueEventCmd = redis.NewScript(`
redis.call("ZADD", KEYS[1], ARGV[1], ARGV[2])
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", ARGV[3])
return redis.status_reply("OK")`)
// Maximum number of enqueue events to store per entry.
const maxEvents = 10000
// RecordSchedulerEnqueueEvent records the time when the given task was enqueued.
func (r *RDB) RecordSchedulerEnqueueEvent(entryID string, event *base.SchedulerEnqueueEvent) error {
key := base.SchedulerHistoryKey(entryID)
data, err := json.Marshal(event)
if err != nil {
return err
}
return recordSchedulerEnqueueEventCmd.Run(
r.client, []string{key}, event.EnqueuedAt.Unix(), data, maxEvents).Err()
}
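A sketch (not part of the diff) of recording an enqueue event when a scheduler entry fires; EnqueuedAt is taken from the script arguments above, while the TaskID field name is an assumption:

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq/internal/base"
	"github.com/hibiken/asynq/internal/rdb"
)

func recordEnqueue(r *rdb.RDB, entryID, taskID string) {
	event := &base.SchedulerEnqueueEvent{
		TaskID:     taskID, // assumed field name
		EnqueuedAt: time.Now(),
	}
	if err := r.RecordSchedulerEnqueueEvent(entryID, event); err != nil {
		log.Printf("could not record enqueue event for entry %s: %v", entryID, err)
	}
}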

File diff suppressed because it is too large


@@ -126,22 +126,22 @@ func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
return tb.real.Kill(msg, errMsg) return tb.real.Kill(msg, errMsg)
} }
func (tb *TestBroker) CheckAndEnqueue() error { func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
tb.mu.Lock() tb.mu.Lock()
defer tb.mu.Unlock() defer tb.mu.Unlock()
if tb.sleeping { if tb.sleeping {
return errRedisDown return errRedisDown
} }
return tb.real.CheckAndEnqueue() return tb.real.CheckAndEnqueue(qnames...)
} }
func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time) ([]*base.TaskMessage, error) { func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
tb.mu.Lock() tb.mu.Lock()
defer tb.mu.Unlock() defer tb.mu.Unlock()
if tb.sleeping { if tb.sleeping {
return nil, errRedisDown return nil, errRedisDown
} }
return tb.real.ListDeadlineExceeded(deadline) return tb.real.ListDeadlineExceeded(deadline, qnames...)
} }
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error { func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {


@@ -44,6 +44,16 @@ func toInt(v interface{}) (int, error) {
} }
} }
// String returns a string representation of payload data.
func (p Payload) String() string {
return fmt.Sprint(p.data)
}
// MarshalJSON returns the JSON encoding of payload data.
func (p Payload) MarshalJSON() ([]byte, error) {
return json.Marshal(p.data)
}
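The new String and MarshalJSON methods make payloads easy to log; a handler sketch (not part of the diff) using only the public asynq API, with the log format as an assumption:

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/hibiken/asynq"
)

func handleTask(ctx context.Context, t *asynq.Task) error {
	// String gives a quick human-readable dump of the payload data.
	log.Printf("processing %q with payload %s", t.Type, t.Payload.String())

	// MarshalJSON is picked up by encoding/json, so the payload can be
	// embedded in structured logs as well.
	b, err := json.Marshal(t.Payload)
	if err != nil {
		return err
	}
	log.Printf("payload as JSON: %s", b)
	return nil
}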
// GetString returns a string value if a string type is associated with // GetString returns a string value if a string type is associated with
// the key, otherwise reports an error. // the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) { func (p Payload) GetString(key string) (string, error) {


@@ -6,6 +6,7 @@ package asynq
import ( import (
"encoding/json" "encoding/json"
"fmt"
"testing" "testing"
"time" "time"
@@ -645,3 +646,30 @@ func TestPayloadHas(t *testing.T) {
t.Errorf("Payload.Has(%q) = true, want false", "name") t.Errorf("Payload.Has(%q) = true, want false", "name")
} }
} }
func TestPayloadDebuggingStrings(t *testing.T) {
data := map[string]interface{}{
"foo": 123,
"bar": "hello",
"baz": false,
}
payload := Payload{data: data}
if payload.String() != fmt.Sprint(data) {
t.Errorf("Payload.String() = %q, want %q",
payload.String(), fmt.Sprint(data))
}
got, err := payload.MarshalJSON()
if err != nil {
t.Fatal(err)
}
want, err := json.Marshal(data)
if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, want); diff != "" {
t.Errorf("Payload.MarhsalJSON() = %s, want %s; (-want,+got)\n%s",
got, want, diff)
}
}


@@ -56,7 +56,7 @@ type processor struct {
// abort channel communicates to the in-flight worker goroutines to stop. // abort channel communicates to the in-flight worker goroutines to stop.
abort chan struct{} abort chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks. // cancelations is a set of cancel functions for all active tasks.
cancelations *base.Cancelations cancelations *base.Cancelations
starting chan<- *base.TaskMessage starting chan<- *base.TaskMessage
@@ -88,22 +88,23 @@ func newProcessor(params processorParams) *processor {
orderedQueues = sortByPriority(queues) orderedQueues = sortByPriority(queues)
} }
return &processor{ return &processor{
logger: params.logger, logger: params.logger,
broker: params.broker, broker: params.broker,
queueConfig: queues, queueConfig: queues,
orderedQueues: orderedQueues, orderedQueues: orderedQueues,
retryDelayFunc: params.retryDelayFunc, retryDelayFunc: params.retryDelayFunc,
syncRequestCh: params.syncCh, syncRequestCh: params.syncCh,
cancelations: params.cancelations, cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1), errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, params.concurrency), sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}), done: make(chan struct{}),
quit: make(chan struct{}), quit: make(chan struct{}),
abort: make(chan struct{}), abort: make(chan struct{}),
errHandler: params.errHandler, errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }), handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
starting: params.starting, shutdownTimeout: params.shutdownTimeout,
finished: params.finished, starting: params.starting,
finished: params.finished,
} }
} }
@@ -216,9 +217,9 @@ func (p *processor) exec() {
return return
case resErr := <-resCh: case resErr := <-resCh:
// Note: One of three things should happen. // Note: One of three things should happen.
// 1) Done -> Removes the message from InProgress // 1) Done -> Removes the message from Active
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry // 2) Retry -> Removes the message from Active & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead // 3) Kill -> Removes the message from Active & Adds the message to Dead
if resErr != nil { if resErr != nil {
p.retryOrKill(ctx, msg, resErr) p.retryOrKill(ctx, msg, resErr)
return return
@@ -241,7 +242,7 @@ func (p *processor) requeue(msg *base.TaskMessage) {
func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) { func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
err := p.broker.Done(msg) err := p.broker.Done(msg)
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.InProgressQueue, err) errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.ActiveKey(msg.Queue), err)
deadline, ok := ctx.Deadline() deadline, ok := ctx.Deadline()
if !ok { if !ok {
panic("asynq: internal error: missing deadline in context") panic("asynq: internal error: missing deadline in context")
@@ -274,7 +275,7 @@ func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
retryAt := time.Now().Add(d) retryAt := time.Now().Add(d)
err := p.broker.Retry(msg, retryAt, e.Error()) err := p.broker.Retry(msg, retryAt, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.RetryKey(msg.Queue))
deadline, ok := ctx.Deadline() deadline, ok := ctx.Deadline()
if !ok { if !ok {
panic("asynq: internal error: missing deadline in context") panic("asynq: internal error: missing deadline in context")
@@ -293,7 +294,7 @@ func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error) {
func (p *processor) kill(ctx context.Context, msg *base.TaskMessage, e error) { func (p *processor) kill(ctx context.Context, msg *base.TaskMessage, e error) {
err := p.broker.Kill(msg, e.Error()) err := p.broker.Kill(msg, e.Error())
if err != nil { if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue) errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.DeadKey(msg.Queue))
deadline, ok := ctx.Deadline() deadline, ok := ctx.Deadline()
if !ok { if !ok {
panic("asynq: internal error: missing deadline in context") panic("asynq: internal error: missing deadline in context")


@@ -42,14 +42,14 @@ func fakeSyncer(syncCh <-chan *syncRequest, done <-chan struct{}) {
} }
} }
func TestProcessorSuccess(t *testing.T) { func TestProcessorSuccessWithSingleQueue(t *testing.T) {
r := setup(t) r := setup(t)
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("send_email", nil) m1 := h.NewTaskMessage("task1", nil)
m2 := h.NewTaskMessage("gen_thumbnail", nil) m2 := h.NewTaskMessage("task2", nil)
m3 := h.NewTaskMessage("reindex", nil) m3 := h.NewTaskMessage("task3", nil)
m4 := h.NewTaskMessage("sync", nil) m4 := h.NewTaskMessage("task4", nil)
t1 := NewTask(m1.Type, m1.Payload) t1 := NewTask(m1.Type, m1.Payload)
t2 := NewTask(m2.Type, m2.Payload) t2 := NewTask(m2.Type, m2.Payload)
@@ -57,25 +57,25 @@ func TestProcessorSuccess(t *testing.T) {
t4 := NewTask(m4.Type, m4.Payload) t4 := NewTask(m4.Type, m4.Payload)
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state pending []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: []*base.TaskMessage{m1}, pending: []*base.TaskMessage{m1},
incoming: []*base.TaskMessage{m2, m3, m4}, incoming: []*base.TaskMessage{m2, m3, m4},
wantProcessed: []*Task{t1, t2, t3, t4}, wantProcessed: []*Task{t1, t2, t3, t4},
}, },
{ {
enqueued: []*base.TaskMessage{}, pending: []*base.TaskMessage{},
incoming: []*base.TaskMessage{m1}, incoming: []*base.TaskMessage{m1},
wantProcessed: []*Task{t1}, wantProcessed: []*Task{t1},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue. h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
// instantiate a new processor // instantiate a new processor
var mu sync.Mutex var mu sync.Mutex
@@ -117,9 +117,101 @@ func TestProcessorSuccess(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed. time.Sleep(2 * time.Second) // wait for two seconds to allow all pending tasks to be processed.
if l := r.LLen(base.InProgressQueue).Val(); l != 0 { if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l) t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
}
p.terminate()
mu.Lock()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
}
mu.Unlock()
}
}
func TestProcessorSuccessWithMultipleQueues(t *testing.T) {
var (
r = setup(t)
rdbClient = rdb.NewRDB(r)
m1 = h.NewTaskMessage("task1", nil)
m2 = h.NewTaskMessage("task2", nil)
m3 = h.NewTaskMessageWithQueue("task3", nil, "high")
m4 = h.NewTaskMessageWithQueue("task4", nil, "low")
t1 = NewTask(m1.Type, m1.Payload)
t2 = NewTask(m2.Type, m2.Payload)
t3 = NewTask(m3.Type, m3.Payload)
t4 = NewTask(m4.Type, m4.Payload)
)
tests := []struct {
pending map[string][]*base.TaskMessage
queues []string // list of queues to consume the tasks from
wantProcessed []*Task // tasks to be processed at the end
}{
{
pending: map[string][]*base.TaskMessage{
"default": {m1, m2},
"high": {m3},
"low": {m4},
},
queues: []string{"default", "high", "low"},
wantProcessed: []*Task{t1, t2, t3, t4},
},
}
for _, tc := range tests {
// Set up test case.
h.FlushDB(t, r)
h.SeedAllPendingQueues(t, r, tc.pending)
// Instantiate a new processor.
var mu sync.Mutex
var processed []*Task
handler := func(ctx context.Context, task *Task) error {
mu.Lock()
defer mu.Unlock()
processed = append(processed, task)
return nil
}
starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
retryDelayFunc: defaultDelayFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: map[string]int{
"default": 2,
"high": 3,
"low": 1,
},
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{})
// Wait for two seconds to allow all pending tasks to be processed.
time.Sleep(2 * time.Second)
// Make sure no messages are stuck in active list.
for _, qname := range tc.queues {
if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
}
} }
p.terminate() p.terminate()
@@ -140,18 +232,18 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
t1 := NewTask(m1.Type, m1.Payload) t1 := NewTask(m1.Type, m1.Payload)
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state pending []*base.TaskMessage // initial default queue state
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: []*base.TaskMessage{m1}, pending: []*base.TaskMessage{m1},
wantProcessed: []*Task{t1}, wantProcessed: []*Task{t1},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue. h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
var mu sync.Mutex var mu sync.Mutex
var processed []*Task var processed []*Task
@@ -190,9 +282,9 @@ func TestProcessTasksWithLargeNumberInPayload(t *testing.T) {
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
p.start(&sync.WaitGroup{}) p.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second) // wait for two seconds to allow all enqueued tasks to be processed. time.Sleep(2 * time.Second) // wait for two seconds to allow all pending tasks to be processed.
if l := r.LLen(base.InProgressQueue).Val(); l != 0 { if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l) t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
} }
p.terminate() p.terminate()
@@ -218,7 +310,7 @@ func TestProcessorRetry(t *testing.T) {
now := time.Now() now := time.Now()
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state pending []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
delay time.Duration // retry delay duration delay time.Duration // retry delay duration
handler Handler // task handler handler Handler // task handler
@@ -228,7 +320,7 @@ func TestProcessorRetry(t *testing.T) {
wantErrCount int // number of times error handler should be called wantErrCount int // number of times error handler should be called
}{ }{
{ {
enqueued: []*base.TaskMessage{m1, m2}, pending: []*base.TaskMessage{m1, m2},
incoming: []*base.TaskMessage{m3, m4}, incoming: []*base.TaskMessage{m3, m4},
delay: time.Minute, delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error { handler: HandlerFunc(func(ctx context.Context, task *Task) error {
@@ -246,8 +338,8 @@ func TestProcessorRetry(t *testing.T) {
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
h.SeedEnqueuedQueue(t, r, tc.enqueued) // initialize default queue. h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName) // initialize default queue.
// instantiate a new processor // instantiate a new processor
delayFunc := func(n int, e error, t *Task) time.Duration { delayFunc := func(n int, e error, t *Task) time.Duration {
@@ -294,19 +386,19 @@ func TestProcessorRetry(t *testing.T) {
time.Sleep(tc.wait) // FIXME: This makes test flaky. time.Sleep(tc.wait) // FIXME: This makes test flaky.
p.terminate() p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to a second difference in zset score cmpOpt := h.EquateInt64Approx(1) // allow up to a second difference in zset score
gotRetry := h.GetRetryEntries(t, r) gotRetry := h.GetRetryEntries(t, r, base.DefaultQueueName)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" { if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryKey(base.DefaultQueueName), diff)
} }
gotDead := h.GetDeadMessages(t, r) gotDead := h.GetDeadMessages(t, r, base.DefaultQueueName)
if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" { if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.DeadQueue, diff) t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.DeadKey(base.DefaultQueueName), diff)
} }
if l := r.LLen(base.InProgressQueue).Val(); l != 0 { if l := r.LLen(base.ActiveKey(base.DefaultQueueName)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l) t.Errorf("%q has %d tasks, want 0", base.ActiveKey(base.DefaultQueueName), l)
} }
if n != tc.wantErrCount { if n != tc.wantErrCount {
@@ -371,36 +463,42 @@ func TestProcessorQueues(t *testing.T) {
} }
func TestProcessorWithStrictPriority(t *testing.T) { func TestProcessorWithStrictPriority(t *testing.T) {
r := setup(t) var (
rdbClient := rdb.NewRDB(r) r = setup(t)
m1 := h.NewTaskMessage("send_email", nil) rdbClient = rdb.NewRDB(r)
m2 := h.NewTaskMessage("send_email", nil)
m3 := h.NewTaskMessage("send_email", nil)
m4 := h.NewTaskMessage("gen_thumbnail", nil)
m5 := h.NewTaskMessage("gen_thumbnail", nil)
m6 := h.NewTaskMessage("sync", nil)
m7 := h.NewTaskMessage("sync", nil)
t1 := NewTask(m1.Type, m1.Payload) m1 = h.NewTaskMessageWithQueue("task1", nil, "critical")
t2 := NewTask(m2.Type, m2.Payload) m2 = h.NewTaskMessageWithQueue("task2", nil, "critical")
t3 := NewTask(m3.Type, m3.Payload) m3 = h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := NewTask(m4.Type, m4.Payload) m4 = h.NewTaskMessageWithQueue("task4", nil, base.DefaultQueueName)
t5 := NewTask(m5.Type, m5.Payload) m5 = h.NewTaskMessageWithQueue("task5", nil, base.DefaultQueueName)
t6 := NewTask(m6.Type, m6.Payload) m6 = h.NewTaskMessageWithQueue("task6", nil, "low")
t7 := NewTask(m7.Type, m7.Payload) m7 = h.NewTaskMessageWithQueue("task7", nil, "low")
t1 = NewTask(m1.Type, m1.Payload)
t2 = NewTask(m2.Type, m2.Payload)
t3 = NewTask(m3.Type, m3.Payload)
t4 = NewTask(m4.Type, m4.Payload)
t5 = NewTask(m5.Type, m5.Payload)
t6 = NewTask(m6.Type, m6.Payload)
t7 = NewTask(m7.Type, m7.Payload)
)
defer r.Close()
tests := []struct { tests := []struct {
enqueued map[string][]*base.TaskMessage // initial queues state pending map[string][]*base.TaskMessage // initial queues state
queues []string // list of queues to consume tasks from
wait time.Duration // wait duration between starting and stopping processor for this test case wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: map[string][]*base.TaskMessage{ pending: map[string][]*base.TaskMessage{
base.DefaultQueueName: {m4, m5}, base.DefaultQueueName: {m4, m5},
"critical": {m1, m2, m3}, "critical": {m1, m2, m3},
"low": {m6, m7}, "low": {m6, m7},
}, },
queues: []string{base.DefaultQueueName, "critical", "low"},
wait: time.Second, wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4, t5, t6, t7}, wantProcessed: []*Task{t1, t2, t3, t4, t5, t6, t7},
}, },
@@ -408,8 +506,8 @@ func TestProcessorWithStrictPriority(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. h.FlushDB(t, r) // clean up db before each test case.
for qname, msgs := range tc.enqueued { for qname, msgs := range tc.pending {
h.SeedEnqueuedQueue(t, r, msgs, qname) h.SeedPendingQueue(t, r, msgs, qname)
} }
// instantiate a new processor // instantiate a new processor
@@ -422,20 +520,22 @@ func TestProcessorWithStrictPriority(t *testing.T) {
return nil return nil
} }
queueCfg := map[string]int{ queueCfg := map[string]int{
"critical": 3,
base.DefaultQueueName: 2, base.DefaultQueueName: 2,
"critical": 3,
"low": 1, "low": 1,
} }
starting := make(chan *base.TaskMessage) starting := make(chan *base.TaskMessage)
finished := make(chan *base.TaskMessage) finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{}) done := make(chan struct{})
defer func() { close(done) }() defer func() { close(done) }()
go fakeHeartbeater(starting, finished, done) go fakeHeartbeater(starting, finished, done)
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{ p := newProcessor(processorParams{
logger: testLogger, logger: testLogger,
broker: rdbClient, broker: rdbClient,
retryDelayFunc: defaultDelayFunc, retryDelayFunc: defaultDelayFunc,
syncCh: nil, syncCh: syncCh,
cancelations: base.NewCancelations(), cancelations: base.NewCancelations(),
concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time. concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time.
queues: queueCfg, queues: queueCfg,
@@ -449,15 +549,18 @@ func TestProcessorWithStrictPriority(t *testing.T) {
p.start(&sync.WaitGroup{}) p.start(&sync.WaitGroup{})
time.Sleep(tc.wait) time.Sleep(tc.wait)
// Make sure no tasks are stuck in active list.
for _, qname := range tc.queues {
if l := r.LLen(base.ActiveKey(qname)).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.ActiveKey(qname), l)
}
}
p.terminate() p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" { if diff := cmp.Diff(tc.wantProcessed, processed, cmp.AllowUnexported(Payload{})); diff != "" {
t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff) t.Errorf("mismatch found in processed tasks; (-want, +got)\n%s", diff)
} }
if l := r.LLen(base.InProgressQueue).Val(); l != 0 {
t.Errorf("%q has %d tasks, want 0", base.InProgressQueue, l)
}
} }
} }


@@ -21,6 +21,9 @@ type recoverer struct {
// channel to communicate back to the long running "recoverer" goroutine. // channel to communicate back to the long running "recoverer" goroutine.
done chan struct{} done chan struct{}
// list of queues to check for deadline.
queues []string
// poll interval. // poll interval.
interval time.Duration interval time.Duration
} }
@@ -28,6 +31,7 @@ type recoverer struct {
type recovererParams struct { type recovererParams struct {
logger *log.Logger logger *log.Logger
broker base.Broker broker base.Broker
queues []string
interval time.Duration interval time.Duration
retryDelayFunc retryDelayFunc retryDelayFunc retryDelayFunc
} }
@@ -37,6 +41,7 @@ func newRecoverer(params recovererParams) *recoverer {
logger: params.logger, logger: params.logger,
broker: params.broker, broker: params.broker,
done: make(chan struct{}), done: make(chan struct{}),
queues: params.queues,
interval: params.interval, interval: params.interval,
retryDelayFunc: params.retryDelayFunc, retryDelayFunc: params.retryDelayFunc,
} }
@@ -62,7 +67,7 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
case <-timer.C: case <-timer.C:
// Get all tasks which have expired 30 seconds ago or earlier. // Get all tasks which have expired 30 seconds ago or earlier.
deadline := time.Now().Add(-30 * time.Second) deadline := time.Now().Add(-30 * time.Second)
msgs, err := r.broker.ListDeadlineExceeded(deadline) msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
if err != nil { if err != nil {
r.logger.Warn("recoverer: could not list deadline exceeded tasks") r.logger.Warn("recoverer: could not list deadline exceeded tasks")
continue continue


@@ -17,12 +17,13 @@ import (
func TestRecoverer(t *testing.T) { func TestRecoverer(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
t1 := h.NewTaskMessage("task1", nil) t1 := h.NewTaskMessageWithQueue("task1", nil, "default")
t2 := h.NewTaskMessage("task2", nil) t2 := h.NewTaskMessageWithQueue("task2", nil, "default")
t3 := h.NewTaskMessageWithQueue("task3", nil, "critical") t3 := h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := h.NewTaskMessage("task4", nil) t4 := h.NewTaskMessageWithQueue("task4", nil, "default")
t4.Retried = t4.Retry // t4 has reached its max retry count t4.Retried = t4.Retry // t4 has reached its max retry count
now := time.Now() now := time.Now()
@@ -32,107 +33,205 @@ func TestRecoverer(t *testing.T) {
oneHourAgo := now.Add(-1 * time.Hour) oneHourAgo := now.Add(-1 * time.Hour)
tests := []struct { tests := []struct {
desc string desc string
inProgress []*base.TaskMessage inProgress map[string][]*base.TaskMessage
deadlines []base.Z deadlines map[string][]base.Z
retry []base.Z retry map[string][]base.Z
dead []base.Z dead map[string][]base.Z
wantInProgress []*base.TaskMessage wantActive map[string][]*base.TaskMessage
wantDeadlines []base.Z wantDeadlines map[string][]base.Z
wantRetry []*base.TaskMessage wantRetry map[string][]*base.TaskMessage
wantDead []*base.TaskMessage wantDead map[string][]*base.TaskMessage
}{ }{
{ {
desc: "with one task in-progress", desc: "with one active task",
inProgress: []*base.TaskMessage{t1}, inProgress: map[string][]*base.TaskMessage{
deadlines: []base.Z{ "default": {t1},
{Message: t1, Score: fiveMinutesAgo.Unix()},
}, },
retry: []base.Z{}, deadlines: map[string][]base.Z{
dead: []base.Z{}, "default": {{Message: t1, Score: fiveMinutesAgo.Unix()}},
wantInProgress: []*base.TaskMessage{}, },
wantDeadlines: []base.Z{}, retry: map[string][]base.Z{
wantRetry: []*base.TaskMessage{ "default": {},
h.TaskMessageAfterRetry(*t1, "deadline exceeded"), },
dead: map[string][]base.Z{
"default": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
},
wantDead: map[string][]*base.TaskMessage{
"default": {},
}, },
wantDead: []*base.TaskMessage{},
}, },
{ {
desc: "with a task with max-retry reached", desc: "with a task with max-retry reached",
inProgress: []*base.TaskMessage{t4}, inProgress: map[string][]*base.TaskMessage{
deadlines: []base.Z{ "default": {t4},
{Message: t4, Score: fiveMinutesAgo.Unix()}, "critical": {},
},
deadlines: map[string][]base.Z{
"default": {{Message: t4, Score: fiveMinutesAgo.Unix()}},
"critical": {},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
dead: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDead: map[string][]*base.TaskMessage{
"default": {h.TaskMessageWithError(*t4, "deadline exceeded")},
"critical": {},
}, },
retry: []base.Z{},
dead: []base.Z{},
wantInProgress: []*base.TaskMessage{},
wantDeadlines: []base.Z{},
wantRetry: []*base.TaskMessage{},
wantDead: []*base.TaskMessage{h.TaskMessageWithError(*t4, "deadline exceeded")},
}, },
{ {
desc: "with multiple tasks in-progress, and one expired", desc: "with multiple active tasks, and one expired",
inProgress: []*base.TaskMessage{t1, t2, t3}, inProgress: map[string][]*base.TaskMessage{
deadlines: []base.Z{ "default": {t1, t2},
{Message: t1, Score: oneHourAgo.Unix()}, "critical": {t3},
{Message: t2, Score: fiveMinutesFromNow.Unix()},
{Message: t3, Score: oneHourFromNow.Unix()},
}, },
retry: []base.Z{}, deadlines: map[string][]base.Z{
dead: []base.Z{}, "default": {
wantInProgress: []*base.TaskMessage{t2, t3}, {Message: t1, Score: oneHourAgo.Unix()},
wantDeadlines: []base.Z{ {Message: t2, Score: fiveMinutesFromNow.Unix()},
{Message: t2, Score: fiveMinutesFromNow.Unix()}, },
{Message: t3, Score: oneHourFromNow.Unix()}, "critical": {
{Message: t3, Score: oneHourFromNow.Unix()},
},
}, },
wantRetry: []*base.TaskMessage{ retry: map[string][]base.Z{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"), "default": {},
"critical": {},
},
dead: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {t3},
},
wantDeadlines: map[string][]base.Z{
"default": {{Message: t2, Score: fiveMinutesFromNow.Unix()}},
"critical": {{Message: t3, Score: oneHourFromNow.Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
"critical": {},
},
wantDead: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
}, },
wantDead: []*base.TaskMessage{},
}, },
{ {
desc: "with multiple expired tasks in-progress", desc: "with multiple expired active tasks",
inProgress: []*base.TaskMessage{t1, t2, t3}, inProgress: map[string][]*base.TaskMessage{
deadlines: []base.Z{ "default": {t1, t2},
{Message: t1, Score: oneHourAgo.Unix()}, "critical": {t3},
{Message: t2, Score: fiveMinutesAgo.Unix()},
{Message: t3, Score: oneHourFromNow.Unix()},
}, },
retry: []base.Z{}, deadlines: map[string][]base.Z{
dead: []base.Z{}, "default": {
wantInProgress: []*base.TaskMessage{t3}, {Message: t1, Score: oneHourAgo.Unix()},
wantDeadlines: []base.Z{ {Message: t2, Score: oneHourFromNow.Unix()},
{Message: t3, Score: oneHourFromNow.Unix()}, },
"critical": {
{Message: t3, Score: fiveMinutesAgo.Unix()},
},
}, },
wantRetry: []*base.TaskMessage{ retry: map[string][]base.Z{
h.TaskMessageAfterRetry(*t1, "deadline exceeded"), "default": {},
h.TaskMessageAfterRetry(*t2, "deadline exceeded"), "critical": {},
},
dead: map[string][]base.Z{
"default": {},
"cricial": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {{Message: t2, Score: oneHourFromNow.Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {h.TaskMessageAfterRetry(*t1, "deadline exceeded")},
"critical": {h.TaskMessageAfterRetry(*t3, "deadline exceeded")},
},
wantDead: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
}, },
wantDead: []*base.TaskMessage{},
}, },
{ {
desc: "with empty in-progress queue", desc: "with empty active queue",
inProgress: []*base.TaskMessage{}, inProgress: map[string][]*base.TaskMessage{
deadlines: []base.Z{}, "default": {},
retry: []base.Z{}, "critical": {},
dead: []base.Z{}, },
wantInProgress: []*base.TaskMessage{}, deadlines: map[string][]base.Z{
wantDeadlines: []base.Z{}, "default": {},
wantRetry: []*base.TaskMessage{}, "critical": {},
wantDead: []*base.TaskMessage{}, },
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
dead: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDeadlines: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantDead: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
}, },
} }
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
h.SeedInProgressQueue(t, r, tc.inProgress) h.SeedAllActiveQueues(t, r, tc.inProgress)
h.SeedDeadlines(t, r, tc.deadlines) h.SeedAllDeadlines(t, r, tc.deadlines)
h.SeedRetryQueue(t, r, tc.retry) h.SeedAllRetryQueues(t, r, tc.retry)
h.SeedDeadQueue(t, r, tc.dead) h.SeedAllDeadQueues(t, r, tc.dead)
recoverer := newRecoverer(recovererParams{ recoverer := newRecoverer(recovererParams{
logger: testLogger, logger: testLogger,
broker: rdbClient, broker: rdbClient,
queues: []string{"default", "critical"},
interval: 1 * time.Second, interval: 1 * time.Second,
retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second }, retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
}) })
@@ -142,21 +241,29 @@ func TestRecoverer(t *testing.T) {
time.Sleep(2 * time.Second) time.Sleep(2 * time.Second)
recoverer.terminate() recoverer.terminate()
gotInProgress := h.GetInProgressMessages(t, r) for qname, want := range tc.wantActive {
if diff := cmp.Diff(tc.wantInProgress, gotInProgress, h.SortMsgOpt); diff != "" { gotActive := h.GetActiveMessages(t, r, qname)
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.InProgressQueue, diff) if diff := cmp.Diff(want, gotActive, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.ActiveKey(qname), diff)
}
} }
gotDeadlines := h.GetDeadlinesEntries(t, r) for qname, want := range tc.wantDeadlines {
if diff := cmp.Diff(tc.wantDeadlines, gotDeadlines, h.SortZSetEntryOpt); diff != "" { gotDeadlines := h.GetDeadlinesEntries(t, r, qname)
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.KeyDeadlines, diff) if diff := cmp.Diff(want, gotDeadlines, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.DeadlinesKey(qname), diff)
}
} }
gotRetry := h.GetRetryMessages(t, r) for qname, want := range tc.wantRetry {
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" { gotRetry := h.GetRetryMessages(t, r, qname)
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryQueue, diff) if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
}
} }
gotDead := h.GetDeadMessages(t, r) for qname, want := range tc.wantDead {
if diff := cmp.Diff(tc.wantDead, gotDead, h.SortMsgOpt); diff != "" { gotDead := h.GetDeadMessages(t, r, qname)
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.DeadQueue, diff) if diff := cmp.Diff(want, gotDead, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.DeadKey(qname), diff)
}
} }
} }
} }


@@ -5,64 +5,235 @@
package asynq package asynq
import ( import (
"fmt"
"os"
"sync" "sync"
"time" "time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"github.com/robfig/cron/v3"
) )
type scheduler struct { // A Scheduler kicks off tasks at regular intervals based on the user defined schedule.
logger *log.Logger type Scheduler struct {
broker base.Broker id string
status *base.ServerStatus
// channel to communicate back to the long running "scheduler" goroutine. logger *log.Logger
done chan struct{} client *Client
rdb *rdb.RDB
// poll interval on average cron *cron.Cron
avgInterval time.Duration location *time.Location
done chan struct{}
wg sync.WaitGroup
errHandler func(task *Task, opts []Option, err error)
} }
type schedulerParams struct { // NewScheduler returns a new Scheduler instance given the redis connection option.
logger *log.Logger // The parameter opts is optional; defaults will be used if opts is set to nil.
broker base.Broker func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
interval time.Duration if opts == nil {
} opts = &SchedulerOpts{}
}
func newScheduler(params schedulerParams) *scheduler { logger := log.NewLogger(opts.Logger)
return &scheduler{ loglevel := opts.LogLevel
logger: params.logger, if loglevel == level_unspecified {
broker: params.broker, loglevel = InfoLevel
done: make(chan struct{}), }
avgInterval: params.interval, logger.SetLevel(toInternalLogLevel(loglevel))
loc := opts.Location
if loc == nil {
loc = time.UTC
}
return &Scheduler{
id: generateSchedulerID(),
status: base.NewServerStatus(base.StatusIdle),
logger: logger,
client: NewClient(r),
rdb: rdb.NewRDB(createRedisClient(r)),
cron: cron.New(cron.WithLocation(loc)),
location: loc,
done: make(chan struct{}),
errHandler: opts.EnqueueErrorHandler,
} }
} }
func (s *scheduler) terminate() { func generateSchedulerID() string {
s.logger.Debug("Scheduler shutting down...") host, err := os.Hostname()
// Signal the scheduler goroutine to stop polling. if err != nil {
s.done <- struct{}{} host = "unknown-host"
}
return fmt.Sprintf("%s:%d:%v", host, os.Getpid(), uuid.New())
} }
// start starts the "scheduler" goroutine. // SchedulerOpts specifies scheduler options.
func (s *scheduler) start(wg *sync.WaitGroup) { type SchedulerOpts struct {
wg.Add(1) // Logger specifies the logger used by the scheduler instance.
go func() { //
defer wg.Done() // If unset, the default logger is used.
for { Logger Logger
select {
case <-s.done: // LogLevel specifies the minimum log level to enable.
s.logger.Debug("Scheduler done") //
return // If unset, InfoLevel is used by default.
case <-time.After(s.avgInterval): LogLevel LogLevel
s.exec()
} // Location specifies the time zone location.
//
// If unset, the UTC time zone (time.UTC) is used.
Location *time.Location
// EnqueueErrorHandler gets called when scheduler cannot enqueue a registered task
// due to an error.
EnqueueErrorHandler func(task *Task, opts []Option, err error)
}
// enqueueJob encapsulates the job of enqueuing a task and recording the event.
type enqueueJob struct {
id uuid.UUID
cronspec string
task *Task
opts []Option
location *time.Location
logger *log.Logger
client *Client
rdb *rdb.RDB
errHandler func(task *Task, opts []Option, err error)
}
func (j *enqueueJob) Run() {
res, err := j.client.Enqueue(j.task, j.opts...)
if err != nil {
j.logger.Errorf("scheduler could not enqueue a task %+v: %v", j.task, err)
if j.errHandler != nil {
j.errHandler(j.task, j.opts, err)
} }
}() return
} }
j.logger.Infof("scheduler enqueued a task: %+v", res)
func (s *scheduler) exec() { event := &base.SchedulerEnqueueEvent{
if err := s.broker.CheckAndEnqueue(); err != nil { TaskID: res.ID,
s.logger.Errorf("Could not enqueue scheduled tasks: %v", err) EnqueuedAt: res.EnqueuedAt.In(j.location),
}
err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
if err != nil {
j.logger.Errorf("scheduler could not record enqueue event of enqueued task %+v: %v", j.task, err)
} }
} }
// Register registers a task to be enqueued on the given schedule specified by the cronspec.
// It returns an ID of the newly registered entry.
func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entryID string, err error) {
job := &enqueueJob{
id: uuid.New(),
cronspec: cronspec,
task: task,
opts: opts,
location: s.location,
client: s.client,
rdb: s.rdb,
logger: s.logger,
errHandler: s.errHandler,
}
if _, err = s.cron.AddJob(cronspec, job); err != nil {
return "", err
}
return job.id.String(), nil
}
// Run starts the scheduler until an os signal to exit the program is received.
// It returns an error if scheduler is already running or has been stopped.
func (s *Scheduler) Run() error {
if err := s.Start(); err != nil {
return err
}
s.waitForSignals()
return s.Stop()
}
// Start starts the scheduler.
// It returns an error if the scheduler is already running or has been stopped.
func (s *Scheduler) Start() error {
switch s.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the scheduler is already running")
case base.StatusStopped:
return fmt.Errorf("asynq: the scheduler has already been stopped")
}
s.logger.Info("Scheduler starting")
s.logger.Infof("Scheduler timezone is set to %v", s.location)
s.cron.Start()
s.wg.Add(1)
go s.runHeartbeater()
s.status.Set(base.StatusRunning)
return nil
}
// Stop stops the scheduler.
// It returns an error if the scheduler is not currently running.
func (s *Scheduler) Stop() error {
if s.status.Get() != base.StatusRunning {
return fmt.Errorf("asynq: the scheduler is not running")
}
s.logger.Info("Scheduler shutting down")
close(s.done) // signal heartbeater to stop
ctx := s.cron.Stop()
<-ctx.Done()
s.wg.Wait()
s.client.Close()
s.rdb.Close()
s.status.Set(base.StatusStopped)
s.logger.Info("Scheduler stopped")
return nil
}
func (s *Scheduler) runHeartbeater() {
defer s.wg.Done()
ticker := time.NewTicker(5 * time.Second)
for {
select {
case <-s.done:
s.logger.Debugf("Scheduler heatbeater shutting down")
s.rdb.ClearSchedulerEntries(s.id)
return
case <-ticker.C:
s.beat()
}
}
}
// beat writes a snapshot of entries to redis.
func (s *Scheduler) beat() {
var entries []*base.SchedulerEntry
for _, entry := range s.cron.Entries() {
job := entry.Job.(*enqueueJob)
e := &base.SchedulerEntry{
ID: job.id.String(),
Spec: job.cronspec,
Type: job.task.Type,
Payload: job.task.Payload.data,
Opts: stringifyOptions(job.opts),
Next: entry.Next,
Prev: entry.Prev,
}
entries = append(entries, e)
}
s.logger.Debugf("Writing entries %v", entries)
if err := s.rdb.WriteSchedulerEntries(s.id, entries, 5*time.Second); err != nil {
s.logger.Warnf("Scheduler could not write heartbeat data: %v", err)
}
}
func stringifyOptions(opts []Option) []string {
var res []string
for _, opt := range opts {
res = append(res, opt.String())
}
return res
}
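For reference, here is a minimal sketch of how the new Scheduler API above might be wired up in an application. The redis address, cron spec, task type, queue name, and error-handler body are illustrative assumptions, not part of this change:

```
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // assumed local redis instance
		&asynq.SchedulerOpts{
			Location: time.UTC, // cron entries are interpreted in this time zone
			EnqueueErrorHandler: func(task *asynq.Task, opts []asynq.Option, err error) {
				log.Printf("could not enqueue task %q: %v", task.Type, err)
			},
		},
	)

	// Register a task to be enqueued every night at 2am on the "low" queue.
	entryID, err := scheduler.Register("0 2 * * *", asynq.NewTask("cleanup", nil), asynq.Queue("low"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("registered scheduler entry: %s", entryID)

	// Run starts the scheduler and blocks until SIGTERM/SIGINT, then stops it.
	if err := scheduler.Run(); err != nil {
		log.Fatal(err)
	}
}
```

Because the scheduler's heartbeater writes its entries to redis every few seconds, registered entries show up in the `asynq cron ls` output added later in this changeset.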


@@ -10,88 +10,109 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest" "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
) )
func TestScheduler(t *testing.T) { func TestScheduler(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(schedulerParams{
logger: testLogger,
broker: rdbClient,
interval: pollInterval,
})
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)
t4 := h.NewTaskMessage("sync", nil)
now := time.Now()
tests := []struct { tests := []struct {
initScheduled []base.Z // scheduled queue initial state cronspec string
initRetry []base.Z // retry queue initial state task *Task
initQueue []*base.TaskMessage // default queue initial state opts []Option
wait time.Duration // wait duration before checking for final state wait time.Duration
wantScheduled []*base.TaskMessage // schedule queue final state queue string
wantRetry []*base.TaskMessage // retry queue final state want []*base.TaskMessage
wantQueue []*base.TaskMessage // default queue final state
}{ }{
{ {
initScheduled: []base.Z{ cronspec: "@every 3s",
{Message: t1, Score: now.Add(time.Hour).Unix()}, task: NewTask("task1", nil),
{Message: t2, Score: now.Add(-2 * time.Second).Unix()}, opts: []Option{MaxRetry(10)},
wait: 10 * time.Second,
queue: "default",
want: []*base.TaskMessage{
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
}, },
initRetry: []base.Z{
{Message: t3, Score: time.Now().Add(-500 * time.Millisecond).Unix()},
},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
wantScheduled: []*base.TaskMessage{t1},
wantRetry: []*base.TaskMessage{},
wantQueue: []*base.TaskMessage{t2, t3, t4},
},
{
initScheduled: []base.Z{
{Message: t1, Score: now.Unix()},
{Message: t2, Score: now.Add(-2 * time.Second).Unix()},
{Message: t3, Score: now.Add(-500 * time.Millisecond).Unix()},
},
initRetry: []base.Z{},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
wantScheduled: []*base.TaskMessage{},
wantRetry: []*base.TaskMessage{},
wantQueue: []*base.TaskMessage{t1, t2, t3, t4},
}, },
} }
r := setup(t)
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case. scheduler := NewScheduler(getRedisConnOpt(t), nil)
h.SeedScheduledQueue(t, r, tc.initScheduled) // initialize scheduled queue if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
h.SeedRetryQueue(t, r, tc.initRetry) // initialize retry queue t.Fatal(err)
h.SeedEnqueuedQueue(t, r, tc.initQueue) // initialize default queue }
var wg sync.WaitGroup if err := scheduler.Start(); err != nil {
s.start(&wg) t.Fatal(err)
}
time.Sleep(tc.wait) time.Sleep(tc.wait)
s.terminate() if err := scheduler.Stop(); err != nil {
t.Fatal(err)
gotScheduled := h.GetScheduledMessages(t, r)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.ScheduledQueue, diff)
} }
gotRetry := h.GetRetryMessages(t, r) got := asynqtest.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" { if diff := cmp.Diff(tc.want, got, asynqtest.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
}
gotEnqueued := h.GetEnqueuedMessages(t, r)
if diff := cmp.Diff(tc.wantQueue, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.DefaultQueue, diff)
} }
} }
} }
func TestSchedulerWhenRedisDown(t *testing.T) {
var (
mu sync.Mutex
counter int
)
errorHandler := func(task *Task, opts []Option, err error) {
mu.Lock()
counter++
mu.Unlock()
}
// Connect to non-existent redis instance to simulate a redis server being down.
scheduler := NewScheduler(
RedisClientOpt{Addr: ":9876"},
&SchedulerOpts{EnqueueErrorHandler: errorHandler},
)
task := NewTask("test", nil)
if _, err := scheduler.Register("@every 3s", task); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
// Scheduler should attempt to enqueue the task three times (every 3s).
time.Sleep(10 * time.Second)
if err := scheduler.Stop(); err != nil {
t.Fatal(err)
}
mu.Lock()
if counter != 3 {
t.Errorf("EnqueueErrorHandler was called %d times, want 3", counter)
}
mu.Unlock()
}


@@ -41,7 +41,7 @@ type Server struct {
// wait group to wait for all goroutines to finish. // wait group to wait for all goroutines to finish.
wg sync.WaitGroup wg sync.WaitGroup
scheduler *scheduler forwarder *forwarder
processor *processor processor *processor
syncer *syncer syncer *syncer
heartbeater *heartbeater heartbeater *heartbeater
@@ -75,11 +75,13 @@ type Config struct {
// Priority is treated as follows to avoid starving low priority queues. // Priority is treated as follows to avoid starving low priority queues.
// //
// Example: // Example:
// Queues: map[string]int{ //
// "critical": 6, // Queues: map[string]int{
// "default": 3, // "critical": 6,
// "low": 1, // "default": 3,
// } // "low": 1,
// }
//
// With the above config and given that all queues are not empty, the tasks // With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of // in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively. // the time respectively.
@@ -99,14 +101,17 @@ type Config struct {
// HandleError is invoked only if the task handler returns a non-nil error. // HandleError is invoked only if the task handler returns a non-nil error.
// //
// Example: // Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
// //
// ErrorHandler: asynq.ErrorHandlerFunc(reportError) // func reportError(ctx context, task *asynq.Task, err error) {
// retried, _ := asynq.GetRetryCount(ctx)
// maxRetry, _ := asynq.GetMaxRetry(ctx)
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance. // Logger specifies the logger used by the server instance.
@@ -286,6 +291,10 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
if len(queues) == 0 { if len(queues) == 0 {
queues = defaultQueueConfig queues = defaultQueueConfig
} }
var qnames []string
for q := range queues { for q := range queues {
qnames = append(qnames, q)
}
shutdownTimeout := cfg.ShutdownTimeout shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 { if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout shutdownTimeout = defaultShutdownTimeout
@@ -324,9 +333,10 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
starting: starting, starting: starting,
finished: finished, finished: finished,
}) })
scheduler := newScheduler(schedulerParams{ forwarder := newForwarder(forwarderParams{
logger: logger, logger: logger,
broker: rdb, broker: rdb,
queues: qnames,
interval: 5 * time.Second, interval: 5 * time.Second,
}) })
subscriber := newSubscriber(subscriberParams{ subscriber := newSubscriber(subscriberParams{
@@ -352,6 +362,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
logger: logger, logger: logger,
broker: rdb, broker: rdb,
retryDelayFunc: delayFunc, retryDelayFunc: delayFunc,
queues: qnames,
interval: 1 * time.Minute, interval: 1 * time.Minute,
}) })
healthchecker := newHealthChecker(healthcheckerParams{ healthchecker := newHealthChecker(healthcheckerParams{
@@ -364,7 +375,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
logger: logger, logger: logger,
broker: rdb, broker: rdb,
status: status, status: status,
scheduler: scheduler, forwarder: forwarder,
processor: processor, processor: processor,
syncer: syncer, syncer: syncer,
heartbeater: heartbeater, heartbeater: heartbeater,
@@ -442,7 +453,7 @@ func (srv *Server) Start(handler Handler) error {
srv.subscriber.start(&srv.wg) srv.subscriber.start(&srv.wg)
srv.syncer.start(&srv.wg) srv.syncer.start(&srv.wg)
srv.recoverer.start(&srv.wg) srv.recoverer.start(&srv.wg)
srv.scheduler.start(&srv.wg) srv.forwarder.start(&srv.wg)
srv.processor.start(&srv.wg) srv.processor.start(&srv.wg)
return nil return nil
} }
@@ -463,7 +474,7 @@ func (srv *Server) Stop() {
// Sender goroutines should be terminated before the receiver goroutines. // Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh) // processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels) // processor -> heartbeater (via starting, finished channels)
srv.scheduler.terminate() srv.forwarder.terminate()
srv.processor.terminate() srv.processor.terminate()
srv.recoverer.terminate() srv.recoverer.terminate()
srv.syncer.terminate() srv.syncer.terminate()

View File

@@ -21,12 +21,10 @@ func TestServer(t *testing.T) {
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper") ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt) defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{ redisConnOpt := getRedisConnOpt(t)
Addr: "localhost:6379", c := NewClient(redisConnOpt)
DB: 15, defer c.Close()
} srv := NewServer(redisConnOpt, Config{
c := NewClient(r)
srv := NewServer(r, Config{
Concurrency: 10, Concurrency: 10,
LogLevel: testLogLevel, LogLevel: testLogLevel,
}) })
@@ -46,7 +44,7 @@ func TestServer(t *testing.T) {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
_, err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456})) _, err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 456}), ProcessIn(1*time.Hour))
if err != nil { if err != nil {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
@@ -129,7 +127,7 @@ func TestServerWithRedisDown(t *testing.T) {
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv.broker = testBroker srv.broker = testBroker
srv.scheduler.broker = testBroker srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker srv.processor.broker = testBroker
srv.subscriber.broker = testBroker srv.subscriber.broker = testBroker
@@ -159,14 +157,15 @@ func TestServerWithFlakyBroker(t *testing.T) {
}() }()
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: redisAddr, DB: redisDB}, Config{LogLevel: testLogLevel}) redisConnOpt := getRedisConnOpt(t)
srv := NewServer(redisConnOpt, Config{LogLevel: testLogLevel})
srv.broker = testBroker srv.broker = testBroker
srv.scheduler.broker = testBroker srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker srv.processor.broker = testBroker
srv.subscriber.broker = testBroker srv.subscriber.broker = testBroker
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB}) c := NewClient(redisConnOpt)
h := func(ctx context.Context, task *Task) error { h := func(ctx context.Context, task *Task) error {
// force task retry. // force task retry.
@@ -191,7 +190,7 @@ func TestServerWithFlakyBroker(t *testing.T) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
_, err = c.EnqueueIn(time.Duration(i)*time.Second, NewTask("scheduled", nil)) _, err = c.Enqueue(NewTask("scheduled", nil), ProcessIn(time.Duration(i)*time.Second))
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }

View File

@@ -28,3 +28,10 @@ func (srv *Server) waitForSignals() {
break break
} }
} }
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs
}


@@ -20,3 +20,10 @@ func (srv *Server) waitForSignals() {
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT) signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs <-sigs
} }
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}


@@ -20,7 +20,7 @@ type subscriber struct {
// channel to communicate back to the long running "subscriber" goroutine. // channel to communicate back to the long running "subscriber" goroutine.
done chan struct{} done chan struct{}
// cancelations hold cancel functions for all in-progress tasks. // cancelations hold cancel functions for all active tasks.
cancelations *base.Cancelations cancelations *base.Cancelations
// time to wait before retrying to connect to redis. // time to wait before retrying to connect to redis.


@@ -16,6 +16,7 @@ import (
func TestSubscriber(t *testing.T) { func TestSubscriber(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
tests := []struct { tests := []struct {
@@ -76,6 +77,7 @@ func TestSubscriberWithRedisDown(t *testing.T) {
} }
}() }()
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
cancelations := base.NewCancelations() cancelations := base.NewCancelations()


@@ -22,8 +22,9 @@ func TestSyncer(t *testing.T) {
h.NewTaskMessage("gen_thumbnail", nil), h.NewTaskMessage("gen_thumbnail", nil),
} }
r := setup(t) r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
h.SeedInProgressQueue(t, r, inProgress) h.SeedActiveQueue(t, r, inProgress, base.DefaultQueueName)
const interval = time.Second const interval = time.Second
syncRequestCh := make(chan *syncRequest) syncRequestCh := make(chan *syncRequest)
@@ -48,9 +49,9 @@ func TestSyncer(t *testing.T) {
time.Sleep(2 * interval) // ensure that syncer runs at least once time.Sleep(2 * interval) // ensure that syncer runs at least once
gotInProgress := h.GetInProgressMessages(t, r) gotActive := h.GetActiveMessages(t, r, base.DefaultQueueName)
if l := len(gotInProgress); l != 0 { if l := len(gotActive); l != 0 {
t.Errorf("%q has length %d; want 0", base.InProgressQueue, l) t.Errorf("%q has length %d; want 0", base.ActiveKey(base.DefaultQueueName), l)
} }
} }


@@ -1,20 +1,11 @@
# Asynq CLI # Asynq CLI
Asynq CLI is a command line tool to monitor the tasks managed by the `asynq` package. Asynq CLI is a command line tool to monitor the queues and tasks managed by the `asynq` package.
## Table of Contents ## Table of Contents
- [Installation](#installation) - [Installation](#installation)
- [Quick Start](#quick-start) - [Usage](#usage)
- [Stats](#stats)
- [History](#history)
- [Servers](#servers)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Pause](#pause)
- [Config File](#config-file) - [Config File](#config-file)
## Installation ## Installation
@@ -25,144 +16,41 @@ In order to use the tool, compile it using the following command:
This will create the asynq executable under your `$GOPATH/bin` directory. This will create the asynq executable under your `$GOPATH/bin` directory.
## Quickstart ## Usage
The tool has a few commands to inspect the state of tasks and queues. ### Commands
Run `asynq help` to see all the available commands. To view details on any command, use `asynq help <command> <subcommand>`.
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete kill run delete-all kill-all run-all]`
- `asynq server [ls]`
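Example (based on the help text above; the exact output depends on your CLI version):

    asynq help queue ls
    asynq help task cancel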
### Global flags
Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application. Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
To connect to a redis cluster, pass `--cluster` and `--cluster_addrs` flags.
By default, CLI will try to connect to a redis server running at `localhost:6379`. By default, CLI will try to connect to a redis server running at `localhost:6379`.
### Stats ```
--config string config file to set flag default values (default is $HOME/.asynq.yaml)
-n, --db int redis database number (default is 0)
-h, --help help for asynq
-p, --password string password to use when connecting to redis server
-u, --uri string redis server URI (default "127.0.0.1:6379")
Stats command gives the overview of the current state of tasks and queues. You can run it in conjunction with `watch` command to repeatedly run `stats`. --cluster connect to redis cluster
--cluster_addrs string list of comma-separated redis server addresses
Example: ```
watch -n 3 asynq stats
This will run `asynq stats` command every 3 seconds.
![Gif](/docs/assets/asynq_stats.gif)
### History
History command shows the number of processed and failed tasks from the last x days.
By default, it shows the stats from the last 10 days. Use `--days` to specify the number of days.
Example:
asynq history --days=30
![Gif](/docs/assets/asynq_history.gif)
### Servers
Servers command shows the list of running worker servers pulling tasks from the given redis instance.
Example:
asynq servers
### List
List command shows all tasks in the specified state in a table format
Example:
asynq ls retry
asynq ls scheduled
asynq ls dead
asynq ls enqueued:default
asynq ls inprogress
### Enqueue
There are two commands to enqueue tasks.
Command `enq` takes a task ID and moves the task to **Enqueued** state. You can obtain the task ID by running `ls` command.
Example:
asynq enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example:
asynq enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state.
### Delete
There are two commands for task deletion.
Command `del` takes a task ID and deletes the task. You can obtain the task ID by running `ls` command.
Example:
asynq del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynq delall retry
Running the above command will delete all **Retry** tasks.
### Kill
There are two commands to kill (i.e. move to dead state) tasks.
Command `kill` takes a task ID and kills the task. You can obtain the task ID by running `ls` command.
Example:
asynq kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynq killall retry
Running the above command will move all **Retry** tasks to **Dead** state.
### Cancel
Command `cancel` takes a task ID and sends a cancelation signal to the goroutine processing the specified task.
You can obtain the task ID by running `ls` command.
The task should be in "in-progress" state.
Handler implementation needs to be context aware in order to actually stop processing.
Example:
asynq cancel bnogo8gt6toe23vhef0g
### Pause
Command `pause` pauses the specified queue. Tasks in paused queues are not processed by servers.
To resume processing from the queue, use `unpause` command.
To see which queues are currently paused, use `stats` command.
Example:
asynq pause email
asynq unpause email
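For example, with the new cluster flags described above, the `stats` command could be pointed at a redis cluster like this (the addresses are placeholders):

    asynq --cluster --cluster_addrs="127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002" stats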
## Config File ## Config File
You can use a config file to set default values for the flags. You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server.
By default, `asynq` will try to read config file located in By default, `asynq` will try to read config file located in
`$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag. `$HOME/.asynq.(yml|json)`. You can specify the file location via `--config` flag.
Config file example: Config file example:
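A minimal example of such a config file, using keys that mirror the flags listed above (values are placeholders):

```
uri: 127.0.0.1:6379
db: 2
password: mypassword
```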


@@ -1,53 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// cancelCmd represents the cancel command
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task should be obtained by running "asynq ls" command.
Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing.
Example: asynq cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}
func init() {
rootCmd.AddCommand(cancelCmd)
}
func cancel(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
err := r.PublishCancelation(args[0])
if err != nil {
fmt.Printf("could not send cancelation signal: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully sent cancelation siganl for task %s\n", args[0])
}

tools/asynq/cmd/cron.go

@@ -0,0 +1,122 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"time"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(cronCmd)
cronCmd.AddCommand(cronListCmd)
cronCmd.AddCommand(cronHistoryCmd)
}
var cronCmd = &cobra.Command{
Use: "cron",
Short: "Manage cron",
}
var cronListCmd = &cobra.Command{
Use: "ls",
Short: "List cron entries",
Run: cronList,
}
var cronHistoryCmd = &cobra.Command{
Use: "history",
Short: "Show history of each cron tasks",
Args: cobra.MinimumNArgs(1),
Run: cronHistory,
}
func cronList(cmd *cobra.Command, args []string) {
r := createRDB()
entries, err := r.ListSchedulerEntries()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(entries) == 0 {
fmt.Println("No scheduler entries")
return
}
// Sort entries by spec.
sort.Slice(entries, func(i, j int) bool {
x, y := entries[i], entries[j]
return x.Spec < y.Spec
})
cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range entries {
fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Type, e.Payload, e.Opts,
nextEnqueue(e.Next), prevEnqueue(e.Prev))
}
}
printTable(cols, printRows)
}
// Returns a string describing when the next enqueue will happen.
func nextEnqueue(nextEnqueueAt time.Time) string {
d := nextEnqueueAt.Sub(time.Now()).Round(time.Second)
if d < 0 {
return "Now"
}
return fmt.Sprintf("In %v", d)
}
// Returns a string describing when the previous enqueue was.
func prevEnqueue(prevEnqueuedAt time.Time) string {
if prevEnqueuedAt.IsZero() {
return "N/A"
}
return fmt.Sprintf("%v ago", time.Since(prevEnqueuedAt).Round(time.Second))
}
// TODO: Paginate the result set.
func cronHistory(cmd *cobra.Command, args []string) {
r := createRDB()
for i, entryID := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Println()
fmt.Printf("Entry: %s\n\n", entryID)
events, err := r.ListSchedulerEnqueueEvents(entryID)
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
if len(events) == 0 {
fmt.Printf("No scheduler enqueue events found for entry: %s\n", entryID)
continue
}
// Sort entries by enqueuedAt timestamp.
sort.Slice(events, func(i, j int) bool {
x, y := events[i], events[j]
return x.EnqueuedAt.Unix() > y.EnqueuedAt.Unix()
})
cols := []string{"TaskID", "EnqueuedAt"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range events {
fmt.Fprintf(w, tmpl, e.TaskID, e.EnqueuedAt)
}
}
printTable(cols, printRows)
}
}
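The entries listed by "asynq cron ls" come from a Scheduler running in the application. A minimal sketch of registering one periodic task, assuming the v0.13 Scheduler API (NewScheduler, Register, Run); the cron spec and task type below are made up for illustration:

package example

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    // nil opts means scheduler defaults (assumption for this sketch).
    scheduler := asynq.NewScheduler(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, nil)
    // Enqueue a "report:generate" task every five minutes.
    if _, err := scheduler.Register("*/5 * * * *", asynq.NewTask("report:generate", nil)); err != nil {
        log.Fatal(err)
    }
    if err := scheduler.Run(); err != nil { // blocks until shutdown
        log.Fatal(err)
    }
}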


@@ -1,57 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// delCmd represents the del command
var delCmd = &cobra.Command{
Use: "del [task key]",
Short: "Deletes a task given an identifier",
Long: `Del (asynq del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynq ls" command.
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: del,
}
func init() {
rootCmd.AddCommand(delCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func del(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
err := i.DeleteTaskByKey(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully deleted %v\n", args[0])
}


@@ -1,72 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var delallValidArgs = []string{"scheduled", "retry", "dead"}
// delallCmd represents the delall command
var delallCmd = &cobra.Command{
Use: "delall [state]",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynq delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
Example: asynq delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: delall,
}
func init() {
rootCmd.AddCommand(delallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func delall(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
var (
n int
err error
)
switch args[0] {
case "scheduled":
n, err = i.DeleteAllScheduledTasks()
case "retry":
n, err = i.DeleteAllRetryTasks()
case "dead":
n, err = i.DeleteAllDeadTasks()
default:
fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Deleted all %d tasks in %q state\n", n, args[0])
}


@@ -1,60 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// enqCmd represents the enq command
var enqCmd = &cobra.Command{
Use: "enq [task key]",
Short: "Enqueues a task given an identifier",
Long: `Enq (asynq enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynq ls" command.
The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor.
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: enq,
}
func init() {
rootCmd.AddCommand(enqCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enq(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
err := i.EnqueueTaskByKey(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully enqueued %v\n", args[0])
}


@@ -1,75 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var enqallValidArgs = []string{"scheduled", "retry", "dead"}
// enqallCmd represents the enqall command
var enqallCmd = &cobra.Command{
Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as it
gets dequeued by a processor.
Example: asynq enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: enqall,
}
func init() {
rootCmd.AddCommand(enqallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enqall(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
var (
n int
err error
)
switch args[0] {
case "scheduled":
n, err = i.EnqueueAllScheduledTasks()
case "retry":
n, err = i.EnqueueAllRetryTasks()
case "dead":
n, err = i.EnqueueAllDeadTasks()
default:
fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Enqueued %d tasks in %q state\n", n, args[0])
}


@@ -1,69 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"strings"
"text/tabwriter"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var days int
// historyCmd represents the history command
var historyCmd = &cobra.Command{
Use: "history",
Short: "Shows historical aggregate data",
Long: `History (asynq history) will show the number of processed and failed tasks
from the last x days.
By default, it will show the data from the last 10 days.
Example: asynq history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs,
Run: history,
}
func init() {
rootCmd.AddCommand(historyCmd)
historyCmd.Flags().IntVarP(&days, "days", "x", 10, "show data from last x days")
}
func history(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
stats, err := i.History(days)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
printDailyStats(stats)
}
func printDailyStats(stats []*asynq.DailyStats) {
format := strings.Repeat("%v\t", 4) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, "Date (UTC)", "Processed", "Failed", "Error Rate")
fmt.Fprintf(tw, format, "----------", "---------", "------", "----------")
for _, s := range stats {
var errrate string
if s.Processed == 0 {
errrate = "N/A"
} else {
errrate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(tw, format, s.Date.Format("2006-01-02"), s.Processed, s.Failed, errrate)
}
tw.Flush()
}


@@ -1,58 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// killCmd represents the kill command
var killCmd = &cobra.Command{
Use: "kill [task key]",
Short: "Kills a task given an identifier",
Long: `Kill (asynq kill) will put a task in dead state given an identifier.
The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running "asynq ls" command.
Example: asynq kill r:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: kill,
}
func init() {
rootCmd.AddCommand(killCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// killCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// killCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func kill(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
err := i.KillTaskByKey(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully killed %v\n", args[0])
}


@@ -1,70 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var killallValidArgs = []string{"scheduled", "retry"}
// killallCmd represents the killall command
var killallCmd = &cobra.Command{
Use: "killall [state]",
Short: "Kills all tasks in the specified state",
Long: `Killall (asynq killall) will update all tasks from the specified state to dead state.
The argument should be either "scheduled" or "retry".
Example: asynq killall retry -> Update all retry tasks to dead tasks`,
ValidArgs: killallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: killall,
}
func init() {
rootCmd.AddCommand(killallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// killallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// killallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func killall(cmd *cobra.Command, args []string) {
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
var (
n int
err error
)
switch args[0] {
case "scheduled":
n, err = i.KillAllScheduledTasks()
case "retry":
n, err = i.KillAllRetryTasks()
default:
fmt.Printf("error: `asynq killall [state]` only accepts %v as the argument.\n", killallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully updated %d tasks to \"dead\" state\n", n)
}


@@ -1,190 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"strings"
"time"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead"}
// lsCmd represents the ls command
var lsCmd = &cobra.Command{
Use: "ls [state]",
Short: "Lists tasks in the specified state",
Long: `Ls (asynq ls) will list all tasks in the specified state in a table format.
The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead".
Example:
asynq ls dead -> Lists all tasks in dead state
Enqueued tasks requires a queue name after ":"
Example:
asynq ls enqueued:default -> List tasks from default queue
asynq ls enqueued:critical -> List tasks from critical queue
`,
Args: cobra.ExactValidArgs(1),
Run: ls,
}
// Flags
var pageSize int
var pageNum int
func init() {
rootCmd.AddCommand(lsCmd)
lsCmd.Flags().IntVar(&pageSize, "size", 30, "page size")
lsCmd.Flags().IntVar(&pageNum, "page", 0, "page number - zero indexed (default 0)")
}
func ls(cmd *cobra.Command, args []string) {
if pageSize < 0 {
fmt.Println("page size cannot be negative.")
os.Exit(1)
}
if pageNum < 0 {
fmt.Println("page number cannot be negative.")
os.Exit(1)
}
i := asynq.NewInspector(asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
parts := strings.Split(args[0], ":")
switch parts[0] {
case "enqueued":
if len(parts) != 2 {
fmt.Printf("error: Missing queue name\n`asynq ls enqueued:[queue name]`\n")
os.Exit(1)
}
listEnqueued(i, parts[1])
case "inprogress":
listInProgress(i)
case "scheduled":
listScheduled(i)
case "retry":
listRetry(i)
case "dead":
listDead(i)
default:
fmt.Printf("error: `asynq ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
os.Exit(1)
}
}
func listEnqueued(i *asynq.Inspector, qname string) {
tasks, err := i.ListEnqueuedTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No enqueued tasks in %q queue\n", qname)
return
}
cols := []string{"ID", "Type", "Payload", "Queue"}
printTable(cols, func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload, t.Queue)
}
})
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listInProgress(i *asynq.Inspector) {
tasks, err := i.ListInProgressTasks(asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No in-progress tasks")
return
}
cols := []string{"ID", "Type", "Payload"}
printTable(cols, func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
}
})
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listScheduled(i *asynq.Inspector) {
tasks, err := i.ListScheduledTasks(asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No scheduled tasks")
return
}
cols := []string{"Key", "Type", "Payload", "Process In", "Queue"}
printTable(cols, func(w io.Writer, tmpl string) {
for _, t := range tasks {
processIn := fmt.Sprintf("%.0f seconds",
t.NextEnqueueAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, processIn, t.Queue)
}
})
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listRetry(i *asynq.Inspector) {
tasks, err := i.ListRetryTasks(asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No retry tasks")
return
}
cols := []string{"Key", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry", "Queue"}
printTable(cols, func(w io.Writer, tmpl string) {
for _, t := range tasks {
var nextRetry string
if d := t.NextEnqueueAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, nextRetry, t.ErrorMsg, t.Retried, t.MaxRetry, t.Queue)
}
})
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}
func listDead(i *asynq.Inspector) {
tasks, err := i.ListDeadTasks(asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Println("No dead tasks")
return
}
cols := []string{"Key", "Type", "Payload", "Last Failed", "Last Error", "Queue"}
printTable(cols, func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, t.LastFailedAt, t.ErrorMsg, t.Queue)
}
})
fmt.Printf("\nShowing %d tasks from page %d\n", len(tasks), pageNum)
}


@@ -19,11 +19,10 @@ import (
"github.com/spf13/viper" "github.com/spf13/viper"
) )
// migrateCmd represents the migrate command
var migrateCmd = &cobra.Command{ var migrateCmd = &cobra.Command{
Use: "migrate", Use: "migrate",
Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq@%s", base.Version), Short: fmt.Sprintf("Migrate all tasks to be compatible with asynq v%s", base.Version),
Long: fmt.Sprintf("Migrate (asynq migrate) will convert all tasks in redis to be compatible with asynq@%s.", base.Version), Args: cobra.NoArgs,
Run: migrate, Run: migrate,
} }
@@ -37,28 +36,183 @@ func migrate(cmd *cobra.Command, args []string) {
DB: viper.GetInt("db"), DB: viper.GetInt("db"),
Password: viper.GetString("password"), Password: viper.GetString("password"),
}) })
r := createRDB()
lists := []string{base.InProgressQueue} /*** Migrate from 0.9 to 0.10, 0.11 compatible ***/
lists := []string{"asynq:in_progress"}
allQueues, err := c.SMembers(base.AllQueues).Result() allQueues, err := c.SMembers(base.AllQueues).Result()
if err != nil { if err != nil {
fmt.Printf("error: could not read all queues: %v", err) printError(fmt.Errorf("could not read all queues: %v", err))
os.Exit(1) os.Exit(1)
} }
lists = append(lists, allQueues...) lists = append(lists, allQueues...)
for _, key := range lists { for _, key := range lists {
if err := migrateList(c, key); err != nil { if err := migrateList(c, key); err != nil {
fmt.Printf("error: %v", err) printError(err)
os.Exit(1) os.Exit(1)
} }
} }
zsets := []string{base.ScheduledQueue, base.RetryQueue, base.DeadQueue} zsets := []string{"asynq:scheduled", "asynq:retry", "asynq:dead"}
for _, key := range zsets { for _, key := range zsets {
if err := migrateZSet(c, key); err != nil { if err := migrateZSet(c, key); err != nil {
fmt.Printf("error: %v", err) printError(err)
os.Exit(1) os.Exit(1)
} }
} }
/*** Migrate from 0.11 to 0.12 compatible ***/
if err := createBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := c.SAdd(base.AllQueues, qname).Err(); err != nil {
err = fmt.Errorf("could not add queue name %q to %q set: %v\n",
qname, base.AllQueues, err)
printError(err)
os.Exit(1)
}
}
if err := deleteBackup(c, base.AllQueues); err != nil {
printError(err)
os.Exit(1)
}
for _, qkey := range allQueues {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if exists := c.Exists(qkey).Val(); exists == 1 {
if err := c.Rename(qkey, base.QueueKey(qname)).Err(); err != nil {
printError(fmt.Errorf("could not rename key %q: %v\n", qkey, err))
os.Exit(1)
}
}
}
if err := partitionZSetMembersByQueue(c, "asynq:scheduled", base.ScheduledKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:retry", base.RetryKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:dead", base.DeadKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionZSetMembersByQueue(c, "asynq:deadlines", base.DeadlinesKey); err != nil {
printError(err)
os.Exit(1)
}
if err := partitionListMembersByQueue(c, "asynq:in_progress", base.ActiveKey); err != nil {
printError(err)
os.Exit(1)
}
paused, err := c.SMembers("asynq:paused").Result()
if err != nil {
printError(fmt.Errorf("command SMEMBERS asynq:paused failed: ", err))
os.Exit(1)
}
for _, qkey := range paused {
qname := strings.TrimPrefix(qkey, "asynq:queues:")
if err := r.Pause(qname); err != nil {
printError(err)
os.Exit(1)
}
}
if err := deleteKey(c, "asynq:paused"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:servers"); err != nil {
printError(err)
os.Exit(1)
}
if err := deleteKey(c, "asynq:workers"); err != nil {
printError(err)
os.Exit(1)
}
}
func backupKey(key string) string {
return fmt.Sprintf("%s:backup", key)
}
func createBackup(c *redis.Client, key string) error {
err := c.Rename(key, backupKey(key)).Err()
if err != nil {
return fmt.Errorf("could not rename key %q: %v", key, err)
}
return nil
}
func deleteBackup(c *redis.Client, key string) error {
return deleteKey(c, backupKey(key))
}
func deleteKey(c *redis.Client, key string) error {
exists := c.Exists(key).Val()
if exists == 0 {
// key does not exist
return nil
}
err := c.Del(key).Err()
if err != nil {
return fmt.Errorf("could not delete key %q: %v", key, err)
}
return nil
}
func printError(err error) {
fmt.Println(err)
fmt.Println()
fmt.Println("Migrate command error")
fmt.Println("Please file an issue on Github at https://github.com/hibiken/asynq/issues/new/choose")
}
func partitionZSetMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
zs, err := c.ZRangeWithScores(key, 0, -1).Result()
if err != nil {
return fmt.Errorf("command ZRANGE %s 0 -1 WITHSCORES failed: %v", key, err)
}
for _, z := range zs {
s := cast.ToString(z.Member)
msg, err := base.DecodeMessage(s)
if err != nil {
return fmt.Errorf("could not decode message from %q: %v", key, err)
}
if err := c.ZAdd(newKeyFunc(msg.Queue), &z).Err(); err != nil {
return fmt.Errorf("could not add %v to %q: %v", z, newKeyFunc(msg.Queue))
}
}
if err := deleteKey(c, key); err != nil {
return err
}
return nil
}
func partitionListMembersByQueue(c *redis.Client, key string, newKeyFunc func(string) string) error {
data, err := c.LRange(key, 0, -1).Result()
if err != nil {
return fmt.Errorf("command LRANGE %s 0 -1 failed: %v", key, err)
}
for _, s := range data {
msg, err := base.DecodeMessage(s)
if err != nil {
return fmt.Errorf("could not decode message from %q: %v", key, err)
}
if err := c.LPush(newKeyFunc(msg.Queue), s).Err(); err != nil {
return fmt.Errorf("could not add %v to %q: %v", s, newKeyFunc(msg.Queue))
}
}
if err := deleteKey(c, key); err != nil {
return err
}
return nil
}
type oldTaskMessage struct {


@@ -1,47 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// pauseCmd represents the pause command
var pauseCmd = &cobra.Command{
Use: "pause [queue name]",
Short: "Pauses the specified queue",
Long: `Pause (asynq pause) will pause the specified queue.
Asynq servers will not process tasks from paused queues.
Use the "unpause" command to resume a paused queue.
Example: asynq pause default -> Pause the "default" queue`,
Args: cobra.ExactValidArgs(1),
Run: pause,
}
func init() {
rootCmd.AddCommand(pauseCmd)
}
func pause(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.Pause(args[0])
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully paused queue %q\n", args[0])
}

tools/asynq/cmd/queue.go Normal file

@@ -0,0 +1,256 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"github.com/fatih/color"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
)
const separator = "================================================="
func init() {
rootCmd.AddCommand(queueCmd)
queueCmd.AddCommand(queueListCmd)
queueCmd.AddCommand(queueInspectCmd)
queueCmd.AddCommand(queueHistoryCmd)
queueHistoryCmd.Flags().IntP("days", "x", 10, "show data from last x days")
queueCmd.AddCommand(queuePauseCmd)
queueCmd.AddCommand(queueUnpauseCmd)
queueCmd.AddCommand(queueRemoveCmd)
queueRemoveCmd.Flags().BoolP("force", "f", false, "remove the queue regardless of its size")
}
var queueCmd = &cobra.Command{
Use: "queue",
Short: "Manage queues",
}
var queueListCmd = &cobra.Command{
Use: "ls",
Short: "List queues",
// TODO: Use RunE instead?
Run: queueList,
}
var queueInspectCmd = &cobra.Command{
Use: "inspect QUEUE [QUEUE...]",
Short: "Display detailed information on one or more queues",
Args: cobra.MinimumNArgs(1),
// TODO: Use RunE instead?
Run: queueInspect,
}
var queueHistoryCmd = &cobra.Command{
Use: "history QUEUE [QUEUE...]",
Short: "Display historical aggregate data from one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueHistory,
}
var queuePauseCmd = &cobra.Command{
Use: "pause QUEUE [QUEUE...]",
Short: "Pause one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queuePause,
}
var queueUnpauseCmd = &cobra.Command{
Use: "unpause QUEUE [QUEUE...]",
Short: "Unpause one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueUnpause,
}
var queueRemoveCmd = &cobra.Command{
Use: "rm QUEUE [QUEUE...]",
Short: "Remove one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueRemove,
}
func queueList(cmd *cobra.Command, args []string) {
type queueInfo struct {
name string
keyslot int64
nodes []asynq.ClusterNode
}
inspector := createInspector()
queues, err := inspector.Queues()
if err != nil {
fmt.Printf("error: Could not fetch list of queues: %v\n", err)
os.Exit(1)
}
var qs []queueInfo
for _, qname := range queues {
q := queueInfo{name: qname}
if useRedisCluster {
keyslot, err := inspector.ClusterKeySlot(qname)
if err != nil {
fmt.Errorf("error: Could not get cluster keyslot for %q\n", qname)
continue
}
q.keyslot = keyslot
nodes, err := inspector.ClusterNodes(qname)
if err != nil {
fmt.Errorf("error: Could not get cluster nodes for %q\n", qname)
continue
}
q.nodes = nodes
}
qs = append(qs, q)
}
if useRedisCluster {
printTable(
[]string{"Queue", "Cluster KeySlot", "Cluster Nodes"},
func(w io.Writer, tmpl string) {
for _, q := range qs {
fmt.Fprintf(w, tmpl, q.name, q.keyslot, q.nodes)
}
},
)
} else {
for _, q := range qs {
fmt.Println(q.name)
}
}
}
func queueInspect(cmd *cobra.Command, args []string) {
inspector := createInspector()
for i, qname := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Println()
stats, err := inspector.CurrentStats(qname)
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
printQueueStats(stats)
}
}
func printQueueStats(s *asynq.QueueStats) {
bold := color.New(color.Bold)
bold.Println("Queue Info")
fmt.Printf("Name: %s\n", s.Queue)
fmt.Printf("Size: %d\n", s.Size)
fmt.Printf("Paused: %t\n\n", s.Paused)
bold.Println("Task Count by State")
printTable(
[]string{"active", "pending", "scheduled", "retry", "dead"},
func(w io.Writer, tmpl string) {
fmt.Fprintf(w, tmpl, s.Active, s.Pending, s.Scheduled, s.Retry, s.Dead)
},
)
fmt.Println()
bold.Printf("Daily Stats %s UTC\n", s.Timestamp.UTC().Format("2006-01-02"))
printTable(
[]string{"processed", "failed", "error rate"},
func(w io.Writer, tmpl string) {
var errRate string
if s.Processed == 0 {
errRate = "N/A"
} else {
errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(w, tmpl, s.Processed, s.Failed, errRate)
},
)
}
func queueHistory(cmd *cobra.Command, args []string) {
days, err := cmd.Flags().GetInt("days")
if err != nil {
fmt.Printf("error: Internal error: %v\n", err)
os.Exit(1)
}
inspector := createInspector()
for i, qname := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Printf("\nQueue: %s\n\n", qname)
stats, err := inspector.History(qname, days)
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
printDailyStats(stats)
}
}
func printDailyStats(stats []*asynq.DailyStats) {
printTable(
[]string{"date (UTC)", "processed", "failed", "error rate"},
func(w io.Writer, tmpl string) {
for _, s := range stats {
var errRate string
if s.Processed == 0 {
errRate = "N/A"
} else {
errRate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(w, tmpl, s.Date.Format("2006-01-02"), s.Processed, s.Failed, errRate)
}
},
)
}
func queuePause(cmd *cobra.Command, args []string) {
inspector := createInspector()
for _, qname := range args {
err := inspector.PauseQueue(qname)
if err != nil {
fmt.Println(err)
continue
}
fmt.Printf("Successfully paused queue %q\n", qname)
}
}
func queueUnpause(cmd *cobra.Command, args []string) {
inspector := createInspector()
for _, qname := range args {
err := inspector.UnpauseQueue(qname)
if err != nil {
fmt.Println(err)
continue
}
fmt.Printf("Successfully unpaused queue %q\n", qname)
}
}
func queueRemove(cmd *cobra.Command, args []string) {
// TODO: Use inspector once RemoveQueue becomes a public API.
force, err := cmd.Flags().GetBool("force")
if err != nil {
fmt.Printf("error: Internal error: %v\n", err)
os.Exit(1)
}
r := createRDB()
for _, qname := range args {
err = r.RemoveQueue(qname, force)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq queue rm --force %s'\n", err, qname)
continue
}
fmt.Printf("error: %v\n", err)
continue
}
fmt.Printf("Successfully removed queue %q\n", qname)
}
}
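The same per-day numbers printed by "asynq queue history" are available programmatically through Inspector.History, which the command wraps. A short sketch using the ten-day default window; the queue name is whatever the caller passes in:

package example

import (
    "fmt"

    "github.com/hibiken/asynq"
)

// reportErrorRates prints one line per day using the same error-rate formula
// as printDailyStats above.
func reportErrorRates(inspector *asynq.Inspector, qname string) error {
    stats, err := inspector.History(qname, 10)
    if err != nil {
        return err
    }
    for _, s := range stats {
        rate := "N/A"
        if s.Processed > 0 {
            rate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
        }
        fmt.Printf("%s processed=%d failed=%d error rate=%s\n",
            s.Date.Format("2006-01-02"), s.Processed, s.Failed, rate)
    }
    return nil
}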


@@ -1,54 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// rmqCmd represents the rmq command
var rmqCmd = &cobra.Command{
Use: "rmq [queue name]",
Short: "Removes the specified queue",
Long: `Rmq (asynq rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty.
Use --force option to override this behavior.
Example: asynq rmq low -> Removes "low" queue`,
Args: cobra.ExactValidArgs(1),
Run: rmq,
}
var rmqForce bool
func init() {
rootCmd.AddCommand(rmqCmd)
rmqCmd.Flags().BoolVarP(&rmqForce, "force", "f", false, "remove the queue regardless of its size")
}
func rmq(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
err := r.RemoveQueue(args[0], rmqForce)
if err != nil {
if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq rmq --force %s'\n", err, args[0])
os.Exit(1)
}
fmt.Printf("error: %v", err)
os.Exit(1)
}
fmt.Printf("Successfully removed queue %q\n", args[0])
}


@@ -5,13 +5,17 @@
package cmd
import (
+"crypto/tls"
"fmt"
"io"
"os"
"strings"
"text/tabwriter"
+"github.com/go-redis/redis/v7"
+"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/base"
+"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
homedir "github.com/mitchellh/go-homedir"
@@ -20,10 +24,16 @@ import (
var cfgFile string
-// Flags
-var uri string
-var db int
-var password string
+// Global flag variables
+var (
+uri string
+db int
+password string
+useRedisCluster bool
+clusterAddrs string
+tlsServerName string
+)
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
@@ -62,9 +72,19 @@ func init() {
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
+rootCmd.PersistentFlags().BoolVar(&useRedisCluster, "cluster", false, "connect to redis cluster")
+rootCmd.PersistentFlags().StringVar(&clusterAddrs, "cluster_addrs",
+"127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002,127.0.0.1:7003,127.0.0.1:7004,127.0.0.1:7005",
+"list of comma-separated redis server addresses")
+rootCmd.PersistentFlags().StringVar(&tlsServerName, "tls_server",
+"", "server name for TLS validation")
+// Bind flags with config.
viper.BindPFlag("uri", rootCmd.PersistentFlags().Lookup("uri"))
viper.BindPFlag("db", rootCmd.PersistentFlags().Lookup("db"))
viper.BindPFlag("password", rootCmd.PersistentFlags().Lookup("password"))
+viper.BindPFlag("cluster", rootCmd.PersistentFlags().Lookup("cluster"))
+viper.BindPFlag("cluster_addrs", rootCmd.PersistentFlags().Lookup("cluster_addrs"))
+viper.BindPFlag("tls_server", rootCmd.PersistentFlags().Lookup("tls_server"))
}
// initConfig reads in config file and ENV variables if set.
@@ -93,6 +113,56 @@ func initConfig() {
}
}
// createRDB creates a RDB instance using flag values and returns it.
func createRDB() *rdb.RDB {
var c redis.UniversalClient
if useRedisCluster {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
c = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
})
} else {
c = redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
})
}
return rdb.NewRDB(c)
}
// createInspector creates an Inspector instance using flag values and returns it.
func createInspector() *asynq.Inspector {
var connOpt asynq.RedisConnOpt
if useRedisCluster {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
connOpt = asynq.RedisClusterClientOpt{
Addrs: addrs,
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
}
} else {
connOpt = asynq.RedisClientOpt{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
TLSConfig: getTLSConfig(),
}
}
return asynq.NewInspector(connOpt)
}
func getTLSConfig() *tls.Config {
tlsServer := viper.GetString("tls_server")
if tlsServer == "" {
return nil
}
return &tls.Config{ServerName: tlsServer}
}
// printTable is a helper function to print data in table format.
//
// cols is a list of headers and printRow specifies how to print rows.
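The printTable implementation itself is not part of this diff; for reference, here is a minimal sketch of a helper honoring that contract, reusing the tabwriter imports already present in root.go and the toInterfaceSlice helper from stats.go. This is a hypothetical reconstruction, not the actual code:

func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
    format := strings.Repeat("%v\t", len(cols)) + "\n"
    var seps []string
    for _, name := range cols {
        seps = append(seps, strings.Repeat("-", len(name)))
    }
    tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
    fmt.Fprintf(tw, format, toInterfaceSlice(cols)...)
    fmt.Fprintf(tw, format, toInterfaceSlice(seps)...)
    printRows(tw, format)
    tw.Flush()
}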


@@ -12,18 +12,24 @@ import (
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper"
) )
// serversCmd represents the servers command func init() {
var serversCmd = &cobra.Command{ rootCmd.AddCommand(serverCmd)
Use: "servers", serverCmd.AddCommand(serverListCmd)
Short: "Shows all running worker servers", }
Long: `Servers (asynq servers) will show all running worker servers
pulling tasks from the specified redis instance. var serverCmd = &cobra.Command{
Use: "server",
Short: "Manage servers",
}
var serverListCmd = &cobra.Command{
Use: "ls",
Short: "List servers",
Long: `Server list (asynq server ls) shows all running worker servers
pulling tasks from the given redis instance.
The command shows the following for each server: The command shows the following for each server:
* Host and PID of the process in which the server is running * Host and PID of the process in which the server is running
@@ -34,20 +40,11 @@ The command shows the following for each server:
A "running" server is pulling tasks from queues and processing them. A "running" server is pulling tasks from queues and processing them.
A "quiet" server is no longer pulling new tasks from queues`, A "quiet" server is no longer pulling new tasks from queues`,
Args: cobra.NoArgs, Run: serverList,
Run: servers,
} }
func init() { func serverList(cmd *cobra.Command, args []string) {
rootCmd.AddCommand(serversCmd) r := createRDB()
}
func servers(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
servers, err := r.ListServers() servers, err := r.ListServers()
if err != nil { if err != nil {
@@ -81,12 +78,6 @@ func servers(cmd *cobra.Command, args []string) {
printTable(cols, printRows) printTable(cols, printRows)
} }
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}
func formatQueues(qmap map[string]int) string { func formatQueues(qmap map[string]int) string {
// sort queues by priority and name // sort queues by priority and name
type queue struct { type queue struct {
@@ -116,3 +107,9 @@ func formatQueues(qmap map[string]int) string {
} }
return b.String() return b.String()
} }
// timeAgo takes a time and returns a string of the format "<duration> ago".
func timeAgo(since time.Time) string {
d := time.Since(since).Round(time.Second)
return fmt.Sprintf("%v ago", d)
}


@@ -6,15 +6,16 @@ package cmd
import (
"fmt"
+"io"
"os"
"strconv"
"strings"
"text/tabwriter"
+"time"
-"github.com/go-redis/redis/v7"
+"github.com/fatih/color"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
-"github.com/spf13/viper"
)
// statsCmd represents the stats command
@@ -51,57 +52,93 @@ func init() {
// statsCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
+type AggregateStats struct {
+Active int
+Pending int
+Scheduled int
+Retry int
+Dead int
+Processed int
+Failed int
+Timestamp time.Time
+}
func stats(cmd *cobra.Command, args []string) {
-c := redis.NewClient(&redis.Options{
-Addr: viper.GetString("uri"),
-DB: viper.GetInt("db"),
-Password: viper.GetString("password"),
-})
-r := rdb.NewRDB(c)
-stats, err := r.CurrentStats()
+r := createRDB()
+queues, err := r.AllQueues()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
-info, err := r.RedisInfo()
+var aggStats AggregateStats
+var stats []*rdb.Stats
+for _, qname := range queues {
+s, err := r.CurrentStats(qname)
+if err != nil {
+fmt.Println(err)
+os.Exit(1)
+}
+aggStats.Active += s.Active
+aggStats.Pending += s.Pending
+aggStats.Scheduled += s.Scheduled
+aggStats.Retry += s.Retry
+aggStats.Dead += s.Dead
+aggStats.Processed += s.Processed
+aggStats.Failed += s.Failed
+aggStats.Timestamp = s.Timestamp
+stats = append(stats, s)
+}
+var info map[string]string
+if useRedisCluster {
+info, err = r.RedisClusterInfo()
+} else {
+info, err = r.RedisInfo()
+}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
-fmt.Println("STATES")
-printStates(stats)
+bold := color.New(color.Bold)
+bold.Println("Task Count by State")
+printStatsByState(&aggStats)
fmt.Println()
-fmt.Println("QUEUES")
-printQueues(stats.Queues)
+bold.Println("Task Count by Queue")
+printStatsByQueue(stats)
fmt.Println()
-fmt.Printf("STATS FOR %s UTC\n", stats.Timestamp.UTC().Format("2006-01-02"))
-printStats(stats)
+bold.Printf("Daily Stats %s UTC\n", aggStats.Timestamp.UTC().Format("2006-01-02"))
+printSuccessFailureStats(&aggStats)
fmt.Println()
-fmt.Println("REDIS INFO")
-printInfo(info)
+if useRedisCluster {
+bold.Println("Redis Cluster Info")
+printClusterInfo(info)
+} else {
+bold.Println("Redis Info")
+printInfo(info)
+}
fmt.Println()
}
-func printStates(s *rdb.Stats) {
+func printStatsByState(s *AggregateStats) {
format := strings.Repeat("%v\t", 5) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "InProgress", "Enqueued", "Scheduled", "Retry", "Dead")
+fmt.Fprintf(tw, format, "active", "pending", "scheduled", "retry", "dead")
fmt.Fprintf(tw, format, "----------", "--------", "---------", "-----", "----")
-fmt.Fprintf(tw, format, s.InProgress, s.Enqueued, s.Scheduled, s.Retry, s.Dead)
+fmt.Fprintf(tw, format, s.Active, s.Pending, s.Scheduled, s.Retry, s.Dead)
tw.Flush()
}
-func printQueues(queues []*rdb.Queue) {
+func printStatsByQueue(stats []*rdb.Stats) {
var headers, seps, counts []string
-for _, q := range queues {
-title := queueTitle(q)
+for _, s := range stats {
+title := queueTitle(s)
headers = append(headers, title)
seps = append(seps, strings.Repeat("-", len(title)))
-counts = append(counts, strconv.Itoa(q.Size))
+counts = append(counts, strconv.Itoa(s.Size))
}
format := strings.Repeat("%v\t", len(headers)) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
@@ -111,19 +148,19 @@ func printQueues(queues []*rdb.Queue) {
tw.Flush()
}
-func queueTitle(q *rdb.Queue) string {
+func queueTitle(s *rdb.Stats) string {
var b strings.Builder
-b.WriteString(strings.Title(q.Name))
+b.WriteString(s.Queue)
-if q.Paused {
+if s.Paused {
-b.WriteString(" (Paused)")
+b.WriteString(" (paused)")
}
return b.String()
}
-func printStats(s *rdb.Stats) {
+func printSuccessFailureStats(s *AggregateStats) {
format := strings.Repeat("%v\t", 3) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "Processed", "Failed", "Error Rate")
+fmt.Fprintf(tw, format, "processed", "failed", "error rate")
fmt.Fprintf(tw, format, "---------", "------", "----------")
var errrate string
if s.Processed == 0 {
@@ -138,7 +175,7 @@ func printStats(s *rdb.Stats) {
func printInfo(info map[string]string) {
format := strings.Repeat("%v\t", 5) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
-fmt.Fprintf(tw, format, "Version", "Uptime", "Connections", "Memory Usage", "Peak Memory Usage")
+fmt.Fprintf(tw, format, "version", "uptime", "connections", "memory usage", "peak memory usage")
fmt.Fprintf(tw, format, "-------", "------", "-----------", "------------", "-----------------")
fmt.Fprintf(tw, format,
info["redis_version"],
@@ -150,6 +187,19 @@ func printInfo(info map[string]string) {
tw.Flush()
}
+func printClusterInfo(info map[string]string) {
+printTable(
+[]string{"State", "Known Nodes", "Cluster Size"},
+func(w io.Writer, tmpl string) {
+fmt.Fprintf(w, tmpl,
+strings.ToUpper(info["cluster_state"]),
+info["cluster_known_nodes"],
+info["cluster_size"],
+)
+},
+)
+}
func toInterfaceSlice(strs []string) []interface{} {
var res []interface{}
for _, s := range strs {

tools/asynq/cmd/task.go Normal file

@@ -0,0 +1,463 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"time"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(taskCmd)
taskCmd.AddCommand(taskListCmd)
taskListCmd.Flags().StringP("queue", "q", "", "queue to inspect")
taskListCmd.Flags().StringP("state", "s", "", "state of the tasks to inspect")
taskListCmd.Flags().Int("page", 1, "page number")
taskListCmd.Flags().Int("size", 30, "page size")
taskListCmd.MarkFlagRequired("queue")
taskListCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskCancelCmd)
taskCmd.AddCommand(taskKillCmd)
taskKillCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskKillCmd.Flags().StringP("key", "k", "", "key of the task")
taskKillCmd.MarkFlagRequired("queue")
taskKillCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskDeleteCmd)
taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskDeleteCmd.Flags().StringP("key", "k", "", "key of the task")
taskDeleteCmd.MarkFlagRequired("queue")
taskDeleteCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskRunCmd)
taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs")
taskRunCmd.Flags().StringP("key", "k", "", "key of the task")
taskRunCmd.MarkFlagRequired("queue")
taskRunCmd.MarkFlagRequired("key")
taskCmd.AddCommand(taskKillAllCmd)
taskKillAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskKillAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskKillAllCmd.MarkFlagRequired("queue")
taskKillAllCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskDeleteAllCmd)
taskDeleteAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskDeleteAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskDeleteAllCmd.MarkFlagRequired("queue")
taskDeleteAllCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskRunAllCmd)
taskRunAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong")
taskRunAllCmd.Flags().StringP("state", "s", "", "state of the tasks")
taskRunAllCmd.MarkFlagRequired("queue")
taskRunAllCmd.MarkFlagRequired("state")
}
var taskCmd = &cobra.Command{
Use: "task",
Short: "Manage tasks",
}
var taskListCmd = &cobra.Command{
Use: "ls --queue=QUEUE --state=STATE",
Short: "List tasks",
Long: `List tasks of the given state from the specified queue.
The value for the state flag should be one of:
- active
- pending
- scheduled
- retry
- dead
The list operation paginates the result set.
By default, the command fetches the first 30 tasks.
Use --page and --size flags to specify the page number and size.
Example:
To list pending tasks from "default" queue, run
asynq task ls --queue=default --state=pending
To list the tasks from the second page, run
asynq task ls --queue=default --state=pending --page=2`,
Run: taskList,
}
var taskCancelCmd = &cobra.Command{
Use: "cancel TASK_ID [TASK_ID...]",
Short: "Cancel one or more active tasks",
Args: cobra.MinimumNArgs(1),
Run: taskCancel,
}
var taskKillCmd = &cobra.Command{
Use: "kill --queue=QUEUE --key=KEY",
Short: "Kill a task with the given key",
Args: cobra.NoArgs,
Run: taskKill,
}
var taskDeleteCmd = &cobra.Command{
Use: "delete --queue=QUEUE --key=KEY",
Short: "Delete a task with the given key",
Args: cobra.NoArgs,
Run: taskDelete,
}
var taskRunCmd = &cobra.Command{
Use: "run --queue=QUEUE --key=KEY",
Short: "Run a task with the given key",
Args: cobra.NoArgs,
Run: taskRun,
}
var taskKillAllCmd = &cobra.Command{
Use: "kill-all --queue=QUEUE --state=STATE",
Short: "Kill all tasks in the given state",
Args: cobra.NoArgs,
Run: taskKillAll,
}
var taskDeleteAllCmd = &cobra.Command{
Use: "delete-all --queue=QUEUE --key=KEY",
Short: "Delete all tasks in the given state",
Args: cobra.NoArgs,
Run: taskDeleteAll,
}
var taskRunAllCmd = &cobra.Command{
Use: "run-all --queue=QUEUE --key=KEY",
Short: "Run all tasks in the given state",
Args: cobra.NoArgs,
Run: taskRunAll,
}
func taskList(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageNum, err := cmd.Flags().GetInt("page")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageSize, err := cmd.Flags().GetInt("size")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
switch state {
case "active":
listActiveTasks(qname, pageNum, pageSize)
case "pending":
listPendingTasks(qname, pageNum, pageSize)
case "scheduled":
listScheduledTasks(qname, pageNum, pageSize)
case "retry":
listRetryTasks(qname, pageNum, pageSize)
case "dead":
listDeadTasks(qname, pageNum, pageSize)
default:
fmt.Printf("error: state=%q is not supported\n", state)
os.Exit(1)
}
}
func listActiveTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListActiveTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No active tasks in %q queue\n", qname)
return
}
printTable(
[]string{"ID", "Type", "Payload"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
}
},
)
}
func listPendingTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListPendingTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No pending tasks in %q queue\n", qname)
return
}
printTable(
[]string{"ID", "Type", "Payload"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, t.Payload)
}
},
)
}
func listScheduledTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListScheduledTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No scheduled tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Process In"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
processIn := fmt.Sprintf("%.0f seconds",
t.NextProcessAt.Sub(time.Now()).Seconds())
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, processIn)
}
},
)
}
func listRetryTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListRetryTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No retry tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Next Retry", "Last Error", "Retried", "Max Retry"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
var nextRetry string
if d := t.NextProcessAt.Sub(time.Now()); d > 0 {
nextRetry = fmt.Sprintf("in %v", d.Round(time.Second))
} else {
nextRetry = "right now"
}
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, nextRetry, t.ErrorMsg, t.Retried, t.MaxRetry)
}
},
)
}
func listDeadTasks(qname string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListDeadTasks(qname, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No dead tasks in %q queue\n", qname)
return
}
printTable(
[]string{"Key", "Type", "Payload", "Last Failed", "Last Error"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.Key(), t.Type, t.Payload, t.LastFailedAt, t.ErrorMsg)
}
})
}
func taskCancel(cmd *cobra.Command, args []string) {
r := createRDB()
for _, id := range args {
err := r.PublishCancelation(id)
if err != nil {
fmt.Printf("error: could not send cancelation signal: %v\n", err)
continue
}
fmt.Printf("Sent cancelation signal for task %s\n", id)
}
}
func taskKill(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.KillTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task transitioned to dead state")
}
func taskDelete(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.DeleteTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task deleted")
}
func taskRun(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
key, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
err = i.RunTaskByKey(qname, key)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Println("task transitioned to pending state")
}
func taskKillAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "scheduled":
n, err = i.KillAllScheduledTasks(qname)
case "retry":
n, err = i.KillAllRetryTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks transitioned to dead state\n", n)
}
func taskDeleteAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "scheduled":
n, err = i.DeleteAllScheduledTasks(qname)
case "retry":
n, err = i.DeleteAllRetryTasks(qname)
case "dead":
n, err = i.DeleteAllDeadTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks deleted\n", n)
}
func taskRunAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
state, err := cmd.Flags().GetString("state")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
i := createInspector()
var n int
switch state {
case "scheduled":
n, err = i.RunAllScheduledTasks(qname)
case "retry":
n, err = i.RunAllRetryTasks(qname)
case "dead":
n, err = i.RunAllDeadTasks(qname)
default:
fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1)
}
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("%d tasks transitioned to pending state\n", n)
}
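The key-based subcommands above compose naturally with the list command: a key printed by "asynq task ls" can be fed back to run, kill, or delete. A short sketch of the programmatic equivalent of re-running every dead task in one queue; the page size mirrors the command default and the queue name is whatever the caller supplies:

package example

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
)

func retryAllDead(inspector *asynq.Inspector, qname string) error {
    tasks, err := inspector.ListDeadTasks(qname, asynq.PageSize(30), asynq.Page(1))
    if err != nil {
        return err
    }
    for _, t := range tasks {
        // RunTaskByKey transitions the task back to the pending state.
        if err := inspector.RunTaskByKey(qname, t.Key()); err != nil {
            log.Printf("could not run task %s: %v", t.Key(), err)
            continue
        }
        fmt.Printf("moved %s back to pending\n", t.Key())
    }
    return nil
}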


@@ -1,46 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package cmd

import (
	"fmt"
	"os"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// unpauseCmd represents the unpause command
var unpauseCmd = &cobra.Command{
	Use:   "unpause [queue name]",
	Short: "Unpauses the specified queue",
	Long: `Unpause (asynq unpause) will unpause the specified queue.
Asynq servers will process tasks from unpaused/resumed queues.

Example: asynq unpause default -> Resume the "default" queue`,
	Args: cobra.ExactValidArgs(1),
	Run:  unpause,
}

func init() {
	rootCmd.AddCommand(unpauseCmd)
}

func unpause(cmd *cobra.Command, args []string) {
	c := redis.NewClient(&redis.Options{
		Addr:     viper.GetString("uri"),
		DB:       viper.GetInt("db"),
		Password: viper.GetString("password"),
	})
	r := rdb.NewRDB(c)
	err := r.Unpause(args[0])
	if err != nil {
		fmt.Printf("error: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Successfully resumed queue %q\n", args[0])
}

View File

@@ -1,75 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package cmd

import (
	"fmt"
	"io"
	"os"
	"sort"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

// workersCmd represents the workers command
var workersCmd = &cobra.Command{
	Use:   "workers",
	Short: "Shows all running workers information",
	Long: `Workers (asynq workers) will show all running workers information.

The command shows the following for each worker:
* Process in which the worker is running
* ID of the task worker is processing
* Type of the task worker is processing
* Payload of the task worker is processing
* Queue that the task was pulled from.
* Time the worker started processing the task`,
	Args: cobra.NoArgs,
	Run:  workers,
}

func init() {
	rootCmd.AddCommand(workersCmd)
}

func workers(cmd *cobra.Command, args []string) {
	r := rdb.NewRDB(redis.NewClient(&redis.Options{
		Addr:     viper.GetString("uri"),
		DB:       viper.GetInt("db"),
		Password: viper.GetString("password"),
	}))

	workers, err := r.ListWorkers()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	if len(workers) == 0 {
		fmt.Println("No workers")
		return
	}

	// sort by started timestamp or ID.
	sort.Slice(workers, func(i, j int) bool {
		x, y := workers[i], workers[j]
		if x.Started != y.Started {
			return x.Started.Before(y.Started)
		}
		return x.ID < y.ID
	})

	cols := []string{"Process", "ID", "Type", "Payload", "Queue", "Started"}
	printRows := func(w io.Writer, tmpl string) {
		for _, wk := range workers {
			fmt.Fprintf(w, tmpl,
				fmt.Sprintf("%s:%d", wk.Host, wk.PID), wk.ID, wk.Type, wk.Payload, wk.Queue, timeAgo(wk.Started))
		}
	}
	printTable(cols, printRows)
}
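The workers handler builds a printRows closure and hands it to printTable, which is defined elsewhere in the cmd package and is not part of this hunk. Below is a minimal sketch of a compatible helper, assuming text/tabwriter; the details are illustrative rather than the actual asynq implementation.

package cmd

import (
	"fmt"
	"io"
	"os"
	"strings"
	"text/tabwriter"
)

// printTable writes a header row, a dashed separator row, and then the rows
// produced by printRows, using one %v verb per column. Sketch only; the real
// helper in the asynq CLI may differ.
func printTable(cols []string, printRows func(w io.Writer, tmpl string)) {
	format := strings.Repeat("%v\t", len(cols)) + "\n"
	tw := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)

	var header, sep []interface{}
	for _, c := range cols {
		header = append(header, c)
		sep = append(sep, strings.Repeat("-", len(c)))
	}
	fmt.Fprintf(tw, format, header...)
	fmt.Fprintf(tw, format, sep...)
	printRows(tw, format)
	tw.Flush()
}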

View File

@@ -3,13 +3,17 @@ module github.com/hibiken/asynq/tools
 go 1.13

 require (
-	github.com/go-redis/redis/v7 v7.2.0
+	github.com/coreos/go-etcd v2.0.0+incompatible // indirect
+	github.com/cpuguy83/go-md2man v1.0.10 // indirect
+	github.com/fatih/color v1.9.0
+	github.com/go-redis/redis/v7 v7.4.0
 	github.com/google/uuid v1.1.1
 	github.com/hibiken/asynq v0.4.0
 	github.com/mitchellh/go-homedir v1.1.0
 	github.com/spf13/cast v1.3.1
-	github.com/spf13/cobra v0.0.5
+	github.com/spf13/cobra v1.0.0
 	github.com/spf13/viper v1.6.2
+	github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8 // indirect
 )

 replace github.com/hibiken/asynq => ./..

View File

@@ -16,10 +16,13 @@ github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3Ee
 github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
 github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
 github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
 github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
+github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
 github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
 github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -28,6 +31,8 @@ github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9
 github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
 github.com/go-redis/redis/v7 v7.2.0 h1:CrCexy/jYWZjW0AyVoHlcJUeZN19VWlbepTh1Vq6dJs=
 github.com/go-redis/redis/v7 v7.2.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
+github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
+github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
 github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
 github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
 github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
@@ -72,6 +77,11 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
 github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
+github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
+github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
+github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
+github.com/mattn/go-isatty v0.0.11 h1:FxPOTFNqGkuDUGi3H/qkUbQO4ZiBa2brKq5r0l8TGeM=
+github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
 github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
 github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
 github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -98,8 +108,12 @@ github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
 github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
 github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
 github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
+github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
 github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
 github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
+github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
 github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
 github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
@@ -114,11 +128,14 @@ github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
 github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
 github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
 github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
+github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
 github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
 github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
 github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
 github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
 github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
+github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
 github.com/spf13/viper v1.6.2 h1:7aKfF+e8/k68gda3LOjo5RxiUqddoFxVq4BKBPrxk5E=
 github.com/spf13/viper v1.6.2/go.mod h1:t3iDnF5Jlj76alVNuyFBk5oUMCvsrkbvZK0WQdfDi5k=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -160,7 +177,9 @@ golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5h
 golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e h1:9vRrk9YW2BTzLP0VCB9ZDjU4cPqkg+IDWL7XgxA1yxQ=
 golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=