mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 21:26:14 +08:00

Compare commits

..

41 Commits

Author SHA1 Message Date
Ken Hibino  9884d5f2fa  v0.8.2  2020-05-03 16:55:34 -07:00
Ken Hibino  826f1ecff4  Update docs  2020-05-03 16:54:39 -07:00
Ken Hibino  24f2b64c6c  Make sure to invoke CancelFunc in all cases  2020-05-03 15:58:23 -07:00
Ken Hibino  1c1474c55c  Add tests to simulate cases where server cannot talk to redis  2020-05-02 07:05:26 -07:00
Ken Hibino  5161b9368a  Clean up tests  2020-05-02 07:05:26 -07:00
Ken Hibino  0c998a8e17  Add test for signal handling  2020-04-28 06:56:05 -07:00
Ken Hibino  49160f2536  v0.8.1  2020-04-27 06:49:12 -07:00
Ken Hibino  e33d297d8e  Add SetDefaultOptions method to Client  2020-04-27 06:45:13 -07:00
Ken Hibino  eb8ced6bdd  Add ParseRedisURI helper function  2020-04-25 13:06:20 -07:00
Ken Hibino  789a9fd711  Update readme  2020-04-20 07:52:26 -07:00
Ken Hibino  5924cdac33  Add example tests  2020-04-19 11:36:43 -07:00
Ken Hibino  442c9275a0  v0.8.0  2020-04-19 09:08:20 -07:00
Ken Hibino  a0865df33c  Change default concurrency to the number of CPUs  2020-04-19 08:51:17 -07:00
Ken Hibino  431a96a1f7  Update changelog  2020-04-19 08:51:17 -07:00
Ken Hibino  74e5582cfc  Update readme  2020-04-19 08:51:17 -07:00
Ken Hibino  bf542a781c  Add failure test for heartbeater  2020-04-19 08:51:17 -07:00
Ken Hibino  7c7f8e5f30  Move Broker interface to base package  2020-04-19 08:51:17 -07:00
Ken Hibino  46ab4417dd  Add test to simulate situation where redis is down  2020-04-19 08:51:17 -07:00
Ken Hibino  f8a94fb839  Define broker interface  2020-04-19 08:51:17 -07:00
Ken Hibino  42453280f4  Fix subscriber to not panic when it cannot establish pubsub channel on startup  2020-04-19 08:51:17 -07:00
Ken Hibino  4ec2dc9e47  Minor reorganization in tests  2020-04-19 08:51:17 -07:00
Ken Hibino  45933eb6b0  Reword doc comments  2020-04-19 08:51:17 -07:00
Ken Hibino  4df372b369  Allow user to configure shutdown timeout  2020-04-19 08:51:17 -07:00
Ken Hibino  c688b8f4f9  Fix test for base package  2020-04-19 08:51:17 -07:00
Ken Hibino  eef2f5f3cb  Add test cases for server error  2020-04-19 08:51:17 -07:00
Ken Hibino  239ef27a6e  Update doc comments  2020-04-19 08:51:17 -07:00
Ken Hibino  24da281aa7  Update docs with new APIs  2020-04-19 08:51:17 -07:00
Ken Hibino  b086e88a47  Rename ps command to servers  2020-04-19 08:51:17 -07:00
Ken Hibino  cf61911a49  Update all reference to asynqmon to Asynq CLI  2020-04-19 08:51:17 -07:00
Ken Hibino  aafd8a5b74  Rename internal ProcessState to ServerState  2020-04-19 08:51:17 -07:00
Ken Hibino  4f11e52558  Rename CLI to asynq  2020-04-19 08:51:17 -07:00
Ken Hibino  b14c73809e  Refactor server state  2020-04-19 08:51:17 -07:00
Ken Hibino  779065c269  Export Start, Stop and Quiet method on Server type  2020-04-19 08:51:17 -07:00
Ken Hibino  f9842ba914  Rename Background to Server  2020-04-19 08:51:17 -07:00
Ken Hibino  022dc29701  Add overview section in readme  2020-04-11 17:08:31 -07:00
Ken Hibino  40d1889ba0  Highlight stability and compatibility section in readme  2020-04-11 09:30:00 -07:00
Ken Hibino  7e96e893fe  (fix): Change log messages depending on signals being handled  2020-04-10 08:56:01 -07:00
Ken Hibino  84b0c76c8b  v0.7.1  2020-04-05 14:56:06 -07:00
Ken Hibino  60b887b8e3  Fix singnal handling for different systems  2020-04-05 14:37:23 -07:00
Ken Hibino  7864bea55c  Update readme; Add features section  2020-03-28 08:44:06 -07:00
Apos Spanos  47220554ca  Correct typo  2020-03-23 13:47:05 -07:00
51 changed files with 1798 additions and 712 deletions

.gitignore (6 changes)

@@ -15,7 +15,7 @@
 /examples
 # Ignore command binary
-/tools/asynqmon/asynqmon
+/tools/asynq/asynq
-# Ignore asynqmon config file
+# Ignore asynq config file
-.asynqmon.*
+.asynq.*


@@ -7,6 +7,41 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]

## [0.8.2] - 2020-05-03
### Fixed
- [Fixed cancelfunc leak](https://github.com/hibiken/asynq/pull/145)

## [0.8.1] - 2020-04-27
### Added
- `ParseRedisURI` helper function is added to create a `RedisConnOpt` from a URI string.
- `SetDefaultOptions` method is added to `Client`.

## [0.8.0] - 2020-04-19
### Changed
- `Background` type is renamed to `Server`.
  - To upgrade from the previous version, update `NewBackground` to `NewServer` and pass `Config` by value.
- CLI is renamed to `asynq`.
  - To upgrade the CLI to the latest version, run `go get -u github.com/hibiken/asynq/tools/asynq`.
- The `ps` command in CLI is renamed to `servers`.
- `Concurrency` defaults to the number of CPUs when unset or set to a negative value.
### Added
- `ShutdownTimeout` field is added to `Config` to specify the timeout duration used during graceful shutdown.
- New `Server` type exposes `Start`, `Stop`, and `Quiet` as well as `Run`.

## [0.7.1] - 2020-04-05
### Fixed
- Fixed signal handling for Windows.

## [0.7.0] - 2020-03-22
### Changed
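The `Background` to `Server` rename above is the main breaking change in this range. Below is a minimal migration sketch, assuming the reader already registers handlers on a `ServeMux`; it is based only on the changelog notes and the README changes later in this diff:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// v0.7.x (old API, shown for comparison):
	//   bg := asynq.NewBackground(asynq.RedisClientOpt{Addr: ":6379"}, &asynq.Config{Concurrency: 10})
	//   bg.Run(mux)

	// v0.8.0+: Background becomes Server, Config is passed by value,
	// and Run now returns an error the caller should handle.
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: ":6379"},
		asynq.Config{Concurrency: 10},
	)

	mux := asynq.NewServeMux()
	// ... register handlers on mux ...

	if err := srv.Run(mux); err != nil {
		log.Fatalf("could not run server: %v", err)
	}
}
```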

README.md (114 changes)

@@ -7,12 +7,41 @@
 [![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
 [![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)

-Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
-It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
-**Important Note**: Current major version is zero (v0.x.x) to accomodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before v1.0.0 release.
-![Task Queue Diagram](/docs/assets/task-queue.png)
+## Overview
+
+Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
+
+High-level overview of how Asynq works:
+
+- Client puts task on a queue
+- Server pulls task off queues and starts a worker goroutine for each task
+- Tasks are processed concurrently by multiple workers
+
+Task queues are used as a mechanism to distribute work across multiple machines.
+A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
+
+![Task Queue Diagram](/docs/assets/overview.png)
+
+## Stability and Compatibility
+
+**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users (feedback on APIs is appreciated!). The public API could change without a major version update before v1.0.0 release.
+
+**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
+
+## Features
+
+- Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
+- Scheduling of tasks
+- Durability since tasks are written to Redis
+- [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
+- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
+- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
+- Low latency to add a task since writes are fast in Redis
+- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
+- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
+- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
+- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
+- [CLI](#command-line-tool) to inspect and remote-control queues and tasks

 ## Quickstart
@@ -22,7 +51,7 @@ First, make sure you are running a Redis server locally.
$ redis-server $ redis-server
``` ```
Next, write a package that encapslates task creation and task handling. Next, write a package that encapsulates task creation and task handling.
```go ```go
package tasks package tasks
@@ -33,13 +62,15 @@ import (
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
) )
// A list of background task types. // A list of task types.
const ( const (
EmailDelivery = "email:deliver" EmailDelivery = "email:deliver"
ImageProcessing = "image:process" ImageProcessing = "image:process"
) )
//--------------------------------------------
// Write function NewXXXTask to create a task. // Write function NewXXXTask to create a task.
//--------------------------------------------
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task { func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
payload := map[string]interface{}{"user_id": userID, "template_id": tmplID} payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
@@ -51,8 +82,13 @@ func NewImageProcessingTask(src, dst string) *asynq.Task {
return asynq.NewTask(ImageProcessing, payload) return asynq.NewTask(ImageProcessing, payload)
} }
//-------------------------------------------------------------
// Write function HandleXXXTask to handle the given task. // Write function HandleXXXTask to handle the given task.
// NOTE: It satisfies the asynq.HandlerFunc interface. // NOTE: It satisfies the asynq.HandlerFunc interface.
//
// Handler doesn't need to be a function. You can define a type
// that satisfies asynq.Handler interface. See example below.
//-------------------------------------------------------------
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error { func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
userID, err := t.Payload.GetInt("user_id") userID, err := t.Payload.GetInt("user_id")
@@ -68,7 +104,12 @@ func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
return nil return nil
} }
func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error { type ImageProcessor struct {
// ... fields for struct
}
// ImageProcessor implements asynq.Handler.
func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
src, err := t.Payload.GetString("src") src, err := t.Payload.GetString("src")
if err != nil { if err != nil {
return err return err
@@ -81,10 +122,14 @@ func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
// Image processing logic ... // Image processing logic ...
return nil return nil
} }
func NewImageProcessor() *ImageProcessor {
// ... return an instance
}
``` ```
In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to enqueue tasks to the task queue. In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
A task will be processed by a background worker as soon as the task gets enqueued. A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
Scheduled tasks will be stored in Redis and will be enqueued at the specified time. Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
```go ```go
@@ -100,10 +145,13 @@ import (
const redisAddr = "127.0.0.1:6379" const redisAddr = "127.0.0.1:6379"
func main() { func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr} r := asynq.RedisClientOpt{Addr: redisAddr}
c := asynq.NewClient(r) c := asynq.NewClient(r)
// ----------------------------------------------------
// Example 1: Enqueue task to be processed immediately. // Example 1: Enqueue task to be processed immediately.
// Use (*Client).Enqueue method.
// ----------------------------------------------------
t := tasks.NewEmailDeliveryTask(42, "some:template:id") t := tasks.NewEmailDeliveryTask(42, "some:template:id")
err := c.Enqueue(t) err := c.Enqueue(t)
@@ -112,7 +160,10 @@ func main() {
} }
// ----------------------------------------------------------
// Example 2: Schedule task to be processed in the future. // Example 2: Schedule task to be processed in the future.
// Use (*Client).EnqueueIn or (*Client).EnqueueAt.
// ----------------------------------------------------------
t = tasks.NewEmailDeliveryTask(42, "other:template:id") t = tasks.NewEmailDeliveryTask(42, "other:template:id")
err = c.EnqueueIn(24*time.Hour, t) err = c.EnqueueIn(24*time.Hour, t)
@@ -121,19 +172,34 @@ func main() {
} }
// Example 3: Pass options to tune task processing behavior. // --------------------------------------------------------------------------
// Options include MaxRetry, Queue, Timeout, Deadline, etc. // Example 3: Set options to tune task processing behavior.
// Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
// --------------------------------------------------------------------------
c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url") t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute)) err = c.Enqueue(t)
if err != nil {
log.Fatal("could not enqueue task: %v", err)
}
// --------------------------------------------------------------------------
// Example 4: Pass options to tune task processing behavior at enqueue time.
// Options passed at enqueue time override default ones, if any.
// --------------------------------------------------------------------------
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
if err != nil { if err != nil {
log.Fatal("could not enqueue task: %v", err) log.Fatal("could not enqueue task: %v", err)
} }
} }
``` ```
Next, create a binary to process these tasks in the background. Next, create a worker server to process these tasks in the background.
To start the background workers, use [`Background`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Background) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks. To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler. You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
@@ -141,6 +207,8 @@ You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?
package main package main
import ( import (
"log"
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
"your/app/package/tasks" "your/app/package/tasks"
) )
@@ -148,9 +216,9 @@ import (
const redisAddr = "127.0.0.1:6379" const redisAddr = "127.0.0.1:6379"
func main() { func main() {
r := &asynq.RedisClientOpt{Addr: redisAddr} r := asynq.RedisClientOpt{Addr: redisAddr}
bg := asynq.NewBackground(r, &asynq.Config{ srv := asynq.NewServer(r, asynq.Config{
// Specify how many concurrent workers to use // Specify how many concurrent workers to use
Concurrency: 10, Concurrency: 10,
// Optionally specify multiple queues with different priority. // Optionally specify multiple queues with different priority.
@@ -165,10 +233,12 @@ func main() {
// mux maps a type to a handler // mux maps a type to a handler
mux := asynq.NewServeMux() mux := asynq.NewServeMux()
mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask) mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
mux.HandleFunc(tasks.ImageProcessing, tasks.HandleImageProcessingTask) mux.Handle(tasks.ImageProcessing, tasks.NewImageProcessor())
// ...register other handlers... // ...register other handlers...
bg.Run(mux) if err := srv.Run(mux); err != nil {
log.Fatalf("could not run server: %v", err)
}
} }
``` ```
@@ -184,7 +254,7 @@ Here's an example of running the `stats` command.
![Gif](/docs/assets/demo.gif) ![Gif](/docs/assets/demo.gif)
For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md). For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
## Installation ## Installation
@@ -197,7 +267,7 @@ go get -u github.com/hibiken/asynq
To install the CLI tool, run the following command: To install the CLI tool, run the following command:
```sh ```sh
go get -u github.com/hibiken/asynq/tools/asynqmon go get -u github.com/hibiken/asynq/tools/asynq
``` ```
## Requirements ## Requirements
@@ -216,7 +286,7 @@ Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI - [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library. - [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra - [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
## License ## License


@@ -7,6 +7,9 @@ package asynq
import ( import (
"crypto/tls" "crypto/tls"
"fmt" "fmt"
"net/url"
"strconv"
"strings"
"github.com/go-redis/redis/v7" "github.com/go-redis/redis/v7"
) )
@@ -94,6 +97,79 @@ type RedisFailoverClientOpt struct {
TLSConfig *tls.Config TLSConfig *tls.Config
} }
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed.
//
// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
// Supported formats are:
// redis://[:password@]host[:port][/dbnumber]
// redis-socket://[:password@]path[?db=dbnumber]
// redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
u, err := url.Parse(uri)
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
}
switch u.Scheme {
case "redis":
return parseRedisURI(u)
case "redis-socket":
return parseRedisSocketURI(u)
case "redis-sentinel":
return parseRedisSentinelURI(u)
default:
return nil, fmt.Errorf("asynq: unsupported uri scheme: %q", u.Scheme)
}
}
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
var db int
var err error
if len(u.Path) > 0 {
xs := strings.Split(strings.Trim(u.Path, "/"), "/")
db, err = strconv.Atoi(xs[0])
if err != nil {
return nil, fmt.Errorf("asynq: could not parse redis uri: database number should be the first segment of the path")
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Addr: u.Host, DB: db, Password: password}, nil
}
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
const errPrefix = "asynq: could not parse redis socket uri"
if len(u.Path) == 0 {
return nil, fmt.Errorf("%s: path does not exist", errPrefix)
}
q := u.Query()
var db int
var err error
if n := q.Get("db"); n != "" {
db, err = strconv.Atoi(n)
if err != nil {
return nil, fmt.Errorf("%s: query param `db` should be a number", errPrefix)
}
}
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisClientOpt{Network: "unix", Addr: u.Path, DB: db, Password: password}, nil
}
func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
addrs := strings.Split(u.Host, ",")
master := u.Query().Get("master")
var password string
if v, ok := u.User.Password(); ok {
password = v
}
return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, Password: password}, nil
}
// createRedisClient returns a redis client given a redis connection configuration. // createRedisClient returns a redis client given a redis connection configuration.
// //
// Passing an unexpected type as a RedisConnOpt argument will cause panic. // Passing an unexpected type as a RedisConnOpt argument will cause panic.
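`ExampleParseRedisURI` later in this diff only exercises the `redis://` scheme. As a complement, here is a sketch of the `redis-sentinel://` form handled by `parseRedisSentinelURI` above; the addresses, master name, and password are made-up values, and error handling is kept minimal:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// A sentinel URI yields a RedisFailoverClientOpt rather than a RedisClientOpt.
	rconn, err := asynq.ParseRedisURI("redis-sentinel://:secret@localhost:5000,localhost:5001?master=mymaster")
	if err != nil {
		log.Fatal(err)
	}
	opt, ok := rconn.(asynq.RedisFailoverClientOpt)
	if !ok {
		log.Fatalf("unexpected type %T", rconn)
	}
	fmt.Println(opt.MasterName)    // mymaster
	fmt.Println(opt.SentinelAddrs) // [localhost:5000 localhost:5001]
}
```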


@@ -44,3 +44,106 @@ var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
}) })
return out return out
}) })
func TestParseRedisURI(t *testing.T) {
tests := []struct {
uri string
want RedisConnOpt
}{
{
"redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"},
},
{
"redis://localhost:6379/3",
RedisClientOpt{Addr: "localhost:6379", DB: 3},
},
{
"redis://:mypassword@localhost:6379",
RedisClientOpt{Addr: "localhost:6379", Password: "mypassword"},
},
{
"redis://:mypassword@127.0.0.1:6379/11",
RedisClientOpt{Addr: "127.0.0.1:6379", Password: "mypassword", DB: 11},
},
{
"redis-socket:///var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock"},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword"},
},
{
"redis-socket:///var/run/redis/redis.sock?db=7",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", DB: 7},
},
{
"redis-socket://:mypassword@/var/run/redis/redis.sock?db=12",
RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword", DB: 12},
},
{
"redis-sentinel://localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
},
},
{
"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
Password: "mypassword",
},
},
}
for _, tc := range tests {
got, err := ParseRedisURI(tc.uri)
if err != nil {
t.Errorf("ParseRedisURI(%q) returned an error: %v", tc.uri, err)
continue
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
}
}
}
func TestParseRedisURIErrors(t *testing.T) {
tests := []struct {
desc string
uri string
}{
{
"unsupported scheme",
"rdb://localhost:6379",
},
{
"missing scheme",
"localhost:6379",
},
{
"multiple db numbers",
"redis://localhost:6379/1,2,3",
},
{
"missing path for socket connection",
"redis-socket://?db=one",
},
{
"non integer for db numbers for socket",
"redis-socket:///some/path/to/redis?db=one",
},
}
for _, tc := range tests {
_, err := ParseRedisURI(tc.uri)
if err == nil {
t.Errorf("%s: ParseRedisURI(%q) succeeded for malformed input, want error",
tc.desc, tc.uri)
}
}
}


@@ -1,128 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"go.uber.org/goleak"
)
func TestBackground(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
client := NewClient(r)
bg := NewBackground(r, &Config{
Concurrency: 10,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
bg.start(HandlerFunc(h))
err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
bg.stop()
}
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}


@@ -24,7 +24,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
@@ -46,11 +46,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -67,7 +67,7 @@ func BenchmarkEndToEnd(b *testing.B) {
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
RetryDelayFunc: func(n int, err error, t *Task) time.Duration { RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
return time.Second return time.Second
@@ -99,11 +99,11 @@ func BenchmarkEndToEnd(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }
@@ -124,7 +124,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
DB: redisDB, DB: redisDB,
} }
client := NewClient(redis) client := NewClient(redis)
bg := NewBackground(redis, &Config{ srv := NewServer(redis, Config{
Concurrency: 10, Concurrency: 10,
Queues: map[string]int{ Queues: map[string]int{
"high": 6, "high": 6,
@@ -160,11 +160,11 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
} }
b.StartTimer() // end setup b.StartTimer() // end setup
bg.start(HandlerFunc(handler)) srv.Start(HandlerFunc(handler))
wg.Wait() wg.Wait()
b.StopTimer() // begin teardown b.StopTimer() // begin teardown
bg.stop() srv.Stop()
b.StartTimer() // end teardown b.StartTimer() // end teardown
} }
} }


@@ -9,6 +9,7 @@ import (
"fmt" "fmt"
"sort" "sort"
"strings" "strings"
"sync"
"time" "time"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
@@ -23,13 +24,18 @@ import (
// //
// Clients are safe for concurrent use by multiple goroutines. // Clients are safe for concurrent use by multiple goroutines.
type Client struct { type Client struct {
mu sync.Mutex
opts map[string][]Option
rdb *rdb.RDB rdb *rdb.RDB
} }
// NewClient and returns a new Client given a redis connection option. // NewClient and returns a new Client given a redis connection option.
func NewClient(r RedisConnOpt) *Client { func NewClient(r RedisConnOpt) *Client {
rdb := rdb.NewRDB(createRedisClient(r)) rdb := rdb.NewRDB(createRedisClient(r))
return &Client{rdb} return &Client{
opts: make(map[string][]Option),
rdb: rdb,
}
} }
// Option specifies the task processing behavior. // Option specifies the task processing behavior.
@@ -159,10 +165,19 @@ func serializePayload(payload map[string]interface{}) string {
return b.String() return b.String()
} }
const ( // Default max retry count used if nothing is specified.
// Max retry count by default const defaultMaxRetry = 25
defaultMaxRetry = 25
) // SetDefaultOptions sets options to be used for a given task type.
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
//
// Default options can be overridden by options passed at enqueue time.
func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
c.mu.Lock()
defer c.mu.Unlock()
c.opts[taskType] = opts
}
// EnqueueAt schedules task to be enqueued at the specified time. // EnqueueAt schedules task to be enqueued at the specified time.
// //
@@ -171,6 +186,35 @@ const (
// The argument opts specifies the behavior of task processing. // The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others. // If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error { func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
return c.enqueueAt(t, task, opts...)
}
// Enqueue enqueues task to be processed immediately.
//
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error {
return c.enqueueAt(time.Now(), task, opts...)
}
// EnqueueIn schedules task to be enqueued after the specified delay.
//
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.enqueueAt(time.Now().Add(d), task, opts...)
}
func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) error {
c.mu.Lock()
defer c.mu.Unlock()
if defaults, ok := c.opts[task.Type]; ok {
opts = append(defaults, opts...)
}
opt := composeOptions(opts...) opt := composeOptions(opts...)
msg := &base.TaskMessage{ msg := &base.TaskMessage{
ID: xid.New(), ID: xid.New(),
@@ -194,26 +238,6 @@ func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
return err return err
} }
// Enqueue enqueues task to be processed immediately.
//
// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) Enqueue(task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now(), task, opts...)
}
// EnqueueIn schedules task to be enqueued after the specified delay.
//
// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
return c.EnqueueAt(time.Now().Add(d), task, opts...)
}
func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error { func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
if uniqueTTL > 0 { if uniqueTTL > 0 {
return c.rdb.EnqueueUnique(msg, uniqueTTL) return c.rdb.EnqueueUnique(msg, uniqueTTL)
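Because `enqueueAt` prepends the per-type defaults before the call-site options, and conflicting options are resolved last-one-wins, anything passed to `Enqueue` overrides a default registered with `SetDefaultOptions`. A short sketch of that behavior; the task type, payload, and option values are illustrative, not taken from the repository:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	c := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})

	// Defaults for a hypothetical "report:generate" task type.
	c.SetDefaultOptions("report:generate", asynq.Queue("low"), asynq.MaxRetry(3))

	t := asynq.NewTask("report:generate", map[string]interface{}{"report_id": 42})

	// Effective options: Queue("critical") overrides the default Queue("low"),
	// MaxRetry(3) still applies, and Timeout comes only from the call site.
	if err := c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(time.Minute)); err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
}
```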


@@ -15,6 +15,11 @@ import (
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
func TestClientEnqueueAt(t *testing.T) { func TestClientEnqueueAt(t *testing.T) {
r := setup(t) r := setup(t)
client := NewClient(RedisClientOpt{ client := NewClient(RedisClientOpt{
@@ -27,9 +32,6 @@ func TestClientEnqueueAt(t *testing.T) {
var ( var (
now = time.Now() now = time.Now()
oneHourLater = now.Add(time.Hour) oneHourLater = now.Add(time.Hour)
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
) )
tests := []struct { tests := []struct {
@@ -113,11 +115,6 @@ func TestClientEnqueue(t *testing.T) {
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
@@ -287,11 +284,6 @@ func TestClientEnqueueIn(t *testing.T) {
task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}) task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
var (
noTimeout = time.Duration(0).String()
noDeadline = time.Time{}.Format(time.RFC3339)
)
tests := []struct { tests := []struct {
desc string desc string
task *Task task *Task
@@ -364,6 +356,86 @@ func TestClientEnqueueIn(t *testing.T) {
} }
} }
func TestClientDefaultOptions(t *testing.T) {
r := setup(t)
tests := []struct {
desc string
defaultOpts []Option // options set at the client level.
opts []Option // options used at enqueue time.
task *Task
queue string // queue that the message should go into.
want *base.TaskMessage
}{
{
desc: "With queue routing option",
defaultOpts: []Option{Queue("feed")},
opts: []Option{},
task: NewTask("feed:import", nil),
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: defaultMaxRetry,
Queue: "feed",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
{
desc: "With multiple options",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{},
task: NewTask("feed:import", nil),
queue: "feed",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "feed",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
{
desc: "With overriding options at enqueue time",
defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
opts: []Option{Queue("critical")},
task: NewTask("feed:import", nil),
queue: "critical",
want: &base.TaskMessage{
Type: "feed:import",
Payload: nil,
Retry: 5,
Queue: "critical",
Timeout: noTimeout,
Deadline: noDeadline,
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
err := c.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
enqueued := h.GetEnqueuedMessages(t, r, tc.queue)
if len(enqueued) != 1 {
t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
tc.desc, tc.queue, len(enqueued))
continue
}
got := enqueued[0]
if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s;\nmismatch found in enqueued task message; (-want,+got)\n%s",
tc.desc, diff)
}
}
}
func TestUniqueKey(t *testing.T) { func TestUniqueKey(t *testing.T) {
tests := []struct { tests := []struct {
desc string desc string

doc.go (12 changes)

@@ -14,7 +14,7 @@ specify the options using one of RedisConnOpt types.
         DB: 3,
     }

-The Client is used to register a task to be processed at the specified time.
+The Client is used to enqueue a task to be processed at the specified time.

 Task is created with two parameters: its type and payload.

@@ -27,18 +27,18 @@ Task is created with two parameters: its type and payload.
     // Enqueue the task to be processed immediately.
     err := client.Enqueue(t)

-    // Schedule the task to be processed in one minute.
+    // Schedule the task to be processed after one minute.
     err = client.EnqueueIn(time.Minute, t)

-The Background is used to run the background task processing with a given
+The Server is used to run the background task processing with a given
 handler.

-    bg := asynq.NewBackground(redis, &asynq.Config{
+    srv := asynq.NewServer(redis, asynq.Config{
         Concurrency: 10,
     })
-    bg.Run(handler)
+    srv.Run(handler)

-Handler is an interface with one method ProcessTask which
+Handler is an interface type with a method which
 takes a task and returns an error. Handler should return nil if
 the processing is successful, otherwise return a non-nil error.
 If handler panics or returns a non-nil error, the task will be retried in the future.

(Three binary image files changed; before/after sizes: 1.5 MiB, 582 KiB, 1.5 MiB.)

docs/assets/overview.png (new binary file, 63 KiB; binary content not shown)

example_test.go (new file, 95 additions)

@@ -0,0 +1,95 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq_test
import (
"fmt"
"log"
"os"
"os/signal"
"github.com/hibiken/asynq"
"golang.org/x/sys/unix"
)
func ExampleServer_Run() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
// Run blocks and waits for os signal to terminate the program.
if err := srv.Run(h); err != nil {
log.Fatal(err)
}
}
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal
srv.Stop()
}
func ExampleServer_Quiet() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
)
h := asynq.NewServeMux()
// ... Register handlers
if err := srv.Start(h); err != nil {
log.Fatal(err)
}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
// Handle SIGTERM, SIGINT to exit the program.
// Handle SIGTSTP to stop processing new tasks.
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Quiet() // stop processing new tasks
continue
}
break
}
srv.Stop()
}
func ExampleParseRedisURI() {
rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
if err != nil {
log.Fatal(err)
}
r, ok := rconn.(asynq.RedisClientOpt)
if !ok {
log.Fatal("unexpected type")
}
fmt.Println(r.Addr)
fmt.Println(r.DB)
// Output:
// localhost:6379
// 10
}

go.mod (2 changes)

@@ -8,7 +8,7 @@ require (
github.com/rs/xid v1.2.1 github.com/rs/xid v1.2.1
github.com/spf13/cast v1.3.1 github.com/spf13/cast v1.3.1
go.uber.org/goleak v0.10.0 go.uber.org/goleak v0.10.0
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/yaml.v2 v2.2.7 // indirect gopkg.in/yaml.v2 v2.2.7 // indirect
) )


@@ -9,16 +9,15 @@ import (
"time" "time"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
) )
// heartbeater is responsible for writing process info to redis periodically to // heartbeater is responsible for writing process info to redis periodically to
// indicate that the background worker process is up. // indicate that the background worker process is up.
type heartbeater struct { type heartbeater struct {
logger Logger logger Logger
rdb *rdb.RDB broker base.Broker
ps *base.ProcessState ss *base.ServerState
// channel to communicate back to the long running "heartbeater" goroutine. // channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{} done chan struct{}
@@ -27,11 +26,11 @@ type heartbeater struct {
interval time.Duration interval time.Duration
} }
func newHeartbeater(l Logger, rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater { func newHeartbeater(l Logger, b base.Broker, ss *base.ServerState, interval time.Duration) *heartbeater {
return &heartbeater{ return &heartbeater{
logger: l, logger: l,
rdb: rdb, broker: b,
ps: ps, ss: ss,
done: make(chan struct{}), done: make(chan struct{}),
interval: interval, interval: interval,
} }
@@ -44,8 +43,8 @@ func (h *heartbeater) terminate() {
} }
func (h *heartbeater) start(wg *sync.WaitGroup) { func (h *heartbeater) start(wg *sync.WaitGroup) {
h.ps.SetStarted(time.Now()) h.ss.SetStarted(time.Now())
h.ps.SetStatus(base.StatusRunning) h.ss.SetStatus(base.StatusRunning)
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
@@ -53,7 +52,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
for { for {
select { select {
case <-h.done: case <-h.done:
h.rdb.ClearProcessState(h.ps) h.broker.ClearServerState(h.ss)
h.logger.Info("Heartbeater done") h.logger.Info("Heartbeater done")
return return
case <-time.After(h.interval): case <-time.After(h.interval):
@@ -66,7 +65,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
func (h *heartbeater) beat() { func (h *heartbeater) beat() {
// Note: Set TTL to be long enough so that it won't expire before we write again // Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed. // and short enough to expire quickly once the process is shut down or killed.
err := h.rdb.WriteProcessState(h.ps, h.interval*2) err := h.broker.WriteServerState(h.ss, h.interval*2)
if err != nil { if err != nil {
h.logger.Error("could not write heartbeat data: %v", err) h.logger.Error("could not write heartbeat data: %v", err)
} }


@@ -14,6 +14,7 @@ import (
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
) )
func TestHeartbeater(t *testing.T) { func TestHeartbeater(t *testing.T) {
@@ -31,17 +32,18 @@ func TestHeartbeater(t *testing.T) {
} }
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond) timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{}) ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false) state := base.NewServerState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
hb := newHeartbeater(testLogger, rdbClient, state, tc.interval) hb := newHeartbeater(testLogger, rdbClient, state, tc.interval)
var wg sync.WaitGroup var wg sync.WaitGroup
hb.start(&wg) hb.start(&wg)
want := &base.ProcessInfo{ want := &base.ServerInfo{
Host: tc.host, Host: tc.host,
PID: tc.pid, PID: tc.pid,
Queues: tc.queues, Queues: tc.queues,
@@ -53,21 +55,21 @@ func TestHeartbeater(t *testing.T) {
// allow for heartbeater to write to redis // allow for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
ps, err := rdbClient.ListProcesses() ss, err := rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read server info from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
@@ -79,21 +81,21 @@ func TestHeartbeater(t *testing.T) {
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
want.Status = "stopped" want.Status = "stopped"
ps, err = rdbClient.ListProcesses() ss, err = rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("could not read process status from redis: %v", err)
hb.terminate() hb.terminate()
continue continue
} }
if len(ps) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps)) t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
hb.terminate() hb.terminate()
continue continue
} }
if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff) t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate() hb.terminate()
continue continue
} }
@@ -101,3 +103,26 @@ func TestHeartbeater(t *testing.T) {
hb.terminate() hb.terminate()
} }
} }
func TestHeartbeaterWithRedisDown(t *testing.T) {
// Make sure that heartbeater goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
ss := base.NewServerState("localhost", 1234, 10, map[string]int{"default": 1}, false)
hb := newHeartbeater(testLogger, testBroker, ss, time.Second)
testBroker.Sleep()
var wg sync.WaitGroup
hb.start(&wg)
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)
hb.terminate()
}


@@ -41,9 +41,9 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
return out return out
}) })
// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info. // SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slice of process info.
var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo { var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool { sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host { if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host return out[i].Host < out[j].Host


@@ -12,6 +12,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v7"
"github.com/rs/xid" "github.com/rs/xid"
) )
@@ -20,10 +21,10 @@ const DefaultQueueName = "default"
// Redis keys // Redis keys
const ( const (
AllProcesses = "asynq:ps" // ZSET AllServers = "asynq:servers" // ZSET
psPrefix = "asynq:ps:" // STRING - asynq:ps:<host>:<pid> serversPrefix = "asynq:servers:" // STRING - asynq:ps:<host>:<pid>:<serverid>
AllWorkers = "asynq:workers" // ZSET AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid> workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid>:<serverid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd> processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd> failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname> QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
@@ -51,14 +52,14 @@ func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02") return failurePrefix + t.UTC().Format("2006-01-02")
} }
// ProcessInfoKey returns a redis key for process info. // ServerInfoKey returns a redis key for process info.
func ProcessInfoKey(hostname string, pid int) string { func ServerInfoKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid) return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid)
} }
// WorkersKey returns a redis key for the workers given hostname and pid. // WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int) string { func WorkersKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid) return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid)
} }
// TaskMessage is the internal representation of a task with additional metadata fields. // TaskMessage is the internal representation of a task with additional metadata fields.
@@ -104,42 +105,47 @@ type TaskMessage struct {
UniqueKey string UniqueKey string
} }
// ProcessState holds process level information. // ServerState holds process level information.
// //
// ProcessStates are safe for concurrent use by multiple goroutines. // ServerStates are safe for concurrent use by multiple goroutines.
type ProcessState struct { type ServerState struct {
mu sync.Mutex // guards all data fields mu sync.Mutex // guards all data fields
id xid.ID
concurrency int concurrency int
queues map[string]int queues map[string]int
strictPriority bool strictPriority bool
pid int pid int
host string host string
status PStatus status ServerStatus
started time.Time started time.Time
workers map[string]*workerStats workers map[string]*workerStats
} }
// PStatus represents status of a process. // ServerStatus represents status of a server.
type PStatus int type ServerStatus int
const ( const (
// StatusIdle indicates process is in idle state. // StatusIdle indicates the server is in idle state.
StatusIdle PStatus = iota StatusIdle ServerStatus = iota
// StatusRunning indicates process is up and processing tasks. // StatusRunning indicates the server is up and processing tasks.
StatusRunning StatusRunning
// StatusStopped indicates process is up but not processing new tasks. // StatusQuiet indicates the server is up but not processing new tasks.
StatusQuiet
// StatusStopped indicates the server has been stopped.
StatusStopped StatusStopped
) )
var statuses = []string{ var statuses = []string{
"idle", "idle",
"running", "running",
"quiet",
"stopped", "stopped",
} }
func (s PStatus) String() string { func (s ServerStatus) String() string {
if StatusIdle <= s && s <= StatusStopped { if StatusIdle <= s && s <= StatusStopped {
return statuses[s] return statuses[s]
} }
@@ -151,11 +157,12 @@ type workerStats struct {
started time.Time started time.Time
} }
// NewProcessState returns a new instance of ProcessState. // NewServerState returns a new instance of ServerState.
func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState { func NewServerState(host string, pid, concurrency int, queues map[string]int, strict bool) *ServerState {
return &ProcessState{ return &ServerState{
host: host, host: host,
pid: pid, pid: pid,
id: xid.New(),
concurrency: concurrency, concurrency: concurrency,
queues: cloneQueueConfig(queues), queues: cloneQueueConfig(queues),
strictPriority: strict, strictPriority: strict,
@@ -164,59 +171,67 @@ func NewProcessState(host string, pid, concurrency int, queues map[string]int, s
} }
} }
// SetStatus updates the state of process. // SetStatus updates the status of server.
func (ps *ProcessState) SetStatus(status PStatus) { func (ss *ServerState) SetStatus(status ServerStatus) {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
ps.status = status ss.status = status
}
// Status returns the status of server.
func (ss *ServerState) Status() ServerStatus {
ss.mu.Lock()
defer ss.mu.Unlock()
return ss.status
} }
// SetStarted records when the process started processing. // SetStarted records when the process started processing.
func (ps *ProcessState) SetStarted(t time.Time) { func (ss *ServerState) SetStarted(t time.Time) {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
ps.started = t ss.started = t
} }
// AddWorkerStats records when a worker started and which task it's processing. // AddWorkerStats records when a worker started and which task it's processing.
func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) { func (ss *ServerState) AddWorkerStats(msg *TaskMessage, started time.Time) {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
ps.workers[msg.ID.String()] = &workerStats{msg, started} ss.workers[msg.ID.String()] = &workerStats{msg, started}
} }
// DeleteWorkerStats removes a worker's entry from the process state. // DeleteWorkerStats removes a worker's entry from the process state.
func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) { func (ss *ServerState) DeleteWorkerStats(msg *TaskMessage) {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
delete(ps.workers, msg.ID.String()) delete(ss.workers, msg.ID.String())
} }
// Get returns current state of process as a ProcessInfo. // GetInfo returns current state of server as a ServerInfo.
func (ps *ProcessState) Get() *ProcessInfo { func (ss *ServerState) GetInfo() *ServerInfo {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
return &ProcessInfo{ return &ServerInfo{
Host: ps.host, Host: ss.host,
PID: ps.pid, PID: ss.pid,
Concurrency: ps.concurrency, ServerID: ss.id.String(),
Queues: cloneQueueConfig(ps.queues), Concurrency: ss.concurrency,
StrictPriority: ps.strictPriority, Queues: cloneQueueConfig(ss.queues),
Status: ps.status.String(), StrictPriority: ss.strictPriority,
Started: ps.started, Status: ss.status.String(),
ActiveWorkerCount: len(ps.workers), Started: ss.started,
ActiveWorkerCount: len(ss.workers),
} }
} }
// GetWorkers returns a list of currently running workers' info. // GetWorkers returns a list of currently running workers' info.
func (ps *ProcessState) GetWorkers() []*WorkerInfo { func (ss *ServerState) GetWorkers() []*WorkerInfo {
ps.mu.Lock() ss.mu.Lock()
defer ps.mu.Unlock() defer ss.mu.Unlock()
var res []*WorkerInfo var res []*WorkerInfo
for _, w := range ps.workers { for _, w := range ss.workers {
res = append(res, &WorkerInfo{ res = append(res, &WorkerInfo{
Host: ps.host, Host: ss.host,
PID: ps.pid, PID: ss.pid,
ID: w.msg.ID, ID: w.msg.ID,
Type: w.msg.Type, Type: w.msg.Type,
Queue: w.msg.Queue, Queue: w.msg.Queue,
@@ -243,10 +258,11 @@ func clonePayload(payload map[string]interface{}) map[string]interface{} {
return res return res
} }
// ProcessInfo holds information about a running background worker process. // ServerInfo holds information about a running server.
type ProcessInfo struct { type ServerInfo struct {
Host string Host string
PID int PID int
ServerID string
Concurrency int Concurrency int
Queues map[string]int Queues map[string]int
StrictPriority bool StrictPriority bool
@@ -313,3 +329,25 @@ func (c *Cancelations) GetAll() []context.CancelFunc {
} }
return res return res
} }
// Broker is a message broker that supports operations to manage task queues.
//
// See rdb.RDB as a reference implementation.
type Broker interface {
Enqueue(msg *TaskMessage) error
EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
Dequeue(qnames ...string) (*TaskMessage, error)
Done(msg *TaskMessage) error
Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Kill(msg *TaskMessage, errMsg string) error
RequeueAll() (int64, error)
CheckAndEnqueue(qnames ...string) error
WriteServerState(ss *ServerState, ttl time.Duration) error
ClearServerState(ss *ServerState) error
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
PublishCancelation(id string) error
Close() error
}
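
The Broker interface above is what the rest of the package programs against, so components no longer need a concrete *rdb.RDB and a fake broker can be swapped in for failure-mode tests. As a rough illustration (not code from this diff — the requeuer type and its fields are hypothetical, and base is an internal package, so this only compiles from inside the asynq module), a component written against base.Broker can be handed either the real RDB implementation or a test double:

package example

import (
	"log"
	"time"

	"github.com/hibiken/asynq/internal/base"
)

// requeuer is a hypothetical component used only to illustrate depending on
// base.Broker rather than on *rdb.RDB. It is not part of asynq.
type requeuer struct {
	broker   base.Broker
	interval time.Duration
	done     chan struct{}
}

func (r *requeuer) loop() {
	t := time.NewTicker(r.interval)
	defer t.Stop()
	for {
		select {
		case <-r.done:
			return
		case <-t.C:
			// Any base.Broker works here: rdb.RDB in production,
			// testbroker.TestBroker in failure-simulation tests.
			if _, err := r.broker.RequeueAll(); err != nil {
				log.Printf("requeue failed: %v", err)
			}
		}
	}
}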


@@ -12,6 +12,7 @@ import (
	"time"
	"github.com/google/go-cmp/cmp"
+	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/rs/xid"
)
@@ -67,20 +68,22 @@ func TestFailureKey(t *testing.T) {
		}
	}
}

-func TestProcessInfoKey(t *testing.T) {
+func TestServerInfoKey(t *testing.T) {
	tests := []struct {
		hostname string
		pid      int
+		sid      string
		want     string
	}{
-		{"localhost", 9876, "asynq:ps:localhost:9876"},
-		{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"},
+		{"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"},
+		{"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"},
	}

	for _, tc := range tests {
-		got := ProcessInfoKey(tc.hostname, tc.pid)
+		got := ServerInfoKey(tc.hostname, tc.pid, tc.sid)
		if got != tc.want {
-			t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
+			t.Errorf("ServerInfoKey(%q, %d, %q) = %q, want %q",
+				tc.hostname, tc.pid, tc.sid, got, tc.want)
		}
	}
}

@@ -89,24 +92,26 @@ func TestWorkersKey(t *testing.T) {
	tests := []struct {
		hostname string
		pid      int
+		sid      string
		want     string
	}{
-		{"localhost", 9876, "asynq:workers:localhost:9876"},
-		{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"},
+		{"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"},
+		{"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"},
	}

	for _, tc := range tests {
-		got := WorkersKey(tc.hostname, tc.pid)
+		got := WorkersKey(tc.hostname, tc.pid, tc.sid)
		if got != tc.want {
-			t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want)
+			t.Errorf("WorkersKey(%q, %d, %q) = %q, want = %q",
+				tc.hostname, tc.pid, tc.sid, got, tc.want)
		}
	}
}

-// Test for process state being accessed by multiple goroutines.
+// Test for server state being accessed by multiple goroutines.
// Run with -race flag to check for data race.
-func TestProcessStateConcurrentAccess(t *testing.T) {
-	ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
+func TestServerStateConcurrentAccess(t *testing.T) {
+	ss := NewServerState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
	var wg sync.WaitGroup
	started := time.Now()
	msgs := []*TaskMessage{
@@ -119,18 +124,21 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
	wg.Add(1)
	go func() {
		defer wg.Done()
-		ps.SetStarted(started)
-		ps.SetStatus(StatusRunning)
+		ss.SetStarted(started)
+		ss.SetStatus(StatusRunning)
+		if status := ss.Status(); status != StatusRunning {
+			t.Errorf("(*ServerState).Status() = %v, want %v", status, StatusRunning)
+		}
	}()

	// Simulate processor starting worker goroutines.
	for _, msg := range msgs {
		wg.Add(1)
-		ps.AddWorkerStats(msg, time.Now())
+		ss.AddWorkerStats(msg, time.Now())
		go func(msg *TaskMessage) {
			defer wg.Done()
			time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
-			ps.DeleteWorkerStats(msg)
+			ss.DeleteWorkerStats(msg)
		}(msg)
	}

@@ -139,15 +147,15 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
	go func() {
		wg.Done()
		for i := 0; i < 5; i++ {
-			ps.Get()
-			ps.GetWorkers()
+			ss.GetInfo()
+			ss.GetWorkers()
			time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		}
	}()

	wg.Wait()

-	want := &ProcessInfo{
+	want := &ServerInfo{
		Host:        "127.0.0.1",
		PID:         1234,
		Concurrency: 10,
@@ -158,9 +166,9 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
		ActiveWorkerCount: 0,
	}

-	got := ps.Get()
-	if diff := cmp.Diff(want, got); diff != "" {
-		t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
+	got := ss.GetInfo()
+	if diff := cmp.Diff(want, got, cmpopts.IgnoreFields(ServerInfo{}, "ServerID")); diff != "" {
+		t.Errorf("(*ServerState).GetInfo() = %+v, want %+v; (-want,+got)\n%s",
			got, want, diff)
	}
}


@@ -759,23 +759,23 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
}

// Note: Script also removes stale keys.
-var listProcessesCmd = redis.NewScript(`
+var listServersCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
-	local ps = redis.call("GET", key)
-	if ps then
-		table.insert(res, ps)
+	local s = redis.call("GET", key)
+	if s then
+		table.insert(res, s)
	end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)

-// ListProcesses returns the list of process statuses.
-func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
-	res, err := listProcessesCmd.Run(r.client,
-		[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result()
+// ListServers returns the list of server info.
+func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
+	res, err := listServersCmd.Run(r.client,
+		[]string{base.AllServers}, time.Now().UTC().Unix()).Result()
	if err != nil {
		return nil, err
	}
@@ -783,16 +783,16 @@ func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
	if err != nil {
		return nil, err
	}
-	var processes []*base.ProcessInfo
+	var servers []*base.ServerInfo
	for _, s := range data {
-		var ps base.ProcessInfo
-		err := json.Unmarshal([]byte(s), &ps)
+		var info base.ServerInfo
+		err := json.Unmarshal([]byte(s), &info)
		if err != nil {
			continue // skip bad data
		}
-		processes = append(processes, &ps)
+		servers = append(servers, &info)
	}
-	return processes, nil
+	return servers, nil
}

// Note: Script also removes stale keys.


@@ -2051,14 +2051,14 @@ func TestRemoveQueueError(t *testing.T) {
} }
} }
func TestListProcesses(t *testing.T) { func TestListServers(t *testing.T) {
r := setup(t) r := setup(t)
started1 := time.Now().Add(-time.Hour) started1 := time.Now().Add(-time.Hour)
ps1 := base.NewProcessState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false) ss1 := base.NewServerState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false)
ps1.SetStarted(started1) ss1.SetStarted(started1)
ps1.SetStatus(base.StatusRunning) ss1.SetStatus(base.StatusRunning)
info1 := &base.ProcessInfo{ info1 := &base.ServerInfo{
Concurrency: 10, Concurrency: 10,
Queues: map[string]int{"default": 1}, Queues: map[string]int{"default": 1},
Host: "do.droplet1", Host: "do.droplet1",
@@ -2069,11 +2069,11 @@ func TestListProcesses(t *testing.T) {
} }
started2 := time.Now().Add(-2 * time.Hour) started2 := time.Now().Add(-2 * time.Hour)
ps2 := base.NewProcessState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false) ss2 := base.NewServerState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false)
ps2.SetStarted(started2) ss2.SetStarted(started2)
ps2.SetStatus(base.StatusStopped) ss2.SetStatus(base.StatusStopped)
ps2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now()) ss2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
info2 := &base.ProcessInfo{ info2 := &base.ServerInfo{
Concurrency: 20, Concurrency: 20,
Queues: map[string]int{"email": 1}, Queues: map[string]int{"email": 1},
Host: "do.droplet2", Host: "do.droplet2",
@@ -2084,41 +2084,42 @@ func TestListProcesses(t *testing.T) {
} }
tests := []struct { tests := []struct {
processes []*base.ProcessState serverStates []*base.ServerState
want []*base.ProcessInfo want []*base.ServerInfo
}{ }{
{ {
processes: []*base.ProcessState{}, serverStates: []*base.ServerState{},
want: []*base.ProcessInfo{}, want: []*base.ServerInfo{},
}, },
{ {
processes: []*base.ProcessState{ps1}, serverStates: []*base.ServerState{ss1},
want: []*base.ProcessInfo{info1}, want: []*base.ServerInfo{info1},
}, },
{ {
processes: []*base.ProcessState{ps1, ps2}, serverStates: []*base.ServerState{ss1, ss2},
want: []*base.ProcessInfo{info1, info2}, want: []*base.ServerInfo{info1, info2},
}, },
} }
ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{}) ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
for _, ps := range tc.processes { for _, ss := range tc.serverStates {
if err := r.WriteProcessState(ps, 5*time.Second); err != nil { if err := r.WriteServerState(ss, 5*time.Second); err != nil {
t.Fatal(err) t.Fatal(err)
} }
} }
got, err := r.ListProcesses() got, err := r.ListServers()
if err != nil { if err != nil {
t.Errorf("r.ListProcesses returned an error: %v", err) t.Errorf("r.ListServers returned an error: %v", err)
} }
if diff := cmp.Diff(tc.want, got, h.SortProcessInfoOpt, ignoreOpt); diff != "" { if diff := cmp.Diff(tc.want, got, h.SortServerInfoOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("r.ListProcesses returned %v, want %v; (-want,+got)\n%s", t.Errorf("r.ListServers returned %v, want %v; (-want,+got)\n%s",
got, tc.processes, diff) got, tc.serverStates, diff)
} }
} }
} }
@@ -2164,15 +2165,15 @@ func TestListWorkers(t *testing.T) {
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false) ss := base.NewServerState(host, pid, 10, map[string]int{"default": 1}, false)
for _, w := range tc.workers { for _, w := range tc.workers {
ps.AddWorkerStats(w.msg, w.started) ss.AddWorkerStats(w.msg, w.started)
} }
err := r.WriteProcessState(ps, time.Minute) err := r.WriteServerState(ss, time.Minute)
if err != nil { if err != nil {
t.Errorf("could not write process state to redis: %v", err) t.Errorf("could not write server state to redis: %v", err)
continue continue
} }


@@ -463,9 +463,9 @@ func (r *RDB) forwardSingle(src, dst string) error {
		[]string{src, dst}, now).Err()
}

-// KEYS[1] -> asynq:ps:<host:pid>
-// KEYS[2] -> asynq:ps
-// KEYS[3] -> asynq:workers<host:pid>
+// KEYS[1] -> asynq:servers:<host:pid:sid>
+// KEYS[2] -> asynq:servers
+// KEYS[3] -> asynq:workers<host:pid:sid>
// keys[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
@@ -484,16 +484,16 @@ redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`)

-// WriteProcessState writes process state data to redis with expiration set to the value ttl.
-func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error {
-	info := ps.Get()
+// WriteServerState writes server state data to redis with expiration set to the value ttl.
+func (r *RDB) WriteServerState(ss *base.ServerState, ttl time.Duration) error {
+	info := ss.GetInfo()
	bytes, err := json.Marshal(info)
	if err != nil {
		return err
	}
	var args []interface{} // args to the lua script
	exp := time.Now().Add(ttl).UTC()
-	workers := ps.GetWorkers()
+	workers := ss.GetWorkers()
	args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
	for _, w := range workers {
		bytes, err := json.Marshal(w)
@@ -502,17 +502,17 @@ func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error
		}
		args = append(args, w.ID.String(), bytes)
	}
-	pkey := base.ProcessInfoKey(info.Host, info.PID)
-	wkey := base.WorkersKey(info.Host, info.PID)
+	skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
+	wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
	return writeProcessInfoCmd.Run(r.client,
-		[]string{pkey, base.AllProcesses, wkey, base.AllWorkers},
+		[]string{skey, base.AllServers, wkey, base.AllWorkers},
		args...).Err()
}

-// KEYS[1] -> asynq:ps
-// KEYS[2] -> asynq:ps:<host:pid>
+// KEYS[1] -> asynq:servers
+// KEYS[2] -> asynq:servers:<host:pid:sid>
// KEYS[3] -> asynq:workers
-// KEYS[4] -> asynq:workers<host:pid>
+// KEYS[4] -> asynq:workers<host:pid:sid>
var clearProcessInfoCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2])
@@ -520,14 +520,14 @@ redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`)

-// ClearProcessState deletes process state data from redis.
-func (r *RDB) ClearProcessState(ps *base.ProcessState) error {
-	info := ps.Get()
-	host, pid := info.Host, info.PID
-	pkey := base.ProcessInfoKey(host, pid)
-	wkey := base.WorkersKey(host, pid)
+// ClearServerState deletes server state data from redis.
+func (r *RDB) ClearServerState(ss *base.ServerState) error {
+	info := ss.GetInfo()
+	host, pid, id := info.Host, info.PID, info.ServerID
+	skey := base.ServerInfoKey(host, pid, id)
+	wkey := base.WorkersKey(host, pid, id)
	return clearProcessInfoCmd.Run(r.client,
-		[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err()
+		[]string{base.AllServers, skey, base.AllWorkers, wkey}).Err()
}

// CancelationPubSub returns a pubsub for cancelation messages.
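
For orientation, the key scheme that WriteServerState and ClearServerState operate on (following the ServerInfoKey/WorkersKey tests shown earlier) can be sketched as follows; the host, pid, and server-ID values here are made up:

package example

import (
	"fmt"

	"github.com/hibiken/asynq/internal/base"
)

// PrintServerKeys shows the per-server keys written by WriteServerState and
// removed by ClearServerState. Illustrative values only.
func PrintServerKeys() {
	skey := base.ServerInfoKey("localhost", 9876, "server123")
	wkey := base.WorkersKey("localhost", 9876, "server123")
	fmt.Println(skey) // asynq:servers:localhost:9876:server123 (ServerInfo JSON, with TTL)
	fmt.Println(wkey) // asynq:workers:localhost:9876:server123 (hash of WorkerInfo, with TTL)
	// Both keys are also registered in the asynq:servers and asynq:workers
	// sorted sets, scored by their expiration time.
}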


@@ -862,60 +862,61 @@ func TestCheckAndEnqueue(t *testing.T) {
} }
} }
func TestWriteProcessState(t *testing.T) { func TestWriteServerState(t *testing.T) {
r := setup(t) r := setup(t)
host, pid := "localhost", 98765
queues := map[string]int{"default": 2, "email": 5, "low": 1} queues := map[string]int{"default": 2, "email": 5, "low": 1}
started := time.Now() started := time.Now()
ps := base.NewProcessState(host, pid, 10, queues, false) ss := base.NewServerState("localhost", 4242, 10, queues, false)
ps.SetStarted(started) ss.SetStarted(started)
ps.SetStatus(base.StatusRunning) ss.SetStatus(base.StatusRunning)
ttl := 5 * time.Second ttl := 5 * time.Second
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
err := r.WriteProcessState(ps, ttl) err := r.WriteServerState(ss, ttl)
if err != nil { if err != nil {
t.Errorf("r.WriteProcessState returned an error: %v", err) t.Errorf("r.WriteServerState returned an error: %v", err)
} }
// Check ProcessInfo was written correctly // Check ServerInfo was written correctly
pkey := base.ProcessInfoKey(host, pid) info := ss.GetInfo()
data := r.client.Get(pkey).Val() skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
var got base.ProcessInfo data := r.client.Get(skey).Val()
var got base.ServerInfo
err = json.Unmarshal([]byte(data), &got) err = json.Unmarshal([]byte(data), &got)
if err != nil { if err != nil {
t.Fatalf("could not decode json: %v", err) t.Fatalf("could not decode json: %v", err)
} }
want := base.ProcessInfo{ want := base.ServerInfo{
Host: "localhost", Host: info.Host,
PID: 98765, PID: info.PID,
Concurrency: 10, Concurrency: info.Concurrency,
Queues: map[string]int{"default": 2, "email": 5, "low": 1}, Queues: map[string]int{"default": 2, "email": 5, "low": 1},
StrictPriority: false, StrictPriority: false,
Status: "running", Status: "running",
Started: started, Started: started,
ActiveWorkerCount: 0, ActiveWorkerCount: 0,
} }
if diff := cmp.Diff(want, got); diff != "" { ignoreOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s", if diff := cmp.Diff(want, got, ignoreOpt); diff != "" {
t.Errorf("persisted ServerInfo was %v, want %v; (-want,+got)\n%s",
got, want, diff) got, want, diff)
} }
// Check ProcessInfo TTL was set correctly // Check ServerInfo TTL was set correctly
gotTTL := r.client.TTL(pkey).Val() gotTTL := r.client.TTL(skey).Val()
if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl) t.Errorf("TTL of %q was %v, want %v", skey, gotTTL, ttl)
} }
// Check ProcessInfo key was added to the set correctly // Check ServerInfo key was added to the set correctly
gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val() gotProcesses := r.client.ZRange(base.AllServers, 0, -1).Val()
wantProcesses := []string{pkey} wantProcesses := []string{skey}
if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" { if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses) t.Errorf("%q contained %v, want %v", base.AllServers, gotProcesses, wantProcesses)
} }
// Check WorkersInfo was written correctly // Check WorkersInfo was written correctly
wkey := base.WorkersKey(host, pid) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
workerExist := r.client.Exists(wkey).Val() workerExist := r.client.Exists(wkey).Val()
if workerExist != 0 { if workerExist != 0 {
t.Errorf("%q key exists", wkey) t.Errorf("%q key exists", wkey)
@@ -928,9 +929,8 @@ func TestWriteProcessState(t *testing.T) {
} }
} }
func TestWriteProcessStateWithWorkers(t *testing.T) { func TestWriteServerStateWithWorkers(t *testing.T) {
r := setup(t) r := setup(t)
host, pid := "localhost", 98765
queues := map[string]int{"default": 2, "email": 5, "low": 1} queues := map[string]int{"default": 2, "email": 5, "low": 1}
concurrency := 10 concurrency := 10
@@ -939,31 +939,33 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
w2Started := time.Now().Add(-time.Second) w2Started := time.Now().Add(-time.Second)
msg1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "123"}) msg1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "123"})
msg2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/imgfile"}) msg2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/imgfile"})
ps := base.NewProcessState(host, pid, concurrency, queues, false) ss := base.NewServerState("127.0.01", 4242, concurrency, queues, false)
ps.SetStarted(started) ss.SetStarted(started)
ps.SetStatus(base.StatusRunning) ss.SetStatus(base.StatusRunning)
ps.AddWorkerStats(msg1, w1Started) ss.AddWorkerStats(msg1, w1Started)
ps.AddWorkerStats(msg2, w2Started) ss.AddWorkerStats(msg2, w2Started)
ttl := 5 * time.Second ttl := 5 * time.Second
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
err := r.WriteProcessState(ps, ttl) err := r.WriteServerState(ss, ttl)
if err != nil { if err != nil {
t.Errorf("r.WriteProcessState returned an error: %v", err) t.Errorf("r.WriteServerState returned an error: %v", err)
} }
// Check ProcessInfo was written correctly // Check ServerInfo was written correctly
pkey := base.ProcessInfoKey(host, pid) info := ss.GetInfo()
data := r.client.Get(pkey).Val() skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
var got base.ProcessInfo data := r.client.Get(skey).Val()
var got base.ServerInfo
err = json.Unmarshal([]byte(data), &got) err = json.Unmarshal([]byte(data), &got)
if err != nil { if err != nil {
t.Fatalf("could not decode json: %v", err) t.Fatalf("could not decode json: %v", err)
} }
want := base.ProcessInfo{ want := base.ServerInfo{
Host: host, Host: info.Host,
PID: pid, PID: info.PID,
ServerID: info.ServerID,
Concurrency: concurrency, Concurrency: concurrency,
Queues: queues, Queues: queues,
StrictPriority: false, StrictPriority: false,
@@ -972,23 +974,23 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
ActiveWorkerCount: 2, ActiveWorkerCount: 2,
} }
if diff := cmp.Diff(want, got); diff != "" { if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s", t.Errorf("persisted ServerInfo was %v, want %v; (-want,+got)\n%s",
got, want, diff) got, want, diff)
} }
// Check ProcessInfo TTL was set correctly // Check ServerInfo TTL was set correctly
gotTTL := r.client.TTL(pkey).Val() gotTTL := r.client.TTL(skey).Val()
if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) { if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl) t.Errorf("TTL of %q was %v, want %v", skey, gotTTL, ttl)
} }
// Check ProcessInfo key was added to the set correctly // Check ServerInfo key was added to the set correctly
gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val() gotProcesses := r.client.ZRange(base.AllServers, 0, -1).Val()
wantProcesses := []string{pkey} wantProcesses := []string{skey}
if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" { if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses) t.Errorf("%q contained %v, want %v", base.AllServers, gotProcesses, wantProcesses)
} }
// Check WorkersInfo was written correctly // Check WorkersInfo was written correctly
wkey := base.WorkersKey(host, pid) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
wdata := r.client.HGetAll(wkey).Val() wdata := r.client.HGetAll(wkey).Val()
if len(wdata) != 2 { if len(wdata) != 2 {
t.Fatalf("HGETALL %q returned a hash of size %d, want 2", wkey, len(wdata)) t.Fatalf("HGETALL %q returned a hash of size %d, want 2", wkey, len(wdata))
@@ -1003,8 +1005,8 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
} }
wantWorkers := map[string]*base.WorkerInfo{ wantWorkers := map[string]*base.WorkerInfo{
msg1.ID.String(): { msg1.ID.String(): {
Host: host, Host: info.Host,
PID: pid, PID: info.PID,
ID: msg1.ID, ID: msg1.ID,
Type: msg1.Type, Type: msg1.Type,
Queue: msg1.Queue, Queue: msg1.Queue,
@@ -1012,8 +1014,8 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
Started: w1Started, Started: w1Started,
}, },
msg2.ID.String(): { msg2.ID.String(): {
Host: host, Host: info.Host,
PID: pid, PID: info.PID,
ID: msg2.ID, ID: msg2.ID,
Type: msg2.Type, Type: msg2.Type,
Queue: msg2.Queue, Queue: msg2.Queue,
@@ -1039,27 +1041,28 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
} }
} }
func TestClearProcessState(t *testing.T) { func TestClearServerState(t *testing.T) {
r := setup(t) r := setup(t)
host, pid := "127.0.0.1", 1234 ss := base.NewServerState("127.0.01", 4242, 10, map[string]int{"default": 1}, false)
info := ss.GetInfo()
h.FlushDB(t, r.client) h.FlushDB(t, r.client)
pkey := base.ProcessInfoKey(host, pid) skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
wkey := base.WorkersKey(host, pid) wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
otherPKey := base.ProcessInfoKey("otherhost", 12345) otherSKey := base.ServerInfoKey("otherhost", 12345, "server98")
otherWKey := base.WorkersKey("otherhost", 12345) otherWKey := base.WorkersKey("otherhost", 12345, "server98")
// Populate the keys. // Populate the keys.
if err := r.client.Set(pkey, "process-info", 0).Err(); err != nil { if err := r.client.Set(skey, "process-info", 0).Err(); err != nil {
t.Fatal(err) t.Fatal(err)
} }
if err := r.client.HSet(wkey, "worker-key", "worker-info").Err(); err != nil { if err := r.client.HSet(wkey, "worker-key", "worker-info").Err(); err != nil {
t.Fatal(err) t.Fatal(err)
} }
if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: pkey}).Err(); err != nil { if err := r.client.ZAdd(base.AllServers, &redis.Z{Member: skey}).Err(); err != nil {
t.Fatal(err) t.Fatal(err)
} }
if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: otherPKey}).Err(); err != nil { if err := r.client.ZAdd(base.AllServers, &redis.Z{Member: otherSKey}).Err(); err != nil {
t.Fatal(err) t.Fatal(err)
} }
if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Member: wkey}).Err(); err != nil { if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Member: wkey}).Err(); err != nil {
@@ -1069,24 +1072,22 @@ func TestClearProcessState(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false) err := r.ClearServerState(ss)
err := r.ClearProcessState(ps)
if err != nil { if err != nil {
t.Fatalf("(*RDB).ClearProcessState failed: %v", err) t.Fatalf("(*RDB).ClearServerState failed: %v", err)
} }
// Check all keys are cleared // Check all keys are cleared
if r.client.Exists(pkey).Val() != 0 { if r.client.Exists(skey).Val() != 0 {
t.Errorf("Redis key %q exists", pkey) t.Errorf("Redis key %q exists", skey)
} }
if r.client.Exists(wkey).Val() != 0 { if r.client.Exists(wkey).Val() != 0 {
t.Errorf("Redis key %q exists", wkey) t.Errorf("Redis key %q exists", wkey)
} }
gotProcessKeys := r.client.ZRange(base.AllProcesses, 0, -1).Val() gotProcessKeys := r.client.ZRange(base.AllServers, 0, -1).Val()
wantProcessKeys := []string{otherPKey} wantProcessKeys := []string{otherSKey}
if diff := cmp.Diff(wantProcessKeys, gotProcessKeys); diff != "" { if diff := cmp.Diff(wantProcessKeys, gotProcessKeys); diff != "" {
t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcessKeys, wantProcessKeys) t.Errorf("%q contained %v, want %v", base.AllServers, gotProcessKeys, wantProcessKeys)
} }
gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val() gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val()
wantWorkerKeys := []string{otherWKey} wantWorkerKeys := []string{otherWKey}


@@ -0,0 +1,187 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package testbroker exports a broker implementation that should be used in package testing.
package testbroker
import (
"errors"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
)
var errRedisDown = errors.New("asynqtest: redis is down")
// TestBroker is a broker implementation which enables
// to simulate Redis failure in tests.
type TestBroker struct {
mu sync.Mutex
sleeping bool
// real broker
real base.Broker
}
func NewTestBroker(b base.Broker) *TestBroker {
return &TestBroker{real: b}
}
func (tb *TestBroker) Sleep() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = true
}
func (tb *TestBroker) Wakeup() {
tb.mu.Lock()
defer tb.mu.Unlock()
tb.sleeping = false
}
func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Enqueue(msg)
}
func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.EnqueueUnique(msg, ttl)
}
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.Dequeue(qnames...)
}
func (tb *TestBroker) Done(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Done(msg)
}
func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(msg)
}
func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(msg, processAt, ttl)
}
func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(msg, processAt, errMsg)
}
func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Kill(msg, errMsg)
}
func (tb *TestBroker) RequeueAll() (int64, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return 0, errRedisDown
}
return tb.real.RequeueAll()
}
func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.CheckAndEnqueue()
}
func (tb *TestBroker) WriteServerState(ss *base.ServerState, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.WriteServerState(ss, ttl)
}
func (tb *TestBroker) ClearServerState(ss *base.ServerState) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ClearServerState(ss)
}
func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.CancelationPubSub()
}
func (tb *TestBroker) PublishCancelation(id string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.PublishCancelation(id)
}
func (tb *TestBroker) Close() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Close()
}
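
The intended use of TestBroker (as suggested by the type above; the test below is a sketch, and the Redis address and assertions are placeholders) is to wrap the real rdb.RDB broker and toggle Sleep/Wakeup to simulate Redis becoming unreachable and then recovering:

package example

import (
	"testing"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/testbroker"
)

func TestWithFlakyRedis(t *testing.T) {
	// Wrap the real broker; the constructor call mirrors the
	// rdb.NewRDB(createRedisClient(r)) usage shown in this diff.
	r := rdb.NewRDB(redis.NewClient(&redis.Options{Addr: "localhost:6379"}))
	broker := testbroker.NewTestBroker(r)

	broker.Sleep() // every Broker method now fails with errRedisDown
	if _, err := broker.RequeueAll(); err == nil {
		t.Error("RequeueAll succeeded while the broker was down; want an error")
	}

	broker.Wakeup() // calls are delegated to the real broker again
	if _, err := broker.RequeueAll(); err != nil {
		t.Errorf("RequeueAll failed after wakeup: %v", err)
	}
}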


@@ -19,9 +19,9 @@ import (
type processor struct {
	logger Logger
-	rdb    *rdb.RDB
-	ps     *base.ProcessState
+	broker base.Broker
+	ss     *base.ServerState

	handler Handler
@@ -34,6 +34,8 @@ type processor struct {
	errHandler ErrorHandler

+	shutdownTimeout time.Duration
+
	// channel via which to send sync requests to syncer.
	syncRequestCh chan<- *syncRequest
@@ -61,30 +63,40 @@ type processor struct {
type retryDelayFunc func(n int, err error, task *Task) time.Duration

+type newProcessorParams struct {
+	logger          Logger
+	broker          base.Broker
+	ss              *base.ServerState
+	retryDelayFunc  retryDelayFunc
+	syncCh          chan<- *syncRequest
+	cancelations    *base.Cancelations
+	errHandler      ErrorHandler
+	shutdownTimeout time.Duration
+}
+
// newProcessor constructs a new processor.
-func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc,
-	syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor {
-	info := ps.Get()
+func newProcessor(params newProcessorParams) *processor {
+	info := params.ss.GetInfo()
	qcfg := normalizeQueueCfg(info.Queues)
	orderedQueues := []string(nil)
	if info.StrictPriority {
		orderedQueues = sortByPriority(qcfg)
	}
	return &processor{
-		logger:         l,
-		rdb:            r,
-		ps:             ps,
+		logger:         params.logger,
+		broker:         params.broker,
+		ss:             params.ss,
		queueConfig:    qcfg,
		orderedQueues:  orderedQueues,
-		retryDelayFunc: fn,
-		syncRequestCh:  syncCh,
-		cancelations:   c,
+		retryDelayFunc: params.retryDelayFunc,
+		syncRequestCh:  params.syncCh,
+		cancelations:   params.cancelations,
		errLogLimiter:  rate.NewLimiter(rate.Every(3*time.Second), 1),
		sema:           make(chan struct{}, info.Concurrency),
		done:           make(chan struct{}),
		abort:          make(chan struct{}),
		quit:           make(chan struct{}),
-		errHandler:     errHandler,
+		errHandler:     params.errHandler,
		handler:        HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
	}
}
@@ -106,9 +118,7 @@ func (p *processor) stop() {
func (p *processor) terminate() {
	p.stop()

-	// IDEA: Allow user to customize this timeout value.
-	const timeout = 8 * time.Second
-	time.AfterFunc(timeout, func() { close(p.quit) })
+	time.AfterFunc(p.shutdownTimeout, func() { close(p.quit) })
	p.logger.Info("Waiting for all workers to finish...")

	// send cancellation signal to all in-progress task handlers
@@ -147,8 +157,8 @@ func (p *processor) start(wg *sync.WaitGroup) {
// process the task.
func (p *processor) exec() {
	qnames := p.queues()
-	msg, err := p.rdb.Dequeue(qnames...)
-	if err == rdb.ErrNoProcessableTask {
+	msg, err := p.broker.Dequeue(qnames...)
+	if err == rdb.ErrNoProcessableTask { // TODO: Need to decouple this error from rdb to support other brokers
		// queues are empty, this is a normal behavior.
		if len(p.queueConfig) > 1 {
			// sleep to avoid slamming redis and let scheduler move tasks into queues.
@@ -171,21 +181,23 @@ func (p *processor) exec() {
		p.requeue(msg)
		return
	case p.sema <- struct{}{}: // acquire token
-		p.ps.AddWorkerStats(msg, time.Now())
+		p.ss.AddWorkerStats(msg, time.Now())
		go func() {
			defer func() {
-				p.ps.DeleteWorkerStats(msg)
+				p.ss.DeleteWorkerStats(msg)
				<-p.sema /* release token */
			}()

+			ctx, cancel := createContext(msg)
+			p.cancelations.Add(msg.ID.String(), cancel)
+			defer func() {
+				cancel()
+				p.cancelations.Delete(msg.ID.String())
+			}()
+
			resCh := make(chan error, 1)
			task := NewTask(msg.Type, msg.Payload)
-			ctx, cancel := createContext(msg)
-			p.cancelations.Add(msg.ID.String(), cancel)
-			go func() {
-				resCh <- perform(ctx, task, p.handler)
-				p.cancelations.Delete(msg.ID.String())
-			}()
+			go func() { resCh <- perform(ctx, task, p.handler) }()

			select {
			case <-p.quit:
@@ -217,7 +229,7 @@ func (p *processor) exec() {
// restore moves all tasks from "in-progress" back to queue
// to restore all unfinished tasks.
func (p *processor) restore() {
-	n, err := p.rdb.RequeueAll()
+	n, err := p.broker.RequeueAll()
	if err != nil {
		p.logger.Error("Could not restore unfinished tasks: %v", err)
	}
@@ -227,20 +239,20 @@ func (p *processor) restore() {
}

func (p *processor) requeue(msg *base.TaskMessage) {
-	err := p.rdb.Requeue(msg)
+	err := p.broker.Requeue(msg)
	if err != nil {
		p.logger.Error("Could not push task id=%s back to queue: %v", msg.ID, err)
	}
}

func (p *processor) markAsDone(msg *base.TaskMessage) {
-	err := p.rdb.Done(msg)
+	err := p.broker.Done(msg)
	if err != nil {
		errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.rdb.Done(msg)
+				return p.broker.Done(msg)
			},
			errMsg: errMsg,
		}
@@ -250,13 +262,13 @@ func (p *processor) markAsDone(msg *base.TaskMessage) {
func (p *processor) retry(msg *base.TaskMessage, e error) {
	d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
	retryAt := time.Now().Add(d)
-	err := p.rdb.Retry(msg, retryAt, e.Error())
+	err := p.broker.Retry(msg, retryAt, e.Error())
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.rdb.Retry(msg, retryAt, e.Error())
+				return p.broker.Retry(msg, retryAt, e.Error())
			},
			errMsg: errMsg,
		}
@@ -265,13 +277,13 @@ func (p *processor) retry(msg *base.TaskMessage, e error) {
func (p *processor) kill(msg *base.TaskMessage, e error) {
	p.logger.Warn("Retry exhausted for task id=%s", msg.ID)
-	err := p.rdb.Kill(msg, e.Error())
+	err := p.broker.Kill(msg, e.Error())
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.rdb.Kill(msg, e.Error())
+				return p.broker.Kill(msg, e.Error())
			},
			errMsg: errMsg,
		}


@@ -37,19 +37,16 @@ func TestProcessorSuccess(t *testing.T) {
tests := []struct { tests := []struct {
enqueued []*base.TaskMessage // initial default queue state enqueued []*base.TaskMessage // initial default queue state
incoming []*base.TaskMessage // tasks to be enqueued during run incoming []*base.TaskMessage // tasks to be enqueued during run
wait time.Duration // wait duration between starting and stopping processor for this test case
wantProcessed []*Task // tasks to be processed at the end wantProcessed []*Task // tasks to be processed at the end
}{ }{
{ {
enqueued: []*base.TaskMessage{m1}, enqueued: []*base.TaskMessage{m1},
incoming: []*base.TaskMessage{m2, m3, m4}, incoming: []*base.TaskMessage{m2, m3, m4},
wait: time.Second,
wantProcessed: []*Task{t1, t2, t3, t4}, wantProcessed: []*Task{t1, t2, t3, t4},
}, },
{ {
enqueued: []*base.TaskMessage{}, enqueued: []*base.TaskMessage{},
incoming: []*base.TaskMessage{m1}, incoming: []*base.TaskMessage{m1},
wait: time.Second,
wantProcessed: []*Task{t1}, wantProcessed: []*Task{t1},
}, },
} }
@@ -67,13 +64,20 @@ func TestProcessorSuccess(t *testing.T) {
processed = append(processed, task) processed = append(processed, task)
return nil return nil
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) ss := base.NewServerState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations() p := newProcessor(newProcessorParams{
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) logger: testLogger,
broker: rdbClient,
ss: ss,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -81,7 +85,7 @@ func TestProcessorSuccess(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
} }
time.Sleep(tc.wait) time.Sleep(time.Second) // wait for one second to allow all enqueued tasks to be processed.
p.terminate() p.terminate()
if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" { if diff := cmp.Diff(tc.wantProcessed, processed, sortTaskOpt, cmp.AllowUnexported(Payload{})); diff != "" {
@@ -165,13 +169,20 @@ func TestProcessorRetry(t *testing.T) {
defer mu.Unlock() defer mu.Unlock()
n++ n++
} }
ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false) ss := base.NewServerState("localhost", 1234, 10, defaultQueueConfig, false)
cancelations := base.NewCancelations() p := newProcessor(newProcessorParams{
p := newProcessor(testLogger, rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler)) logger: testLogger,
broker: rdbClient,
ss: ss,
retryDelayFunc: delayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
errHandler: ErrorHandlerFunc(errHandler),
shutdownTimeout: defaultShutdownTimeout,
})
p.handler = tc.handler p.handler = tc.handler
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
for _, msg := range tc.incoming { for _, msg := range tc.incoming {
err := rdbClient.Enqueue(msg) err := rdbClient.Enqueue(msg)
if err != nil { if err != nil {
@@ -182,7 +193,7 @@ func TestProcessorRetry(t *testing.T) {
time.Sleep(tc.wait) time.Sleep(tc.wait)
p.terminate() p.terminate()
cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to second difference in zset score cmpOpt := cmpopts.EquateApprox(0, float64(time.Second)) // allow up to a second difference in zset score
gotRetry := h.GetRetryEntries(t, r) gotRetry := h.GetRetryEntries(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" { if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortZSetEntryOpt, cmpOpt); diff != "" {
t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff) t.Errorf("mismatch found in %q after running processor; (-want, +got)\n%s", base.RetryQueue, diff)
@@ -231,9 +242,17 @@ func TestProcessorQueues(t *testing.T) {
} }
for _, tc := range tests { for _, tc := range tests {
cancelations := base.NewCancelations() ss := base.NewServerState("localhost", 1234, 10, tc.queueCfg, false)
ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false) p := newProcessor(newProcessorParams{
p := newProcessor(testLogger, nil, ps, defaultDelayFunc, nil, cancelations, nil) logger: testLogger,
broker: nil,
ss: ss,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
})
got := p.queues() got := p.queues()
if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" { if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s", t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -299,13 +318,20 @@ func TestProcessorWithStrictPriority(t *testing.T) {
"low": 1, "low": 1,
} }
// Note: Set concurrency to 1 to make sure tasks are processed one at a time. // Note: Set concurrency to 1 to make sure tasks are processed one at a time.
cancelations := base.NewCancelations() ss := base.NewServerState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/) p := newProcessor(newProcessorParams{
p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil) logger: testLogger,
broker: rdbClient,
ss: ss,
retryDelayFunc: defaultDelayFunc,
syncCh: nil,
cancelations: base.NewCancelations(),
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
})
p.handler = HandlerFunc(handler) p.handler = HandlerFunc(handler)
var wg sync.WaitGroup p.start(&sync.WaitGroup{})
p.start(&wg)
time.Sleep(tc.wait) time.Sleep(tc.wait)
p.terminate() p.terminate()
@@ -446,3 +472,83 @@ func TestCreateContextWithoutTimeRestrictions(t *testing.T) {
t.Error("ctx.Done() blocked, want it to be non-blocking") t.Error("ctx.Done() blocked, want it to be non-blocking")
} }
} }
func TestGCD(t *testing.T) {
tests := []struct {
input []int
want int
}{
{[]int{6, 2, 12}, 2},
{[]int{3, 3, 3}, 3},
{[]int{6, 3, 1}, 1},
{[]int{1}, 1},
{[]int{1, 0, 2}, 1},
{[]int{8, 0, 4}, 4},
{[]int{9, 12, 18, 30}, 3},
}
for _, tc := range tests {
got := gcd(tc.input...)
if got != tc.want {
t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
}
}
}
func TestNormalizeQueueCfg(t *testing.T) {
tests := []struct {
input map[string]int
want map[string]int
}{
{
input: map[string]int{
"high": 100,
"default": 20,
"low": 5,
},
want: map[string]int{
"high": 20,
"default": 4,
"low": 1,
},
},
{
input: map[string]int{
"default": 10,
},
want: map[string]int{
"default": 1,
},
},
{
input: map[string]int{
"critical": 5,
"default": 1,
},
want: map[string]int{
"critical": 5,
"default": 1,
},
},
{
input: map[string]int{
"critical": 6,
"default": 3,
"low": 0,
},
want: map[string]int{
"critical": 2,
"default": 1,
"low": 0,
},
},
}
for _, tc := range tests {
got := normalizeQueueCfg(tc.input)
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
tc.input, got, tc.want, diff)
}
}
}


@@ -8,12 +8,12 @@ import (
"sync" "sync"
"time" "time"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/base"
) )
type scheduler struct { type scheduler struct {
logger Logger logger Logger
rdb *rdb.RDB broker base.Broker
// channel to communicate back to the long running "scheduler" goroutine. // channel to communicate back to the long running "scheduler" goroutine.
done chan struct{} done chan struct{}
@@ -25,14 +25,14 @@ type scheduler struct {
qnames []string qnames []string
} }
func newScheduler(l Logger, r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler { func newScheduler(l Logger, b base.Broker, avgInterval time.Duration, qcfg map[string]int) *scheduler {
var qnames []string var qnames []string
for q := range qcfg { for q := range qcfg {
qnames = append(qnames, q) qnames = append(qnames, q)
} }
return &scheduler{ return &scheduler{
logger: l, logger: l,
rdb: r, broker: b,
done: make(chan struct{}), done: make(chan struct{}),
avgInterval: avgInterval, avgInterval: avgInterval,
qnames: qnames, qnames: qnames,
@@ -63,7 +63,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
} }
func (s *scheduler) exec() { func (s *scheduler) exec() {
if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil { if err := s.broker.CheckAndEnqueue(s.qnames...); err != nil {
s.logger.Error("Could not enqueue scheduled tasks: %v", err) s.logger.Error("Could not enqueue scheduled tasks: %v", err)
} }
} }


@@ -6,13 +6,13 @@ package asynq
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"math" "math"
"math/rand" "math/rand"
"os" "os"
"os/signal" "runtime"
"sync" "sync"
"syscall"
"time" "time"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
@@ -20,29 +20,27 @@ import (
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
) )
// Background is responsible for managing the background-task processing. // Server is responsible for managing the background-task processing.
// //
// Background manages task queues to process tasks. // Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, background will // If the processing of a task is unsuccessful, server will
// schedule it for a retry until either the task gets processed successfully // schedule it for a retry.
// or it exhausts its max retry count. // A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
// //
// Once a task exhausts its retries, it will be moved to the "dead" queue and // If a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met // will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the // (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time). // queue for a certain amount of time).
type Background struct { type Server struct {
mu sync.Mutex ss *base.ServerState
running bool
ps *base.ProcessState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
logger Logger logger Logger
rdb *rdb.RDB broker base.Broker
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
scheduler *scheduler scheduler *scheduler
processor *processor processor *processor
syncer *syncer syncer *syncer
@@ -50,11 +48,12 @@ type Background struct {
subscriber *subscriber subscriber *subscriber
} }
// Config specifies the background-task processing behavior. // Config specifies the server's background-task processing behavior.
type Config struct { type Config struct {
// Maximum number of concurrent processing of tasks. // Maximum number of concurrent processing of tasks.
// //
// If set to a zero or negative value, NewBackground will overwrite the value to one. // If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the currennt process.
Concurrency int Concurrency int
// Function to calculate retry delay for a failed task. // Function to calculate retry delay for a failed task.
@@ -69,7 +68,7 @@ type Config struct {
// List of queues to process with given priority value. Keys are the names of the // List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value. // queues and values are associated priority value.
// //
// If set to nil or not specified, the background will process only the "default" queue. // If set to nil or not specified, the server will process only the "default" queue.
// //
// Priority is treated as follows to avoid starving low priority queues. // Priority is treated as follows to avoid starving low priority queues.
// //
@@ -108,10 +107,16 @@ type Config struct {
// ErrorHandler: asynq.ErrorHandlerFunc(reportError) // ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler ErrorHandler ErrorHandler
// Logger specifies the logger used by the background instance. // Logger specifies the logger used by the server instance.
// //
// If unset, default logger is used. // If unset, default logger is used.
Logger Logger Logger Logger
// ShutdownTimeout specifies the duration to wait to let workers finish their tasks
// before forcing them to abort when stopping the server.
//
// If unset or zero, default timeout of 8 seconds is used.
ShutdownTimeout time.Duration
} }
// An ErrorHandler handles errors returned by the task handler. // An ErrorHandler handles errors returned by the task handler.
@@ -158,12 +163,14 @@ var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1, base.DefaultQueueName: 1,
} }
// NewBackground returns a new Background given a redis connection option const defaultShutdownTimeout = 8 * time.Second
// NewServer returns a new Server given a redis connection option
// and background processing configuration. // and background processing configuration.
func NewBackground(r RedisConnOpt, cfg *Config) *Background { func NewServer(r RedisConnOpt, cfg Config) *Server {
n := cfg.Concurrency n := cfg.Concurrency
if n < 1 { if n < 1 {
n = 1 n = runtime.NumCPU()
} }
delayFunc := cfg.RetryDelayFunc delayFunc := cfg.RetryDelayFunc
if delayFunc == nil { if delayFunc == nil {
@@ -182,6 +189,10 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
if logger == nil { if logger == nil {
logger = log.NewLogger(os.Stderr) logger = log.NewLogger(os.Stderr)
} }
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout
}
host, err := os.Hostname() host, err := os.Hostname()
if err != nil { if err != nil {
@@ -190,18 +201,27 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
pid := os.Getpid() pid := os.Getpid()
rdb := rdb.NewRDB(createRedisClient(r)) rdb := rdb.NewRDB(createRedisClient(r))
ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority) ss := base.NewServerState(host, pid, n, queues, cfg.StrictPriority)
syncCh := make(chan *syncRequest) syncCh := make(chan *syncRequest)
cancels := base.NewCancelations() cancels := base.NewCancelations()
syncer := newSyncer(logger, syncCh, 5*time.Second) syncer := newSyncer(logger, syncCh, 5*time.Second)
heartbeater := newHeartbeater(logger, rdb, ps, 5*time.Second) heartbeater := newHeartbeater(logger, rdb, ss, 5*time.Second)
scheduler := newScheduler(logger, rdb, 5*time.Second, queues) scheduler := newScheduler(logger, rdb, 5*time.Second, queues)
processor := newProcessor(logger, rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
subscriber := newSubscriber(logger, rdb, cancels) subscriber := newSubscriber(logger, rdb, cancels)
return &Background{ processor := newProcessor(newProcessorParams{
logger: logger, logger: logger,
rdb: rdb, broker: rdb,
ps: ps, ss: ss,
retryDelayFunc: delayFunc,
syncCh: syncCh,
cancelations: cancels,
errHandler: cfg.ErrorHandler,
shutdownTimeout: shutdownTimeout,
})
return &Server{
ss: ss,
logger: logger,
broker: rdb,
scheduler: scheduler, scheduler: scheduler,
processor: processor, processor: processor,
syncer: syncer, syncer: syncer,
@@ -232,82 +252,95 @@ func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
 	return fn(ctx, task)
 }

+// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
+var ErrServerStopped = errors.New("asynq: the server has been stopped")
+
 // Run starts the background-task processing and blocks until
 // an os signal to exit the program is received. Once it receives
-// a signal, it gracefully shuts down all pending workers and other
+// a signal, it gracefully shuts down all active workers and other
 // goroutines to process the tasks.
-func (bg *Background) Run(handler Handler) {
+//
+// Run returns any error encountered during server startup time.
+// If the server has already been stopped, ErrServerStopped is returned.
+func (srv *Server) Run(handler Handler) error {
+	if err := srv.Start(handler); err != nil {
+		return err
+	}
+	srv.waitForSignals()
+	srv.Stop()
+	return nil
+}
+
+// Start starts the worker server. Once the server has started,
+// it pulls tasks off queues and starts a worker goroutine for each task.
+// Tasks are processed concurrently by the workers up to the number of
+// concurrency specified at the initialization time.
+//
+// Start returns any error encountered during server startup time.
+// If the server has already been stopped, ErrServerStopped is returned.
+func (srv *Server) Start(handler Handler) error {
+	if handler == nil {
+		return fmt.Errorf("asynq: server cannot run with nil handler")
+	}
+	switch srv.ss.Status() {
+	case base.StatusRunning:
+		return fmt.Errorf("asynq: the server is already running")
+	case base.StatusStopped:
+		return ErrServerStopped
+	}
+	srv.ss.SetStatus(base.StatusRunning)
+	srv.processor.handler = handler
+
 	type prefixLogger interface {
 		SetPrefix(prefix string)
 	}
 	// If logger supports setting prefix, then set prefix for log output.
-	if l, ok := bg.logger.(prefixLogger); ok {
+	if l, ok := srv.logger.(prefixLogger); ok {
 		l.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
 	}
-	bg.logger.Info("Starting processing")
-
-	bg.start(handler)
-	defer bg.stop()
-
-	bg.logger.Info("Send signal TSTP to stop processing new tasks")
-	bg.logger.Info("Send signal TERM or INT to terminate the process")
-
-	// Wait for a signal to terminate.
-	sigs := make(chan os.Signal, 1)
-	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
-	for {
-		sig := <-sigs
-		if sig == syscall.SIGTSTP {
-			bg.processor.stop()
-			bg.ps.SetStatus(base.StatusStopped)
-			continue
-		}
-		break
-	}
-	fmt.Println()
-	bg.logger.Info("Starting graceful shutdown")
-}
-
-// starts the background-task processing.
-func (bg *Background) start(handler Handler) {
-	bg.mu.Lock()
-	defer bg.mu.Unlock()
-	if bg.running {
-		return
-	}
-
-	bg.running = true
-	bg.processor.handler = handler
-
-	bg.heartbeater.start(&bg.wg)
-	bg.subscriber.start(&bg.wg)
-	bg.syncer.start(&bg.wg)
-	bg.scheduler.start(&bg.wg)
-	bg.processor.start(&bg.wg)
-}
-
-// stops the background-task processing.
-func (bg *Background) stop() {
-	bg.mu.Lock()
-	defer bg.mu.Unlock()
-	if !bg.running {
+	srv.logger.Info("Starting processing")
+
+	srv.heartbeater.start(&srv.wg)
+	srv.subscriber.start(&srv.wg)
+	srv.syncer.start(&srv.wg)
+	srv.scheduler.start(&srv.wg)
+	srv.processor.start(&srv.wg)
+	return nil
+}
+
+// Stop stops the worker server.
+// It gracefully closes all active workers. The server will wait for
+// active workers to finish processing tasks for duration specified in Config.ShutdownTimeout.
+// If worker didn't finish processing a task during the timeout, the task will be pushed back to Redis.
+func (srv *Server) Stop() {
+	switch srv.ss.Status() {
+	case base.StatusIdle, base.StatusStopped:
+		// server is not running, do nothing and return.
 		return
 	}

+	fmt.Println() // print newline for prettier log.
+	srv.logger.Info("Starting graceful shutdown")
+
 	// Note: The order of termination is important.
 	// Sender goroutines should be terminated before the receiver goroutines.
-	//
 	// processor -> syncer (via syncCh)
-	bg.scheduler.terminate()
-	bg.processor.terminate()
-	bg.syncer.terminate()
-	bg.subscriber.terminate()
-	bg.heartbeater.terminate()
-
-	bg.wg.Wait()
-
-	bg.rdb.Close()
-	bg.running = false
-
-	bg.logger.Info("Bye!")
+	srv.scheduler.terminate()
+	srv.processor.terminate()
+	srv.syncer.terminate()
+	srv.subscriber.terminate()
+	srv.heartbeater.terminate()
+
+	srv.wg.Wait()
+
+	srv.broker.Close()
+	srv.ss.SetStatus(base.StatusStopped)
+
+	srv.logger.Info("Bye!")
+}
+
+// Quiet signals the server to stop pulling new tasks off queues.
+// Quiet should be used before stopping the server.
+func (srv *Server) Quiet() {
+	srv.processor.stop()
+	srv.ss.SetStatus(base.StatusQuiet)
 }
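Taken together, the renamed API reads roughly like the following minimal sketch of a worker program (the redis address, the 8-second timeout, and the handler body are illustrative, not taken from this diff):

package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		Concurrency:     10,
		ShutdownTimeout: 8 * time.Second, // how long Stop waits for active workers
	})

	// HandlerFunc adapts an ordinary function to the Handler interface.
	h := asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
		// Process t here; returning an error causes the task to be retried.
		return nil
	})

	// Run is Start + waitForSignals + Stop: TERM or INT exits,
	// and TSTP (on unix builds) quiets the server so it stops pulling new tasks.
	if err := srv.Run(h); err != nil {
		log.Fatal(err)
	}
}

Programs that manage their own lifecycle can call Start, Quiet, and Stop directly instead of Run, as the tests below do with Start and Stop.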

server_test.go (new file, 210 lines)

@@ -0,0 +1,210 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"fmt"
"syscall"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
"go.uber.org/goleak"
)
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
c := NewClient(r)
srv := NewServer(r, Config{
Concurrency: 10,
})
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
srv.Stop()
}
func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
done := make(chan struct{})
// Make sure server exits when receiving TERM signal.
go func() {
time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{}
}()
go func() {
select {
case <-time.After(10 * time.Second):
t.Fatal("server did not stop after receiving TERM signal")
case <-done:
}
}()
mux := NewServeMux()
if err := srv.Run(mux); err != nil {
t.Fatal(err)
}
}
func TestServerErrServerStopped(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
srv.Stop()
err := srv.Start(handler)
if err != ErrServerStopped {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
}
}
func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
err := srv.Start(nil)
if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop()
}
}
func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
err := srv.Start(handler)
if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error")
}
srv.Stop()
}
func TestServerWithRedisDown(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
testBroker.Sleep()
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
time.Sleep(3 * time.Second)
srv.Stop()
}
func TestServerWithFlakyBroker(t *testing.T) {
// Make sure that server does not panic and exit if redis is down.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: redisAddr, DB: redisDB}, Config{})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
h := func(ctx context.Context, task *Task) error {
// force task retry.
if task.Type == "bad_task" {
return fmt.Errorf("could not process %q", task.Type)
}
time.Sleep(2 * time.Second)
return nil
}
err := srv.Start(HandlerFunc(h))
if err != nil {
t.Fatal(err)
}
for i := 0; i < 10; i++ {
err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
if err != nil {
t.Fatal(err)
}
err = c.Enqueue(NewTask("bad_task", nil))
if err != nil {
t.Fatal(err)
}
err = c.EnqueueIn(time.Duration(i)*time.Second, NewTask("scheduled", nil))
if err != nil {
t.Fatal(err)
}
}
// simulate redis going down.
testBroker.Sleep()
time.Sleep(3 * time.Second)
// simulate redis comes back online.
testBroker.Wakeup()
time.Sleep(3 * time.Second)
srv.Stop()
}

signals_unix.go (new file, 30 lines)

@@ -0,0 +1,30 @@
// +build linux bsd darwin
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/unix"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM, SIGINT, and SIGTSTP.
// SIGTERM and SIGINT will signal the process to exit.
// SIGTSTP will signal the process to stop processing new tasks.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TSTP to stop processing new tasks")
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
for {
sig := <-sigs
if sig == unix.SIGTSTP {
srv.Quiet()
continue
}
break
}
}

signals_windows.go (new file, 22 lines)

@@ -0,0 +1,22 @@
// +build windows
package asynq
import (
"os"
"os/signal"
"golang.org/x/sys/windows"
)
// waitForSignals waits for signals and handles them.
// It handles SIGTERM and SIGINT.
// SIGTERM and SIGINT will signal the process to exit.
//
// Note: Currently SIGTSTP is not supported for windows build.
func (srv *Server) waitForSignals() {
srv.logger.Info("Send signal TERM or INT to terminate the process")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}

subscriber.go

@@ -6,28 +6,33 @@ package asynq
 import (
 	"sync"
+	"time"

+	"github.com/go-redis/redis/v7"
 	"github.com/hibiken/asynq/internal/base"
-	"github.com/hibiken/asynq/internal/rdb"
 )

 type subscriber struct {
 	logger Logger
-	rdb    *rdb.RDB
+	broker base.Broker

 	// channel to communicate back to the long running "subscriber" goroutine.
 	done chan struct{}

 	// cancelations hold cancel functions for all in-progress tasks.
 	cancelations *base.Cancelations
+
+	// time to wait before retrying to connect to redis.
+	retryTimeout time.Duration
 }

-func newSubscriber(l Logger, rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber {
+func newSubscriber(l Logger, b base.Broker, cancelations *base.Cancelations) *subscriber {
 	return &subscriber{
 		logger:       l,
-		rdb:          rdb,
+		broker:       b,
 		done:         make(chan struct{}),
 		cancelations: cancelations,
+		retryTimeout: 5 * time.Second,
 	}
 }
@@ -38,15 +43,29 @@ func (s *subscriber) terminate() {
 }

 func (s *subscriber) start(wg *sync.WaitGroup) {
-	pubsub, err := s.rdb.CancelationPubSub()
-	cancelCh := pubsub.Channel()
-	if err != nil {
-		s.logger.Error("cannot subscribe to cancelation channel: %v", err)
-		return
-	}
 	wg.Add(1)
 	go func() {
 		defer wg.Done()
+		var (
+			pubsub *redis.PubSub
+			err    error
+		)
+		// Try until successfully connect to Redis.
+		for {
+			pubsub, err = s.broker.CancelationPubSub()
+			if err != nil {
+				s.logger.Error("cannot subscribe to cancelation channel: %v", err)
+				select {
+				case <-time.After(s.retryTimeout):
+					continue
+				case <-s.done:
+					s.logger.Info("Subscriber done")
+					return
+				}
+			}
+			break
+		}
+		cancelCh := pubsub.Channel()
 		for {
 			select {
 			case <-s.done:

subscriber_test.go

@@ -11,6 +11,7 @@ import (
 	"github.com/hibiken/asynq/internal/base"
 	"github.com/hibiken/asynq/internal/rdb"
+	"github.com/hibiken/asynq/internal/testbroker"
 )

 func TestSubscriber(t *testing.T) {
@@ -40,13 +41,16 @@ func TestSubscriber(t *testing.T) {
 		subscriber := newSubscriber(testLogger, rdbClient, cancelations)
 		var wg sync.WaitGroup
 		subscriber.start(&wg)
+		defer subscriber.terminate()
+
+		// wait for subscriber to establish connection to pubsub channel
+		time.Sleep(time.Second)

 		if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
-			subscriber.terminate()
 			t.Fatalf("could not publish cancelation message: %v", err)
 		}

-		// allow for redis to publish message
+		// wait for redis to publish message
 		time.Sleep(time.Second)

 		mu.Lock()
@@ -58,7 +62,53 @@ func TestSubscriber(t *testing.T) {
 			}
 		}
 		mu.Unlock()
-		subscriber.terminate()
 	}
 }
+
+func TestSubscriberWithRedisDown(t *testing.T) {
+	defer func() {
+		if r := recover(); r != nil {
+			t.Errorf("panic occurred: %v", r)
+		}
+	}()
+	r := rdb.NewRDB(setup(t))
+	testBroker := testbroker.NewTestBroker(r)
+
+	cancelations := base.NewCancelations()
+	subscriber := newSubscriber(testLogger, testBroker, cancelations)
+	subscriber.retryTimeout = 1 * time.Second // set shorter retry timeout for testing purpose.
+
+	testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
+	var wg sync.WaitGroup
+	subscriber.start(&wg)
+	defer subscriber.terminate()
+
+	time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.
+
+	testBroker.Wakeup() // simulate a situation where redis server is back online.
+
+	time.Sleep(2 * time.Second) // allow subscriber to establish pubsub channel.
+
+	const id = "test"
+	var (
+		mu     sync.Mutex
+		called bool
+	)
+	cancelations.Add(id, func() {
+		mu.Lock()
+		defer mu.Unlock()
+		called = true
+	})
+
+	if err := r.PublishCancelation(id); err != nil {
+		t.Fatalf("could not publish cancelation message: %v", err)
+	}
+
+	time.Sleep(time.Second) // wait for redis to publish message.
+
+	mu.Lock()
+	if !called {
+		t.Errorf("cancel function was not called")
+	}
+	mu.Unlock()
+}

README.md (Asynq CLI)

@@ -1,6 +1,6 @@
-# Asynqmon
+# Asynq CLI

-Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
+Asynq CLI is a command line tool to monitor the tasks managed by `asynq` package.

 ## Table of Contents
@@ -8,7 +8,7 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
 - [Quick Start](#quick-start)
 - [Stats](#stats)
 - [History](#history)
-- [Process Status](#process-status)
+- [Servers](#servers)
 - [List](#list)
 - [Enqueue](#enqueue)
 - [Delete](#delete)
@@ -20,19 +20,19 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
 In order to use the tool, compile it using the following command:

-    go get github.com/hibiken/asynq/tools/asynqmon
+    go get github.com/hibiken/asynq/tools/asynq

-This will create the asynqmon executable under your `$GOPATH/bin` directory.
+This will create the asynq executable under your `$GOPATH/bin` directory.

 ## Quickstart

 The tool has a few commands to inspect the state of tasks and queues.

-Run `asynqmon help` to see all the available commands.
+Run `asynq help` to see all the available commands.

-Asynqmon needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
+Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.

-By default, Asynqmon will try to connect to a redis server running at `localhost:6379`.
+By default, CLI will try to connect to a redis server running at `localhost:6379`.
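For example, assuming a remote redis server (the address, database number, and password below are placeholders), the connection flags registered on the root command can be passed to any subcommand:

    asynq stats --uri=10.0.0.5:6379 --db=2 --password=secret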
 ### Stats
@@ -40,11 +40,11 @@ Stats command gives the overview of the current state of tasks and queues. You c
 Example:

-    watch -n 3 asynqmon stats
+    watch -n 3 asynq stats

-This will run `asynqmon stats` command every 3 seconds.
+This will run `asynq stats` command every 3 seconds.

-![Gif](/docs/assets/asynqmon_stats.gif)
+![Gif](/docs/assets/asynq_stats.gif)

 ### History
@@ -54,19 +54,17 @@ By default, it shows the stats from the last 10 days. Use `--days` to specify th
 Example:

-    asynqmon history --days=30
+    asynq history --days=30

-![Gif](/docs/assets/asynqmon_history.gif)
+![Gif](/docs/assets/asynq_history.gif)

-### Process Status
+### Servers

-PS (ProcessStatus) command shows the list of running worker processes.
+Servers command shows the list of running worker servers pulling tasks from the given redis instance.

 Example:

-    asynqmon ps
+    asynq servers

-![Gif](/docs/assets/asynqmon_ps.gif)
-
 ### List
@@ -74,11 +72,11 @@ List command shows all tasks in the specified state in a table format
 Example:

-    asynqmon ls retry
-    asynqmon ls scheduled
-    asynqmon ls dead
-    asynqmon ls enqueued:default
-    asynqmon ls inprogress
+    asynq ls retry
+    asynq ls scheduled
+    asynq ls dead
+    asynq ls enqueued:default
+    asynq ls inprogress

 ### Enqueue
@@ -88,13 +86,13 @@ Command `enq` takes a task ID and moves the task to **Enqueued** state. You can
 Example:

-    asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
+    asynq enq d:1575732274:bnogo8gt6toe23vhef0g

 Command `enqall` moves all tasks to **Enqueued** state from the specified state.

 Example:

-    asynqmon enqall retry
+    asynq enqall retry

 Running the above command will move all **Retry** tasks to **Enqueued** state.
@@ -106,13 +104,13 @@ Command `del` takes a task ID and deletes the task. You can obtain the task ID b
 Example:

-    asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
+    asynq del r:1575732274:bnogo8gt6toe23vhef0g

 Command `delall` deletes all tasks which are in the specified state.

 Example:

-    asynqmon delall retry
+    asynq delall retry

 Running the above command will delete all **Retry** tasks.
@@ -124,13 +122,13 @@ Command `kill` takes a task ID and kills the task. You can obtain the task ID by
 Example:

-    asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
+    asynq kill r:1575732274:bnogo8gt6toe23vhef0g

 Command `killall` kills all tasks which are in the specified state.

 Example:

-    asynqmon killall retry
+    asynq killall retry

 Running the above command will move all **Retry** tasks to **Dead** state.
@@ -144,15 +142,15 @@ Handler implementation needs to be context aware in order to actually stop proce
 Example:

-    asynqmon cancel bnogo8gt6toe23vhef0g
+    asynq cancel bnogo8gt6toe23vhef0g

 ## Config File

 You can use a config file to set default values for the flags.
 This is useful, for example when you have to connect to a remote redis server.

-By default, `asynqmon` will try to read config file located in
-`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag.
+By default, `asynq` will try to read config file located in
+`$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag.

 Config file example:

cmd/cancel.go

@@ -18,17 +18,17 @@ import (
 var cancelCmd = &cobra.Command{
 	Use:   "cancel [task id]",
 	Short: "Sends a cancelation signal to the goroutine processing the specified task",
-	Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
+	Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
 the specified task.

 The command takes one argument which specifies the task to cancel.
 The task should be in in-progress state.
-Identifier for a task should be obtained by running "asynqmon ls" command.
+Identifier for a task should be obtained by running "asynq ls" command.

 Handler implementation needs to be context aware for cancelation signal to
 actually cancel the processing.

-Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
+Example: asynq cancel bnogo8gt6toe23vhef0g`,
 	Args: cobra.ExactArgs(1),
 	Run:  cancel,
 }
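As a reminder of what "context aware" means on the handler side, here is a minimal sketch (the chunked loop and one-second pacing are illustrative):

package tasks

import (
	"context"
	"time"

	"github.com/hibiken/asynq"
)

// longRunningHandler checks ctx.Done() between chunks of work, so a
// cancelation delivered by `asynq cancel <task id>` (or a server shutdown)
// actually interrupts it instead of being ignored.
func longRunningHandler(ctx context.Context, t *asynq.Task) error {
	for i := 0; i < 10; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // bail out; the task can be retried later
		case <-time.After(time.Second):
			// do the next chunk of work here
		}
	}
	return nil
}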

cmd/del.go

@@ -18,13 +18,13 @@ import (
 var delCmd = &cobra.Command{
 	Use:   "del [task id]",
 	Short: "Deletes a task given an identifier",
-	Long: `Del (asynqmon del) will delete a task given an identifier.
+	Long: `Del (asynq del) will delete a task given an identifier.

 The command takes one argument which specifies the task to delete.
 The task should be in either scheduled, retry or dead state.
-Identifier for a task should be obtained by running "asynqmon ls" command.
+Identifier for a task should be obtained by running "asynq ls" command.

-Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
+Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
 	Args: cobra.ExactArgs(1),
 	Run:  del,
 }

cmd/delall.go

@@ -20,11 +20,11 @@ var delallValidArgs = []string{"scheduled", "retry", "dead"}
 var delallCmd = &cobra.Command{
 	Use:   "delall [state]",
 	Short: "Deletes all tasks in the specified state",
-	Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
+	Long: `Delall (asynq delall) will delete all tasks in the specified state.

 The argument should be one of "scheduled", "retry", or "dead".

-Example: asynqmon delall dead -> Deletes all dead tasks`,
+Example: asynq delall dead -> Deletes all dead tasks`,
 	ValidArgs: delallValidArgs,
 	Args:      cobra.ExactValidArgs(1),
 	Run:       delall,
@@ -60,7 +60,7 @@ func delall(cmd *cobra.Command, args []string) {
 	case "dead":
 		err = r.DeleteAllDeadTasks()
 	default:
-		fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs)
+		fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
 		os.Exit(1)
 	}
 	if err != nil {

cmd/enq.go

@@ -18,16 +18,16 @@ import (
 var enqCmd = &cobra.Command{
 	Use:   "enq [task id]",
 	Short: "Enqueues a task given an identifier",
-	Long: `Enq (asynqmon enq) will enqueue a task given an identifier.
+	Long: `Enq (asynq enq) will enqueue a task given an identifier.

 The command takes one argument which specifies the task to enqueue.
 The task should be in either scheduled, retry or dead state.
-Identifier for a task should be obtained by running "asynqmon ls" command.
+Identifier for a task should be obtained by running "asynq ls" command.

 The task enqueued by this command will be processed as soon as the task
 gets dequeued by a processor.

-Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
+Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
 	Args: cobra.ExactArgs(1),
 	Run:  enq,
 }

cmd/enqall.go

@@ -20,14 +20,14 @@ var enqallValidArgs = []string{"scheduled", "retry", "dead"}
 var enqallCmd = &cobra.Command{
 	Use:   "enqall [state]",
 	Short: "Enqueues all tasks in the specified state",
-	Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state.
+	Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.

 The argument should be one of "scheduled", "retry", or "dead".

 The tasks enqueued by this command will be processed as soon as it
 gets dequeued by a processor.

-Example: asynqmon enqall dead -> Enqueues all dead tasks`,
+Example: asynq enqall dead -> Enqueues all dead tasks`,
 	ValidArgs: enqallValidArgs,
 	Args:      cobra.ExactValidArgs(1),
 	Run:       enqall,
@@ -64,7 +64,7 @@ func enqall(cmd *cobra.Command, args []string) {
 	case "dead":
 		n, err = r.EnqueueAllDeadTasks()
 	default:
-		fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
+		fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
 		os.Exit(1)
 	}
 	if err != nil {

cmd/history.go

@@ -22,12 +22,12 @@ var days int
 var historyCmd = &cobra.Command{
 	Use:   "history",
 	Short: "Shows historical aggregate data",
-	Long: `History (asynqmon history) will show the number of processed and failed tasks
+	Long: `History (asynq history) will show the number of processed and failed tasks
 from the last x days.

 By default, it will show the data from the last 10 days.

-Example: asynqmon history -x=30 -> Shows stats from the last 30 days`,
+Example: asynq history -x=30 -> Shows stats from the last 30 days`,
 	Args: cobra.NoArgs,
 	Run:  history,
 }

cmd/kill.go

@@ -18,13 +18,13 @@ import (
 var killCmd = &cobra.Command{
 	Use:   "kill [task id]",
 	Short: "Kills a task given an identifier",
-	Long: `Kill (asynqmon kill) will put a task in dead state given an identifier.
+	Long: `Kill (asynq kill) will put a task in dead state given an identifier.

 The command takes one argument which specifies the task to kill.
 The task should be in either scheduled or retry state.
-Identifier for a task should be obtained by running "asynqmon ls" command.
+Identifier for a task should be obtained by running "asynq ls" command.

-Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`,
+Example: asynq kill r:1575732274:bnogo8gt6toe23vhef0g`,
 	Args: cobra.ExactArgs(1),
 	Run:  kill,
 }

cmd/killall.go

@@ -20,11 +20,11 @@ var killallValidArgs = []string{"scheduled", "retry"}
 var killallCmd = &cobra.Command{
 	Use:   "killall [state]",
 	Short: "Kills all tasks in the specified state",
-	Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state.
+	Long: `Killall (asynq killall) will update all tasks from the specified state to dead state.

 The argument should be either "scheduled" or "retry".

-Example: asynqmon killall retry -> Update all retry tasks to dead tasks`,
+Example: asynq killall retry -> Update all retry tasks to dead tasks`,
 	ValidArgs: killallValidArgs,
 	Args:      cobra.ExactValidArgs(1),
 	Run:       killall,
@@ -59,7 +59,7 @@ func killall(cmd *cobra.Command, args []string) {
 	case "retry":
 		n, err = r.KillAllRetryTasks()
 	default:
-		fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs)
+		fmt.Printf("error: `asynq killall [state]` only accepts %v as the argument.\n", killallValidArgs)
 		os.Exit(1)
 	}
 	if err != nil {

cmd/ls.go

@@ -25,19 +25,19 @@ var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead"}
 var lsCmd = &cobra.Command{
 	Use:   "ls [state]",
 	Short: "Lists tasks in the specified state",
-	Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format.
+	Long: `Ls (asynq ls) will list all tasks in the specified state in a table format.

 The command takes one argument which specifies the state of tasks.
 The argument value should be one of "enqueued", "inprogress", "scheduled",
 "retry", or "dead".

 Example:
-asynqmon ls dead -> Lists all tasks in dead state
+asynq ls dead -> Lists all tasks in dead state

 Enqueued tasks requires a queue name after ":"
 Example:
-asynqmon ls enqueued:default -> List tasks from default queue
-asynqmon ls enqueued:critical -> List tasks from critical queue
+asynq ls enqueued:default -> List tasks from default queue
+asynq ls enqueued:critical -> List tasks from critical queue
 `,
 	Args: cobra.ExactValidArgs(1),
 	Run:  ls,
@@ -72,7 +72,7 @@ func ls(cmd *cobra.Command, args []string) {
 	switch parts[0] {
 	case "enqueued":
 		if len(parts) != 2 {
-			fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
+			fmt.Printf("error: Missing queue name\n`asynq ls enqueued:[queue name]`\n")
 			os.Exit(1)
 		}
 		listEnqueued(r, parts[1])
@@ -85,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
 	case "dead":
 		listDead(r)
 	default:
-		fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
+		fmt.Printf("error: `asynq ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
 		os.Exit(1)
 	}
 }

cmd/rmq.go

@@ -18,11 +18,11 @@ import (
 var rmqCmd = &cobra.Command{
 	Use:   "rmq [queue name]",
 	Short: "Removes the specified queue",
-	Long: `Rmq (asynqmon rmq) will remove the specified queue.
+	Long: `Rmq (asynq rmq) will remove the specified queue.

 By default, it will remove the queue only if it's empty.
 Use --force option to override this behavior.

-Example: asynqmon rmq low -> Removes "low" queue`,
+Example: asynq rmq low -> Removes "low" queue`,
 	Args: cobra.ExactValidArgs(1),
 	Run:  rmq,
 }
@@ -44,7 +44,7 @@ func rmq(cmd *cobra.Command, args []string) {
 	err := r.RemoveQueue(args[0], rmqForce)
 	if err != nil {
 		if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
-			fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0])
+			fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq rmq --force %s'\n", err, args[0])
 			os.Exit(1)
 		}
 		fmt.Printf("error: %v", err)

cmd/root.go

@@ -26,9 +26,9 @@ var password string
 // rootCmd represents the base command when called without any subcommands
 var rootCmd = &cobra.Command{
-	Use:   "asynqmon",
+	Use:   "asynq",
 	Short: "A monitoring tool for asynq queues",
-	Long:  `Asynqmon is a montoring CLI to inspect tasks and queues managed by asynq.`,
+	Long:  `Asynq is a montoring CLI to inspect tasks and queues managed by asynq.`,
 }

 // Execute adds all child commands to the root command and sets flags appropriately.
@@ -43,7 +43,7 @@ func Execute() {
 func init() {
 	cobra.OnInitialize(initConfig)
-	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynqmon.yaml)")
+	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynq.yaml)")
 	rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
 	rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
 	rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
@@ -65,9 +65,9 @@ func initConfig() {
 		os.Exit(1)
 	}

-	// Search config in home directory with name ".asynqmon" (without extension).
+	// Search config in home directory with name ".asynq" (without extension).
 	viper.AddConfigPath(home)
-	viper.SetConfigName(".asynqmon")
+	viper.SetConfigName(".asynq")
 	}

 	viper.AutomaticEnv() // read in environment variables that match

cmd/servers.go (renamed from ps.go)

@@ -18,64 +18,64 @@ import (
 	"github.com/spf13/viper"
 )

-// psCmd represents the ps command
-var psCmd = &cobra.Command{
-	Use:   "ps",
-	Short: "Shows all background worker processes",
-	Long: `Ps (asynqmon ps) will show all background worker processes
-backed by the specified redis instance.
+// serversCmd represents the servers command
+var serversCmd = &cobra.Command{
+	Use:   "servers",
+	Short: "Shows all running worker servers",
+	Long: `Servers (asynq servers) will show all running worker servers
+pulling tasks from the specified redis instance.

-The command shows the following for each process:
-* Host and PID of the process
+The command shows the following for each server:
+* Host and PID of the process in which the server is running
 * Number of active workers out of worker pool
 * Queue configuration
-* State of the worker process ("running" | "stopped")
-* Time the process was started
+* State of the worker server ("running" | "quiet")
+* Time the server was started

-A "running" process is processing tasks in queues.
-A "stopped" process is no longer processing new tasks.`,
+A "running" server is pulling tasks from queues and processing them.
+A "quiet" server is no longer pulling new tasks from queues`,
 	Args: cobra.NoArgs,
-	Run:  ps,
+	Run:  servers,
 }

 func init() {
-	rootCmd.AddCommand(psCmd)
+	rootCmd.AddCommand(serversCmd)
 }

-func ps(cmd *cobra.Command, args []string) {
+func servers(cmd *cobra.Command, args []string) {
 	r := rdb.NewRDB(redis.NewClient(&redis.Options{
 		Addr:     viper.GetString("uri"),
 		DB:       viper.GetInt("db"),
 		Password: viper.GetString("password"),
 	}))

-	processes, err := r.ListProcesses()
+	servers, err := r.ListServers()
 	if err != nil {
 		fmt.Println(err)
 		os.Exit(1)
 	}
-	if len(processes) == 0 {
-		fmt.Println("No processes")
+	if len(servers) == 0 {
+		fmt.Println("No running servers")
 		return
 	}

 	// sort by hostname and pid
-	sort.Slice(processes, func(i, j int) bool {
-		x, y := processes[i], processes[j]
+	sort.Slice(servers, func(i, j int) bool {
+		x, y := servers[i], servers[j]
 		if x.Host != y.Host {
 			return x.Host < y.Host
 		}
 		return x.PID < y.PID
 	})

-	// print processes
+	// print server info
 	cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
 	printRows := func(w io.Writer, tmpl string) {
-		for _, ps := range processes {
+		for _, info := range servers {
 			fmt.Fprintf(w, tmpl,
-				ps.Host, ps.PID, ps.Status,
-				fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency),
-				formatQueues(ps.Queues), timeAgo(ps.Started))
+				info.Host, info.PID, info.Status,
+				fmt.Sprintf("%d/%d", info.ActiveWorkerCount, info.Concurrency),
+				formatQueues(info.Queues), timeAgo(info.Started))
 		}
 	}
 	printTable(cols, printRows)

cmd/stats.go

@@ -33,7 +33,7 @@ Specifically, the command shows the following:
 To monitor the tasks continuously, it's recommended that you run this
 command in conjunction with the watch command.

-Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`,
+Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
 	Args: cobra.NoArgs,
 	Run:  stats,
 }

cmd/workers.go

@@ -20,7 +20,7 @@ import (
 var workersCmd = &cobra.Command{
 	Use:   "workers",
 	Short: "Shows all running workers information",
-	Long: `Workers (asynqmon workers) will show all running workers information.
+	Long: `Workers (asynq workers) will show all running workers information.

 The command shows the following for each worker:
 * Process in which the worker is running

main.go

@@ -4,7 +4,7 @@
 package main

-import "github.com/hibiken/asynq/tools/asynqmon/cmd"
+import "github.com/hibiken/asynq/tools/asynq/cmd"

 func main() {
 	cmd.Execute()