Compare commits
35 Commits
SHA1:

49160f2536, e33d297d8e, eb8ced6bdd, 789a9fd711, 5924cdac33, 442c9275a0, a0865df33c,
431a96a1f7, 74e5582cfc, bf542a781c, 7c7f8e5f30, 46ab4417dd, f8a94fb839, 42453280f4,
4ec2dc9e47, 45933eb6b0, 4df372b369, c688b8f4f9, eef2f5f3cb, 239ef27a6e, 24da281aa7,
b086e88a47, cf61911a49, aafd8a5b74, 4f11e52558, b14c73809e, 779065c269, f9842ba914,
022dc29701, 40d1889ba0, 7e96e893fe, 84b0c76c8b, 60b887b8e3, 7864bea55c, 47220554ca
6  .gitignore (vendored)

@@ -15,7 +15,7 @@
 /examples
 
 # Ignore command binary
-/tools/asynqmon/asynqmon
+/tools/asynq/asynq
 
-# Ignore asynqmon config file
-.asynqmon.*
+# Ignore asynq config file
+.asynq.*
29  CHANGELOG.md

@@ -7,6 +7,35 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.8.1] - 2020-04-27
+
+### Added
+
+- `ParseRedisURI` helper function is added to create a `RedisConnOpt` from a URI string.
+- `SetDefaultOptions` method is added to `Client`.
+
+## [0.8.0] - 2020-04-19
+
+### Changed
+
+- `Background` type is renamed to `Server`.
+  - To upgrade from the previous version, update `NewBackground` to `NewServer` and pass `Config` by value.
+- CLI is renamed to `asynq`.
+  - To upgrade the CLI to the latest version, run `go get -u github.com/hibiken/asynq/tools/asynq`.
+  - The `ps` command in the CLI is renamed to `servers`.
+- `Concurrency` defaults to the number of CPUs when unset or set to a negative value.
+
+### Added
+
+- `ShutdownTimeout` field is added to `Config` to specify the timeout duration used during graceful shutdown.
+- New `Server` type exposes `Start`, `Stop`, and `Quiet` as well as `Run`.
+
 ## [0.7.1] - 2020-04-05
 
 ### Fixed
 
 - Fixed signal handling for Windows.
 
 ## [0.7.0] - 2020-03-22
 
 ### Changed
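The `Background` → `Server` rename above is the main breaking change in this compare; a minimal before/after sketch of the upgrade path it describes (handler registration elided; the Redis address is illustrative):

```go
package main

import (
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    // v0.7.x (old API): bg := asynq.NewBackground(r, &asynq.Config{Concurrency: 10})
    //                   bg.Run(mux)
    // v0.8.x (new API): Config is passed by value, and Run returns an error.
    r := asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}
    srv := asynq.NewServer(r, asynq.Config{Concurrency: 10})

    mux := asynq.NewServeMux()
    // ...register handlers on mux...

    if err := srv.Run(mux); err != nil {
        log.Fatalf("could not run server: %v", err)
    }
}
```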
114  README.md

@@ -7,12 +7,41 @@
 [](https://gitter.im/go-asynq/community)
 [](https://codecov.io/gh/hibiken/asynq)
 
-Asynq is a simple Go library for queueing tasks and processing them in the background with workers.
-It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
+## Overview
 
-**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users. The public API could change without a major version update before v1.0.0 release.
+Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
 
 [image]
 
+High-level overview of how Asynq works:
+
+- Client puts task on a queue
+- Server pulls task off queues and starts a worker goroutine for each task
+- Tasks are processed concurrently by multiple workers
+
+Task queues are used as a mechanism to distribute work across multiple machines.
+A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
+
+[image]
+
+## Stability and Compatibility
+
+**Important Note**: Current major version is zero (v0.x.x) to accommodate rapid development and fast iteration while getting early feedback from users (Feedback on APIs are appreciated!). The public API could change without a major version update before v1.0.0 release.
+
+**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
+
 ## Features
 
 - Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
 - Scheduling of tasks
 - Durability since tasks are written to Redis
 - [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
 - [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
 - [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
 - Low latency to add a task since writes are fast in Redis
 - De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
 - Allow timeout and deadline per task
 - Flexible handler interface with support for middlewares
 - [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
 - [CLI](#command-line-tool) to inspect and remote-control queues and tasks
 
 ## Quickstart
 
@@ -22,7 +51,7 @@ First, make sure you are running a Redis server locally.
 $ redis-server
 ```
 
-Next, write a package that encapslates task creation and task handling.
+Next, write a package that encapsulates task creation and task handling.
 
 ```go
 package tasks
 
@@ -33,13 +62,15 @@ import (
     "github.com/hibiken/asynq"
 )
 
-// A list of background task types.
+// A list of task types.
 const (
     EmailDelivery   = "email:deliver"
     ImageProcessing = "image:process"
 )
 
+//--------------------------------------------
 // Write function NewXXXTask to create a task.
+//--------------------------------------------
 
 func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
     payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
 
@@ -51,8 +82,13 @@ func NewImageProcessingTask(src, dst string) *asynq.Task {
     return asynq.NewTask(ImageProcessing, payload)
 }
 
+//-------------------------------------------------------------
 // Write function HandleXXXTask to handle the given task.
 // NOTE: It satisfies the asynq.HandlerFunc interface.
+//
+// Handler doesn't need to be a function. You can define a type
+// that satisfies asynq.Handler interface. See example below.
+//-------------------------------------------------------------
 
 func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
     userID, err := t.Payload.GetInt("user_id")
 
@@ -68,7 +104,12 @@ func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
     return nil
 }
 
-func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
+type ImageProcessor struct {
+    // ... fields for struct
+}
+
+// ImageProcessor implements asynq.Handler.
+func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
     src, err := t.Payload.GetString("src")
     if err != nil {
         return err
 
@@ -81,10 +122,14 @@ func HandleImageProcessingTask(ctx context.Context, t *asynq.Task) error {
     // Image processing logic ...
     return nil
 }
 
+func NewImageProcessor() *ImageProcessor {
+    // ... return an instance
+}
 ```
 
-In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to enqueue tasks to the task queue.
-A task will be processed by a background worker as soon as the task gets enqueued.
+In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
+A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
 Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
 
 ```go
@@ -100,10 +145,13 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-    r := &asynq.RedisClientOpt{Addr: redisAddr}
+    r := asynq.RedisClientOpt{Addr: redisAddr}
     c := asynq.NewClient(r)
 
+    // ----------------------------------------------------
+    // Example 1: Enqueue task to be processed immediately.
+    // Use (*Client).Enqueue method.
+    // ----------------------------------------------------
+
     t := tasks.NewEmailDeliveryTask(42, "some:template:id")
     err := c.Enqueue(t)
 
@@ -112,7 +160,10 @@ func main() {
     }
 
+    // ----------------------------------------------------------
     // Example 2: Schedule task to be processed in the future.
     // Use (*Client).EnqueueIn or (*Client).EnqueueAt.
+    // ----------------------------------------------------------
 
     t = tasks.NewEmailDeliveryTask(42, "other:template:id")
     err = c.EnqueueIn(24*time.Hour, t)
 
@@ -121,19 +172,34 @@ func main() {
     }
 
-    // Example 3: Pass options to tune task processing behavior.
-    // Options include MaxRetry, Queue, Timeout, Deadline, etc.
+    // --------------------------------------------------------------------------
+    // Example 3: Set options to tune task processing behavior.
+    // Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
+    // --------------------------------------------------------------------------
+
+    c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
 
     t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
-    err = c.Enqueue(t, asynq.MaxRetry(10), asynq.Queue("critical"), asynq.Timeout(time.Minute))
+    err = c.Enqueue(t)
     if err != nil {
         log.Fatalf("could not enqueue task: %v", err)
     }
 
+    // --------------------------------------------------------------------------
+    // Example 4: Pass options to tune task processing behavior at enqueue time.
+    // Options passed at enqueue time override default ones, if any.
+    // --------------------------------------------------------------------------
+
+    t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
+    err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
+    if err != nil {
+        log.Fatalf("could not enqueue task: %v", err)
+    }
 }
 ```
 
-Next, create a binary to process these tasks in the background.
-To start the background workers, use [`Background`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Background) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
+Next, create a worker server to process these tasks in the background.
+To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
 
 You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
 
@@ -141,6 +207,8 @@ You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?
 package main
 
 import (
+    "log"
+
     "github.com/hibiken/asynq"
     "your/app/package/tasks"
 )
 
@@ -148,9 +216,9 @@ import (
 const redisAddr = "127.0.0.1:6379"
 
 func main() {
-    r := &asynq.RedisClientOpt{Addr: redisAddr}
+    r := asynq.RedisClientOpt{Addr: redisAddr}
 
-    bg := asynq.NewBackground(r, &asynq.Config{
+    srv := asynq.NewServer(r, asynq.Config{
         // Specify how many concurrent workers to use
         Concurrency: 10,
         // Optionally specify multiple queues with different priority.
 
@@ -165,10 +233,12 @@ func main() {
     // mux maps a type to a handler
     mux := asynq.NewServeMux()
     mux.HandleFunc(tasks.EmailDelivery, tasks.HandleEmailDeliveryTask)
-    mux.HandleFunc(tasks.ImageProcessing, tasks.HandleImageProcessingTask)
+    mux.Handle(tasks.ImageProcessing, tasks.NewImageProcessor())
     // ...register other handlers...
 
-    bg.Run(mux)
+    if err := srv.Run(mux); err != nil {
+        log.Fatalf("could not run server: %v", err)
+    }
 }
 ```
 
@@ -184,7 +254,7 @@ Here's an example of running the `stats` command.
 
 [image]
 
-For details on how to use the tool, refer to the tool's [README](/tools/asynqmon/README.md).
+For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
 
 ## Installation
 
@@ -197,7 +267,7 @@ go get -u github.com/hibiken/asynq
 To install the CLI tool, run the following command:
 
 ```sh
-go get -u github.com/hibiken/asynq/tools/asynqmon
+go get -u github.com/hibiken/asynq/tools/asynq
 ```
 
 ## Requirements
 
@@ -216,7 +286,7 @@ Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
 
 - [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
 - [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
-- [Cobra](https://github.com/spf13/cobra) : Asynqmon CLI is built with cobra
+- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
 
 ## License
76  asynq.go

@@ -7,6 +7,9 @@ package asynq
 
 import (
     "crypto/tls"
     "fmt"
+    "net/url"
+    "strconv"
+    "strings"
 
     "github.com/go-redis/redis/v7"
 )
 
@@ -94,6 +97,79 @@ type RedisFailoverClientOpt struct {
     TLSConfig *tls.Config
 }
 
+// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
+// It returns a non-nil error if uri cannot be parsed.
+//
+// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
+// Supported formats are:
+//     redis://[:password@]host[:port][/dbnumber]
+//     redis-socket://[:password@]path[?db=dbnumber]
+//     redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
+func ParseRedisURI(uri string) (RedisConnOpt, error) {
+    u, err := url.Parse(uri)
+    if err != nil {
+        return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
+    }
+    switch u.Scheme {
+    case "redis":
+        return parseRedisURI(u)
+    case "redis-socket":
+        return parseRedisSocketURI(u)
+    case "redis-sentinel":
+        return parseRedisSentinelURI(u)
+    default:
+        return nil, fmt.Errorf("asynq: unsupported uri scheme: %q", u.Scheme)
+    }
+}
+
+func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
+    var db int
+    var err error
+    if len(u.Path) > 0 {
+        xs := strings.Split(strings.Trim(u.Path, "/"), "/")
+        db, err = strconv.Atoi(xs[0])
+        if err != nil {
+            return nil, fmt.Errorf("asynq: could not parse redis uri: database number should be the first segment of the path")
+        }
+    }
+    var password string
+    if v, ok := u.User.Password(); ok {
+        password = v
+    }
+    return RedisClientOpt{Addr: u.Host, DB: db, Password: password}, nil
+}
+
+func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
+    const errPrefix = "asynq: could not parse redis socket uri"
+    if len(u.Path) == 0 {
+        return nil, fmt.Errorf("%s: path does not exist", errPrefix)
+    }
+    q := u.Query()
+    var db int
+    var err error
+    if n := q.Get("db"); n != "" {
+        db, err = strconv.Atoi(n)
+        if err != nil {
+            return nil, fmt.Errorf("%s: query param `db` should be a number", errPrefix)
+        }
+    }
+    var password string
+    if v, ok := u.User.Password(); ok {
+        password = v
+    }
+    return RedisClientOpt{Network: "unix", Addr: u.Path, DB: db, Password: password}, nil
+}
+
+func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
+    addrs := strings.Split(u.Host, ",")
+    master := u.Query().Get("master")
+    var password string
+    if v, ok := u.User.Password(); ok {
+        password = v
+    }
+    return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, Password: password}, nil
+}
+
 // createRedisClient returns a redis client given a redis connection configuration.
 //
 // Passing an unexpected type as a RedisConnOpt argument will cause panic.
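Since all three parse helpers above return a value satisfying `RedisConnOpt`, the result of `ParseRedisURI` can feed either constructor directly. A small usage sketch (not part of the diff; reading the URI from a `REDIS_URL` environment variable is an illustrative assumption):

```go
package main

import (
    "log"
    "os"

    "github.com/hibiken/asynq"
)

func main() {
    // Any of the three supported schemes (redis:, redis-socket:,
    // redis-sentinel:) works here.
    opt, err := asynq.ParseRedisURI(os.Getenv("REDIS_URL")) // e.g. "redis://:secret@localhost:6379/2"
    if err != nil {
        log.Fatalf("invalid REDIS_URL: %v", err)
    }
    // The returned RedisConnOpt is accepted by both NewClient and NewServer.
    client := asynq.NewClient(opt)
    srv := asynq.NewServer(opt, asynq.Config{Concurrency: 10})
    _ = client // enqueue tasks with client...
    _ = srv    // ...and process them with srv.Run(handler)
}
```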
103  asynq_test.go

@@ -44,3 +44,106 @@ var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
     })
     return out
 })
+
+func TestParseRedisURI(t *testing.T) {
+    tests := []struct {
+        uri  string
+        want RedisConnOpt
+    }{
+        {
+            "redis://localhost:6379",
+            RedisClientOpt{Addr: "localhost:6379"},
+        },
+        {
+            "redis://localhost:6379/3",
+            RedisClientOpt{Addr: "localhost:6379", DB: 3},
+        },
+        {
+            "redis://:mypassword@localhost:6379",
+            RedisClientOpt{Addr: "localhost:6379", Password: "mypassword"},
+        },
+        {
+            "redis://:mypassword@127.0.0.1:6379/11",
+            RedisClientOpt{Addr: "127.0.0.1:6379", Password: "mypassword", DB: 11},
+        },
+        {
+            "redis-socket:///var/run/redis/redis.sock",
+            RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock"},
+        },
+        {
+            "redis-socket://:mypassword@/var/run/redis/redis.sock",
+            RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword"},
+        },
+        {
+            "redis-socket:///var/run/redis/redis.sock?db=7",
+            RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", DB: 7},
+        },
+        {
+            "redis-socket://:mypassword@/var/run/redis/redis.sock?db=12",
+            RedisClientOpt{Network: "unix", Addr: "/var/run/redis/redis.sock", Password: "mypassword", DB: 12},
+        },
+        {
+            "redis-sentinel://localhost:5000,localhost:5001,localhost:5002?master=mymaster",
+            RedisFailoverClientOpt{
+                MasterName:    "mymaster",
+                SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
+            },
+        },
+        {
+            "redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
+            RedisFailoverClientOpt{
+                MasterName:    "mymaster",
+                SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
+                Password:      "mypassword",
+            },
+        },
+    }
+
+    for _, tc := range tests {
+        got, err := ParseRedisURI(tc.uri)
+        if err != nil {
+            t.Errorf("ParseRedisURI(%q) returned an error: %v", tc.uri, err)
+            continue
+        }
+
+        if diff := cmp.Diff(tc.want, got); diff != "" {
+            t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
+        }
+    }
+}
+
+func TestParseRedisURIErrors(t *testing.T) {
+    tests := []struct {
+        desc string
+        uri  string
+    }{
+        {
+            "unsupported scheme",
+            "rdb://localhost:6379",
+        },
+        {
+            "missing scheme",
+            "localhost:6379",
+        },
+        {
+            "multiple db numbers",
+            "redis://localhost:6379/1,2,3",
+        },
+        {
+            "missing path for socket connection",
+            "redis-socket://?db=one",
+        },
+        {
+            "non integer for db numbers for socket",
+            "redis-socket:///some/path/to/redis?db=one",
+        },
+    }
+
+    for _, tc := range tests {
+        _, err := ParseRedisURI(tc.uri)
+        if err == nil {
+            t.Errorf("%s: ParseRedisURI(%q) succeeded for malformed input, want error",
+                tc.desc, tc.uri)
+        }
+    }
+}
@@ -1,128 +0,0 @@
-// Copyright 2020 Kentaro Hibino. All rights reserved.
-// Use of this source code is governed by a MIT license
-// that can be found in the LICENSE file.
-
-package asynq
-
-import (
-    "context"
-    "testing"
-    "time"
-
-    "github.com/google/go-cmp/cmp"
-    "go.uber.org/goleak"
-)
-
-func TestBackground(t *testing.T) {
-    // https://github.com/go-redis/redis/issues/1029
-    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
-    defer goleak.VerifyNoLeaks(t, ignoreOpt)
-
-    r := &RedisClientOpt{
-        Addr: "localhost:6379",
-        DB:   15,
-    }
-    client := NewClient(r)
-    bg := NewBackground(r, &Config{
-        Concurrency: 10,
-    })
-
-    // no-op handler
-    h := func(ctx context.Context, task *Task) error {
-        return nil
-    }
-
-    bg.start(HandlerFunc(h))
-
-    err := client.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
-    if err != nil {
-        t.Errorf("could not enqueue a task: %v", err)
-    }
-
-    err = client.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
-    if err != nil {
-        t.Errorf("could not enqueue a task: %v", err)
-    }
-
-    bg.stop()
-}
-
-func TestGCD(t *testing.T) {
-    tests := []struct {
-        input []int
-        want  int
-    }{
-        {[]int{6, 2, 12}, 2},
-        {[]int{3, 3, 3}, 3},
-        {[]int{6, 3, 1}, 1},
-        {[]int{1}, 1},
-        {[]int{1, 0, 2}, 1},
-        {[]int{8, 0, 4}, 4},
-        {[]int{9, 12, 18, 30}, 3},
-    }
-
-    for _, tc := range tests {
-        got := gcd(tc.input...)
-        if got != tc.want {
-            t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
-        }
-    }
-}
-
-func TestNormalizeQueueCfg(t *testing.T) {
-    tests := []struct {
-        input map[string]int
-        want  map[string]int
-    }{
-        {
-            input: map[string]int{
-                "high":    100,
-                "default": 20,
-                "low":     5,
-            },
-            want: map[string]int{
-                "high":    20,
-                "default": 4,
-                "low":     1,
-            },
-        },
-        {
-            input: map[string]int{
-                "default": 10,
-            },
-            want: map[string]int{
-                "default": 1,
-            },
-        },
-        {
-            input: map[string]int{
-                "critical": 5,
-                "default":  1,
-            },
-            want: map[string]int{
-                "critical": 5,
-                "default":  1,
-            },
-        },
-        {
-            input: map[string]int{
-                "critical": 6,
-                "default":  3,
-                "low":      0,
-            },
-            want: map[string]int{
-                "critical": 2,
-                "default":  1,
-                "low":      0,
-            },
-        },
-    }
-
-    for _, tc := range tests {
-        got := normalizeQueueCfg(tc.input)
-        if diff := cmp.Diff(tc.want, got); diff != "" {
-            t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
-                tc.input, got, tc.want, diff)
-        }
-    }
-}
@@ -24,7 +24,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
         DB:   redisDB,
     }
     client := NewClient(redis)
-    bg := NewBackground(redis, &Config{
+    srv := NewServer(redis, Config{
         Concurrency: 10,
         RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
             return time.Second
@@ -46,11 +46,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
         }
         b.StartTimer() // end setup
 
-        bg.start(HandlerFunc(handler))
+        srv.Start(HandlerFunc(handler))
         wg.Wait()
 
         b.StopTimer() // begin teardown
-        bg.stop()
+        srv.Stop()
         b.StartTimer() // end teardown
     }
 }
@@ -67,7 +67,7 @@ func BenchmarkEndToEnd(b *testing.B) {
         DB:   redisDB,
     }
     client := NewClient(redis)
-    bg := NewBackground(redis, &Config{
+    srv := NewServer(redis, Config{
         Concurrency: 10,
         RetryDelayFunc: func(n int, err error, t *Task) time.Duration {
             return time.Second
@@ -99,11 +99,11 @@ func BenchmarkEndToEnd(b *testing.B) {
         }
         b.StartTimer() // end setup
 
-        bg.start(HandlerFunc(handler))
+        srv.Start(HandlerFunc(handler))
        wg.Wait()
 
         b.StopTimer() // begin teardown
-        bg.stop()
+        srv.Stop()
         b.StartTimer() // end teardown
     }
 }
@@ -124,7 +124,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
         DB:   redisDB,
     }
     client := NewClient(redis)
-    bg := NewBackground(redis, &Config{
+    srv := NewServer(redis, Config{
         Concurrency: 10,
         Queues: map[string]int{
             "high":    6,
@@ -160,11 +160,11 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
         }
         b.StartTimer() // end setup
 
-        bg.start(HandlerFunc(handler))
+        srv.Start(HandlerFunc(handler))
         wg.Wait()
 
         b.StopTimer() // begin teardown
-        bg.stop()
+        srv.Stop()
         b.StartTimer() // end teardown
     }
 }
74  client.go

@@ -9,6 +9,7 @@ import (
     "fmt"
     "sort"
     "strings"
+    "sync"
     "time"
 
     "github.com/hibiken/asynq/internal/base"
 
@@ -23,13 +24,18 @@ import (
 //
 // Clients are safe for concurrent use by multiple goroutines.
 type Client struct {
+    mu   sync.Mutex
+    opts map[string][]Option
     rdb  *rdb.RDB
 }
 
 // NewClient returns a new Client given a redis connection option.
 func NewClient(r RedisConnOpt) *Client {
     rdb := rdb.NewRDB(createRedisClient(r))
-    return &Client{rdb}
+    return &Client{
+        opts: make(map[string][]Option),
+        rdb:  rdb,
+    }
 }
 
 // Option specifies the task processing behavior.
 
@@ -159,10 +165,19 @@ func serializePayload(payload map[string]interface{}) string {
     return b.String()
 }
 
-const (
-    // Max retry count by default
-    defaultMaxRetry = 25
-)
+// Default max retry count used if nothing is specified.
+const defaultMaxRetry = 25
+
+// SetDefaultOptions sets options to be used for a given task type.
+// The argument opts specifies the behavior of task processing.
+// If there are conflicting Option values the last one overrides others.
+//
+// Default options can be overridden by options passed at enqueue time.
+func (c *Client) SetDefaultOptions(taskType string, opts ...Option) {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    c.opts[taskType] = opts
+}
 
 // EnqueueAt schedules task to be enqueued at the specified time.
 //
@@ -171,6 +186,35 @@ const (
 // The argument opts specifies the behavior of task processing.
 // If there are conflicting Option values the last one overrides others.
 func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
+    return c.enqueueAt(t, task, opts...)
+}
+
+// Enqueue enqueues task to be processed immediately.
+//
+// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
+//
+// The argument opts specifies the behavior of task processing.
+// If there are conflicting Option values the last one overrides others.
+func (c *Client) Enqueue(task *Task, opts ...Option) error {
+    return c.enqueueAt(time.Now(), task, opts...)
+}
+
+// EnqueueIn schedules task to be enqueued after the specified delay.
+//
+// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
+//
+// The argument opts specifies the behavior of task processing.
+// If there are conflicting Option values the last one overrides others.
+func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
+    return c.enqueueAt(time.Now().Add(d), task, opts...)
+}
+
+func (c *Client) enqueueAt(t time.Time, task *Task, opts ...Option) error {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    if defaults, ok := c.opts[task.Type]; ok {
+        opts = append(defaults, opts...)
+    }
     opt := composeOptions(opts...)
     msg := &base.TaskMessage{
         ID: xid.New(),
 
@@ -194,26 +238,6 @@ func (c *Client) EnqueueAt(t time.Time, task *Task, opts ...Option) error {
     return err
 }
 
-// Enqueue enqueues task to be processed immediately.
-//
-// Enqueue returns nil if the task is enqueued successfully, otherwise returns a non-nil error.
-//
-// The argument opts specifies the behavior of task processing.
-// If there are conflicting Option values the last one overrides others.
-func (c *Client) Enqueue(task *Task, opts ...Option) error {
-    return c.EnqueueAt(time.Now(), task, opts...)
-}
-
-// EnqueueIn schedules task to be enqueued after the specified delay.
-//
-// EnqueueIn returns nil if the task is scheduled successfully, otherwise returns a non-nil error.
-//
-// The argument opts specifies the behavior of task processing.
-// If there are conflicting Option values the last one overrides others.
-func (c *Client) EnqueueIn(d time.Duration, task *Task, opts ...Option) error {
-    return c.EnqueueAt(time.Now().Add(d), task, opts...)
-}
-
 func (c *Client) enqueue(msg *base.TaskMessage, uniqueTTL time.Duration) error {
     if uniqueTTL > 0 {
         return c.rdb.EnqueueUnique(msg, uniqueTTL)
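The default/override behavior is just the `append(defaults, opts...)` in `enqueueAt` above combined with `composeOptions`' last-one-wins rule. A short sketch of the resulting semantics (queue names and retry counts are illustrative):

```go
c := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})

// Defaults registered once per task type...
c.SetDefaultOptions("image:process", asynq.Queue("low"), asynq.MaxRetry(10))

// ...are prepended to enqueue-time options, so per-call values win on
// conflict: this task goes to "critical" and keeps MaxRetry(10).
t := asynq.NewTask("image:process", nil)
if err := c.Enqueue(t, asynq.Queue("critical")); err != nil {
    log.Fatal(err)
}
```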
@@ -15,6 +15,11 @@ import (
     "github.com/hibiken/asynq/internal/base"
 )
 
+var (
+    noTimeout  = time.Duration(0).String()
+    noDeadline = time.Time{}.Format(time.RFC3339)
+)
+
 func TestClientEnqueueAt(t *testing.T) {
     r := setup(t)
     client := NewClient(RedisClientOpt{
 
@@ -27,9 +32,6 @@ func TestClientEnqueueAt(t *testing.T) {
     var (
         now          = time.Now()
         oneHourLater = now.Add(time.Hour)
-
-        noTimeout  = time.Duration(0).String()
-        noDeadline = time.Time{}.Format(time.RFC3339)
     )
 
     tests := []struct {
 
@@ -113,11 +115,6 @@ func TestClientEnqueue(t *testing.T) {
 
     task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
 
-    var (
-        noTimeout  = time.Duration(0).String()
-        noDeadline = time.Time{}.Format(time.RFC3339)
-    )
-
     tests := []struct {
         desc string
         task *Task
 
@@ -287,11 +284,6 @@ func TestClientEnqueueIn(t *testing.T) {
 
     task := NewTask("send_email", map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"})
 
-    var (
-        noTimeout  = time.Duration(0).String()
-        noDeadline = time.Time{}.Format(time.RFC3339)
-    )
-
     tests := []struct {
         desc string
         task *Task
 
@@ -364,6 +356,86 @@ func TestClientEnqueueIn(t *testing.T) {
     }
 }
 
+func TestClientDefaultOptions(t *testing.T) {
+    r := setup(t)
+
+    tests := []struct {
+        desc        string
+        defaultOpts []Option // options set at the client level.
+        opts        []Option // options used at enqueue time.
+        task        *Task
+        queue       string // queue that the message should go into.
+        want        *base.TaskMessage
+    }{
+        {
+            desc:        "With queue routing option",
+            defaultOpts: []Option{Queue("feed")},
+            opts:        []Option{},
+            task:        NewTask("feed:import", nil),
+            queue:       "feed",
+            want: &base.TaskMessage{
+                Type:     "feed:import",
+                Payload:  nil,
+                Retry:    defaultMaxRetry,
+                Queue:    "feed",
+                Timeout:  noTimeout,
+                Deadline: noDeadline,
+            },
+        },
+        {
+            desc:        "With multiple options",
+            defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
+            opts:        []Option{},
+            task:        NewTask("feed:import", nil),
+            queue:       "feed",
+            want: &base.TaskMessage{
+                Type:     "feed:import",
+                Payload:  nil,
+                Retry:    5,
+                Queue:    "feed",
+                Timeout:  noTimeout,
+                Deadline: noDeadline,
+            },
+        },
+        {
+            desc:        "With overriding options at enqueue time",
+            defaultOpts: []Option{Queue("feed"), MaxRetry(5)},
+            opts:        []Option{Queue("critical")},
+            task:        NewTask("feed:import", nil),
+            queue:       "critical",
+            want: &base.TaskMessage{
+                Type:     "feed:import",
+                Payload:  nil,
+                Retry:    5,
+                Queue:    "critical",
+                Timeout:  noTimeout,
+                Deadline: noDeadline,
+            },
+        },
+    }
+
+    for _, tc := range tests {
+        h.FlushDB(t, r)
+        c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
+        c.SetDefaultOptions(tc.task.Type, tc.defaultOpts...)
+        err := c.Enqueue(tc.task, tc.opts...)
+        if err != nil {
+            t.Fatal(err)
+        }
+        enqueued := h.GetEnqueuedMessages(t, r, tc.queue)
+        if len(enqueued) != 1 {
+            t.Errorf("%s;\nexpected queue %q to have one message; got %d messages in the queue.",
+                tc.desc, tc.queue, len(enqueued))
+            continue
+        }
+        got := enqueued[0]
+        if diff := cmp.Diff(tc.want, got, h.IgnoreIDOpt); diff != "" {
+            t.Errorf("%s;\nmismatch found in enqueued task message; (-want,+got)\n%s",
+                tc.desc, diff)
+        }
+    }
+}
+
 func TestUniqueKey(t *testing.T) {
     tests := []struct {
         desc string
12  doc.go

@@ -14,7 +14,7 @@ specify the options using one of RedisConnOpt types.
         DB: 3,
     }
 
-The Client is used to register a task to be processed at the specified time.
+The Client is used to enqueue a task to be processed at the specified time.
 
 Task is created with two parameters: its type and payload.
 
@@ -27,18 +27,18 @@ Task is created with two parameters: its type and payload.
     // Enqueue the task to be processed immediately.
     err := client.Enqueue(t)
 
-    // Schedule the task to be processed in one minute.
+    // Schedule the task to be processed after one minute.
     err = client.EnqueueIn(time.Minute, t)
 
-The Background is used to run the background task processing with a given
+The Server is used to run the background task processing with a given
 handler.
 
-    bg := asynq.NewBackground(redis, &asynq.Config{
+    srv := asynq.NewServer(redis, asynq.Config{
         Concurrency: 10,
     })
 
-    bg.Run(handler)
+    srv.Run(handler)
 
-Handler is an interface with one method ProcessTask which
+Handler is an interface type with a method which
 takes a task and returns an error. Handler should return nil if
 the processing is successful, otherwise return a non-nil error.
 If handler panics or returns a non-nil error, the task will be retried in the future.
BIN  3 image assets changed (1.5 MiB, 582 KiB, 1.5 MiB; sizes unchanged)

BIN  docs/assets/overview.png (new file, 63 KiB)
95  example_test.go (new file)

@@ -0,0 +1,95 @@
+// Copyright 2020 Kentaro Hibino. All rights reserved.
+// Use of this source code is governed by a MIT license
+// that can be found in the LICENSE file.
+
+package asynq_test
+
+import (
+    "fmt"
+    "log"
+    "os"
+    "os/signal"
+
+    "github.com/hibiken/asynq"
+    "golang.org/x/sys/unix"
+)
+
+func ExampleServer_Run() {
+    srv := asynq.NewServer(
+        asynq.RedisClientOpt{Addr: ":6379"},
+        asynq.Config{Concurrency: 20},
+    )
+
+    h := asynq.NewServeMux()
+    // ... Register handlers
+
+    // Run blocks and waits for os signal to terminate the program.
+    if err := srv.Run(h); err != nil {
+        log.Fatal(err)
+    }
+}
+
+func ExampleServer_Stop() {
+    srv := asynq.NewServer(
+        asynq.RedisClientOpt{Addr: ":6379"},
+        asynq.Config{Concurrency: 20},
+    )
+
+    h := asynq.NewServeMux()
+    // ... Register handlers
+
+    if err := srv.Start(h); err != nil {
+        log.Fatal(err)
+    }
+
+    sigs := make(chan os.Signal, 1)
+    signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
+    <-sigs // wait for termination signal
+
+    srv.Stop()
+}
+
+func ExampleServer_Quiet() {
+    srv := asynq.NewServer(
+        asynq.RedisClientOpt{Addr: ":6379"},
+        asynq.Config{Concurrency: 20},
+    )
+
+    h := asynq.NewServeMux()
+    // ... Register handlers
+
+    if err := srv.Start(h); err != nil {
+        log.Fatal(err)
+    }
+
+    sigs := make(chan os.Signal, 1)
+    signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
+    // Handle SIGTERM, SIGINT to exit the program.
+    // Handle SIGTSTP to stop processing new tasks.
+    for {
+        s := <-sigs
+        if s == unix.SIGTSTP {
+            srv.Quiet() // stop processing new tasks
+            continue
+        }
+        break
+    }
+
+    srv.Stop()
+}
+
+func ExampleParseRedisURI() {
+    rconn, err := asynq.ParseRedisURI("redis://localhost:6379/10")
+    if err != nil {
+        log.Fatal(err)
+    }
+    r, ok := rconn.(asynq.RedisClientOpt)
+    if !ok {
+        log.Fatal("unexpected type")
+    }
+    fmt.Println(r.Addr)
+    fmt.Println(r.DB)
+    // Output:
+    // localhost:6379
+    // 10
+}
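Related to the `Stop` examples above: the `ShutdownTimeout` field added to `Config` in 0.8.0 bounds how long a stopping server waits for in-flight tasks. A minimal sketch (the timeout value is illustrative; see the changelog entry for what the field does):

```go
package main

import (
    "time"

    "github.com/hibiken/asynq"
)

func main() {
    srv := asynq.NewServer(
        asynq.RedisClientOpt{Addr: ":6379"},
        asynq.Config{
            Concurrency: 20,
            // Bound how long Stop/Run waits for in-flight tasks on shutdown.
            ShutdownTimeout: 8 * time.Second,
        },
    )
    _ = srv
}
```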
2  go.mod

@@ -8,7 +8,7 @@ require (
     github.com/rs/xid v1.2.1
     github.com/spf13/cast v1.3.1
     go.uber.org/goleak v0.10.0
-    golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e // indirect
+    golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e
     golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
     gopkg.in/yaml.v2 v2.2.7 // indirect
 )
19  heartbeat.go

@@ -9,16 +9,15 @@ import (
     "time"
 
     "github.com/hibiken/asynq/internal/base"
-    "github.com/hibiken/asynq/internal/rdb"
 )
 
 // heartbeater is responsible for writing process info to redis periodically to
 // indicate that the background worker process is up.
 type heartbeater struct {
     logger Logger
-    rdb    *rdb.RDB
+    broker base.Broker
 
-    ps *base.ProcessState
+    ss *base.ServerState
 
     // channel to communicate back to the long running "heartbeater" goroutine.
     done chan struct{}
 
@@ -27,11 +26,11 @@ type heartbeater struct {
     interval time.Duration
 }
 
-func newHeartbeater(l Logger, rdb *rdb.RDB, ps *base.ProcessState, interval time.Duration) *heartbeater {
+func newHeartbeater(l Logger, b base.Broker, ss *base.ServerState, interval time.Duration) *heartbeater {
     return &heartbeater{
         logger:   l,
-        rdb:      rdb,
-        ps:       ps,
+        broker:   b,
+        ss:       ss,
         done:     make(chan struct{}),
         interval: interval,
     }
 
@@ -44,8 +43,8 @@ func (h *heartbeater) terminate() {
 }
 
 func (h *heartbeater) start(wg *sync.WaitGroup) {
-    h.ps.SetStarted(time.Now())
-    h.ps.SetStatus(base.StatusRunning)
+    h.ss.SetStarted(time.Now())
+    h.ss.SetStatus(base.StatusRunning)
     wg.Add(1)
     go func() {
         defer wg.Done()
 
@@ -53,7 +52,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
         for {
             select {
             case <-h.done:
-                h.rdb.ClearProcessState(h.ps)
+                h.broker.ClearServerState(h.ss)
                 h.logger.Info("Heartbeater done")
                 return
             case <-time.After(h.interval):
 
@@ -66,7 +65,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
 func (h *heartbeater) beat() {
     // Note: Set TTL to be long enough so that it won't expire before we write again
     // and short enough to expire quickly once the process is shut down or killed.
-    err := h.rdb.WriteProcessState(h.ps, h.interval*2)
+    err := h.broker.WriteServerState(h.ss, h.interval*2)
     if err != nil {
         h.logger.Error("could not write heartbeat data: %v", err)
     }
@@ -14,6 +14,7 @@ import (
     h "github.com/hibiken/asynq/internal/asynqtest"
     "github.com/hibiken/asynq/internal/base"
     "github.com/hibiken/asynq/internal/rdb"
+    "github.com/hibiken/asynq/internal/testbroker"
 )
 
 func TestHeartbeater(t *testing.T) {
 
@@ -31,17 +32,18 @@ func TestHeartbeater(t *testing.T) {
     }
 
     timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
-    ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
+    ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
+    ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
     for _, tc := range tests {
         h.FlushDB(t, r)
 
-        state := base.NewProcessState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
+        state := base.NewServerState(tc.host, tc.pid, tc.concurrency, tc.queues, false)
         hb := newHeartbeater(testLogger, rdbClient, state, tc.interval)
 
         var wg sync.WaitGroup
         hb.start(&wg)
 
-        want := &base.ProcessInfo{
+        want := &base.ServerInfo{
             Host:   tc.host,
             PID:    tc.pid,
             Queues: tc.queues,
 
@@ -53,21 +55,21 @@ func TestHeartbeater(t *testing.T) {
         // allow for heartbeater to write to redis
         time.Sleep(tc.interval * 2)
 
-        ps, err := rdbClient.ListProcesses()
+        ss, err := rdbClient.ListServers()
         if err != nil {
-            t.Errorf("could not read process status from redis: %v", err)
+            t.Errorf("could not read server info from redis: %v", err)
             hb.terminate()
             continue
         }
 
-        if len(ps) != 1 {
-            t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
+        if len(ss) != 1 {
+            t.Errorf("(*RDB).ListServers returned %d server info, want 1", len(ss))
             hb.terminate()
             continue
         }
 
-        if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
-            t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
+        if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
+            t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
             hb.terminate()
             continue
         }
 
@@ -79,21 +81,21 @@ func TestHeartbeater(t *testing.T) {
         time.Sleep(tc.interval * 2)
 
         want.Status = "stopped"
-        ps, err = rdbClient.ListProcesses()
+        ss, err = rdbClient.ListServers()
         if err != nil {
             t.Errorf("could not read process status from redis: %v", err)
             hb.terminate()
             continue
         }
 
-        if len(ps) != 1 {
-            t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ps))
+        if len(ss) != 1 {
+            t.Errorf("(*RDB).ListServers returned %d server info, want 1", len(ss))
             hb.terminate()
             continue
         }
 
-        if diff := cmp.Diff(want, ps[0], timeCmpOpt, ignoreOpt); diff != "" {
-            t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ps[0], want, diff)
+        if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
+            t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
             hb.terminate()
             continue
         }
 
@@ -101,3 +103,26 @@ func TestHeartbeater(t *testing.T) {
         hb.terminate()
     }
 }
+
+func TestHeartbeaterWithRedisDown(t *testing.T) {
+    // Make sure that heartbeater goroutine doesn't panic
+    // if it cannot connect to redis.
+    defer func() {
+        if r := recover(); r != nil {
+            t.Errorf("panic occurred: %v", r)
+        }
+    }()
+    r := rdb.NewRDB(setup(t))
+    testBroker := testbroker.NewTestBroker(r)
+    ss := base.NewServerState("localhost", 1234, 10, map[string]int{"default": 1}, false)
+    hb := newHeartbeater(testLogger, testBroker, ss, time.Second)
+
+    testBroker.Sleep()
+    var wg sync.WaitGroup
+    hb.start(&wg)
+
+    // wait for heartbeater to try writing data to redis
+    time.Sleep(2 * time.Second)
+
+    hb.terminate()
+}
@@ -41,9 +41,9 @@ var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) [
     return out
 })
 
-// SortProcessInfoOpt is a cmp.Option to sort base.ProcessInfo for comparing slice of process info.
-var SortProcessInfoOpt = cmp.Transformer("SortProcessInfo", func(in []*base.ProcessInfo) []*base.ProcessInfo {
-    out := append([]*base.ProcessInfo(nil), in...) // Copy input to avoid mutating it
+// SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slice of server info.
+var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
+    out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
     sort.Slice(out, func(i, j int) bool {
         if out[i].Host != out[j].Host {
             return out[i].Host < out[j].Host
@@ -12,6 +12,7 @@ import (
     "sync"
     "time"
 
+    "github.com/go-redis/redis/v7"
     "github.com/rs/xid"
 )
 
@@ -20,10 +21,10 @@ const DefaultQueueName = "default"
 
 // Redis keys
 const (
-    AllProcesses    = "asynq:ps"         // ZSET
-    psPrefix        = "asynq:ps:"        // STRING - asynq:ps:<host>:<pid>
+    AllServers      = "asynq:servers"    // ZSET
+    serversPrefix   = "asynq:servers:"   // STRING - asynq:servers:<host>:<pid>:<serverid>
     AllWorkers      = "asynq:workers"    // ZSET
-    workersPrefix   = "asynq:workers:"   // HASH - asynq:workers:<host>:<pid>
+    workersPrefix   = "asynq:workers:"   // HASH - asynq:workers:<host>:<pid>:<serverid>
     processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
     failurePrefix   = "asynq:failure:"   // STRING - asynq:failure:<yyyy-mm-dd>
     QueuePrefix     = "asynq:queues:"    // LIST - asynq:queues:<qname>
 
@@ -51,14 +52,14 @@ func FailureKey(t time.Time) string {
     return failurePrefix + t.UTC().Format("2006-01-02")
 }
 
-// ProcessInfoKey returns a redis key for process info.
-func ProcessInfoKey(hostname string, pid int) string {
-    return fmt.Sprintf("%s%s:%d", psPrefix, hostname, pid)
+// ServerInfoKey returns a redis key for server info.
+func ServerInfoKey(hostname string, pid int, sid string) string {
+    return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid)
 }
 
-// WorkersKey returns a redis key for the workers given hostname and pid.
-func WorkersKey(hostname string, pid int) string {
-    return fmt.Sprintf("%s%s:%d", workersPrefix, hostname, pid)
+// WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
+func WorkersKey(hostname string, pid int, sid string) string {
+    return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid)
 }
 
 // TaskMessage is the internal representation of a task with additional metadata fields.
 
@@ -104,42 +105,47 @@ type TaskMessage struct {
     UniqueKey string
 }
 
-// ProcessState holds process level information.
+// ServerState holds process level information.
 //
-// ProcessStates are safe for concurrent use by multiple goroutines.
-type ProcessState struct {
+// ServerStates are safe for concurrent use by multiple goroutines.
+type ServerState struct {
     mu             sync.Mutex // guards all data fields
+    id             xid.ID
     concurrency    int
     queues         map[string]int
     strictPriority bool
     pid            int
     host           string
-    status         PStatus
+    status         ServerStatus
     started        time.Time
     workers        map[string]*workerStats
 }
 
-// PStatus represents status of a process.
-type PStatus int
+// ServerStatus represents status of a server.
+type ServerStatus int
 
 const (
-    // StatusIdle indicates process is in idle state.
-    StatusIdle PStatus = iota
+    // StatusIdle indicates the server is in idle state.
+    StatusIdle ServerStatus = iota
 
-    // StatusRunning indicates process is up and processing tasks.
+    // StatusRunning indicates the server is up and processing tasks.
     StatusRunning
 
-    // StatusStopped indicates process is up but not processing new tasks.
+    // StatusQuiet indicates the server is up but not processing new tasks.
+    StatusQuiet
+
+    // StatusStopped indicates the server has been stopped.
     StatusStopped
 )
 
 var statuses = []string{
     "idle",
     "running",
+    "quiet",
     "stopped",
 }
 
-func (s PStatus) String() string {
+func (s ServerStatus) String() string {
     if StatusIdle <= s && s <= StatusStopped {
         return statuses[s]
     }
 
@@ -151,11 +157,12 @@ type workerStats struct {
     started time.Time
 }
 
-// NewProcessState returns a new instance of ProcessState.
-func NewProcessState(host string, pid, concurrency int, queues map[string]int, strict bool) *ProcessState {
-    return &ProcessState{
+// NewServerState returns a new instance of ServerState.
+func NewServerState(host string, pid, concurrency int, queues map[string]int, strict bool) *ServerState {
+    return &ServerState{
        host:           host,
        pid:            pid,
+       id:             xid.New(),
        concurrency:    concurrency,
        queues:         cloneQueueConfig(queues),
        strictPriority: strict,
 
@@ -164,59 +171,67 @@ func NewServerState(host string, pid, concurrency int, queues map[string]int, s
    }
 }
 
-// SetStatus updates the state of process.
-func (ps *ProcessState) SetStatus(status PStatus) {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
-    ps.status = status
+// SetStatus updates the status of server.
+func (ss *ServerState) SetStatus(status ServerStatus) {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    ss.status = status
+}
+
+// Status returns the status of server.
+func (ss *ServerState) Status() ServerStatus {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    return ss.status
 }
 
 // SetStarted records when the process started processing.
-func (ps *ProcessState) SetStarted(t time.Time) {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
-    ps.started = t
+func (ss *ServerState) SetStarted(t time.Time) {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    ss.started = t
 }
 
 // AddWorkerStats records when a worker started and which task it's processing.
-func (ps *ProcessState) AddWorkerStats(msg *TaskMessage, started time.Time) {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
-    ps.workers[msg.ID.String()] = &workerStats{msg, started}
+func (ss *ServerState) AddWorkerStats(msg *TaskMessage, started time.Time) {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    ss.workers[msg.ID.String()] = &workerStats{msg, started}
 }
 
 // DeleteWorkerStats removes a worker's entry from the process state.
-func (ps *ProcessState) DeleteWorkerStats(msg *TaskMessage) {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
-    delete(ps.workers, msg.ID.String())
+func (ss *ServerState) DeleteWorkerStats(msg *TaskMessage) {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    delete(ss.workers, msg.ID.String())
 }
 
-// Get returns current state of process as a ProcessInfo.
-func (ps *ProcessState) Get() *ProcessInfo {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
-    return &ProcessInfo{
-        Host:              ps.host,
-        PID:               ps.pid,
-        Concurrency:       ps.concurrency,
-        Queues:            cloneQueueConfig(ps.queues),
-        StrictPriority:    ps.strictPriority,
-        Status:            ps.status.String(),
-        Started:           ps.started,
-        ActiveWorkerCount: len(ps.workers),
+// GetInfo returns current state of server as a ServerInfo.
+func (ss *ServerState) GetInfo() *ServerInfo {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
+    return &ServerInfo{
+        Host:              ss.host,
+        PID:               ss.pid,
+        ServerID:          ss.id.String(),
+        Concurrency:       ss.concurrency,
+        Queues:            cloneQueueConfig(ss.queues),
+        StrictPriority:    ss.strictPriority,
+        Status:            ss.status.String(),
+        Started:           ss.started,
+        ActiveWorkerCount: len(ss.workers),
     }
 }
 
 // GetWorkers returns a list of currently running workers' info.
-func (ps *ProcessState) GetWorkers() []*WorkerInfo {
-    ps.mu.Lock()
-    defer ps.mu.Unlock()
+func (ss *ServerState) GetWorkers() []*WorkerInfo {
+    ss.mu.Lock()
+    defer ss.mu.Unlock()
     var res []*WorkerInfo
-    for _, w := range ps.workers {
+    for _, w := range ss.workers {
        res = append(res, &WorkerInfo{
-            Host:  ps.host,
-            PID:   ps.pid,
+            Host:  ss.host,
+            PID:   ss.pid,
             ID:    w.msg.ID,
             Type:  w.msg.Type,
             Queue: w.msg.Queue,
 
@@ -243,10 +258,11 @@ func clonePayload(payload map[string]interface{}) map[string]interface{} {
     return res
 }
 
-// ProcessInfo holds information about a running background worker process.
-type ProcessInfo struct {
+// ServerInfo holds information about a running server.
+type ServerInfo struct {
     Host           string
     PID            int
+    ServerID       string
     Concurrency    int
     Queues         map[string]int
     StrictPriority bool
 
@@ -313,3 +329,25 @@ func (c *Cancelations) GetAll() []context.CancelFunc {
     }
     return res
 }
+
+// Broker is a message broker that supports operations to manage task queues.
+//
+// See rdb.RDB as a reference implementation.
+type Broker interface {
+    Enqueue(msg *TaskMessage) error
+    EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
+    Dequeue(qnames ...string) (*TaskMessage, error)
+    Done(msg *TaskMessage) error
+    Requeue(msg *TaskMessage) error
+    Schedule(msg *TaskMessage, processAt time.Time) error
+    ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
+    Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
+    Kill(msg *TaskMessage, errMsg string) error
+    RequeueAll() (int64, error)
+    CheckAndEnqueue(qnames ...string) error
+    WriteServerState(ss *ServerState, ttl time.Duration) error
+    ClearServerState(ss *ServerState) error
+    CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
+    PublishCancelation(id string) error
+    Close() error
+}
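This `Broker` interface is what lets tests such as `TestHeartbeaterWithRedisDown` above substitute a failure-injecting broker for the real `rdb.RDB`. A minimal sketch of the same idea, assuming only the interface as defined here — the package and type names below are hypothetical, not the actual internal `testbroker` implementation:

```go
package fakebroker // hypothetical package, for illustration only

import (
    "errors"
    "sync"
    "time"

    "github.com/hibiken/asynq/internal/base"
)

var errRedisDown = errors.New("fakebroker: redis is down")

// Fake wraps a real base.Broker and can simulate a Redis outage.
type Fake struct {
    mu   sync.Mutex
    down bool
    real base.Broker
}

func New(b base.Broker) *Fake { return &Fake{real: b} }

// Sleep makes every subsequent call fail, as if Redis were unreachable.
func (f *Fake) Sleep() {
    f.mu.Lock()
    defer f.mu.Unlock()
    f.down = true
}

// WriteServerState delegates to the wrapped broker unless the fake is "down".
func (f *Fake) WriteServerState(ss *base.ServerState, ttl time.Duration) error {
    f.mu.Lock()
    defer f.mu.Unlock()
    if f.down {
        return errRedisDown
    }
    return f.real.WriteServerState(ss, ttl)
}

// ...the remaining base.Broker methods would be wrapped the same way
// to satisfy the full interface...
```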
@@ -12,6 +12,7 @@ import (
	"time"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/rs/xid"
)

@@ -67,20 +68,21 @@ func TestFailureKey(t *testing.T) {
	}
}

func TestProcessInfoKey(t *testing.T) {
func TestServerInfoKey(t *testing.T) {
	tests := []struct {
		hostname string
		pid int
		sid string
		want string
	}{
		{"localhost", 9876, "asynq:ps:localhost:9876"},
		{"127.0.0.1", 1234, "asynq:ps:127.0.0.1:1234"},
		{"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"},
		{"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"},
	}

	for _, tc := range tests {
		got := ProcessInfoKey(tc.hostname, tc.pid)
		got := ServerInfoKey(tc.hostname, tc.pid, tc.sid)
		if got != tc.want {
			t.Errorf("ProcessInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
			t.Errorf("ServerInfoKey(%q, %d) = %q, want %q", tc.hostname, tc.pid, got, tc.want)
		}
	}
}
@@ -89,24 +91,25 @@ func TestWorkersKey(t *testing.T) {
	tests := []struct {
		hostname string
		pid int
		sid string
		want string
	}{
		{"localhost", 9876, "asynq:workers:localhost:9876"},
		{"127.0.0.1", 1234, "asynq:workers:127.0.0.1:1234"},
		{"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"},
		{"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"},
	}

	for _, tc := range tests {
		got := WorkersKey(tc.hostname, tc.pid)
		got := WorkersKey(tc.hostname, tc.pid, tc.sid)
		if got != tc.want {
			t.Errorf("WorkersKey(%q, %d) = %q, want = %q", tc.hostname, tc.pid, got, tc.want)
		}
	}
}

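These tests pin down the new Redis key layout: keys now include a server ID so that multiple server instances running under the same host and PID namespace cannot collide. A hypothetical reconstruction of the helpers, consistent with the expected values above (the real definitions live in internal/base and are not shown in this diff):

```go
package base // hypothetical reconstruction, inferred from the test expectations

import "fmt"

// ServerInfoKey returns the Redis key under which a server's info is stored.
func ServerInfoKey(hostname string, pid int, sid string) string {
	return fmt.Sprintf("asynq:servers:%s:%d:%s", hostname, pid, sid)
}

// WorkersKey returns the Redis key for the hash of a server's active workers.
func WorkersKey(hostname string, pid int, sid string) string {
	return fmt.Sprintf("asynq:workers:%s:%d:%s", hostname, pid, sid)
}
```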
// Test for process state being accessed by multiple goroutines.
// Test for server state being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestProcessStateConcurrentAccess(t *testing.T) {
	ps := NewProcessState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
func TestServerStateConcurrentAccess(t *testing.T) {
	ss := NewServerState("127.0.0.1", 1234, 10, map[string]int{"default": 1}, false)
	var wg sync.WaitGroup
	started := time.Now()
	msgs := []*TaskMessage{
@@ -119,18 +122,21 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		ps.SetStarted(started)
		ps.SetStatus(StatusRunning)
		ss.SetStarted(started)
		ss.SetStatus(StatusRunning)
		if status := ss.Status(); status != StatusRunning {
			t.Errorf("(*ServerState).Status() = %v, want %v", status, StatusRunning)
		}
	}()

	// Simulate processor starting worker goroutines.
	for _, msg := range msgs {
		wg.Add(1)
		ps.AddWorkerStats(msg, time.Now())
		ss.AddWorkerStats(msg, time.Now())
		go func(msg *TaskMessage) {
			defer wg.Done()
			time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
			ps.DeleteWorkerStats(msg)
			ss.DeleteWorkerStats(msg)
		}(msg)
	}

@@ -139,15 +145,15 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
	go func() {
		wg.Done()
		for i := 0; i < 5; i++ {
			ps.Get()
			ps.GetWorkers()
			ss.GetInfo()
			ss.GetWorkers()
			time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		}
	}()

	wg.Wait()

	want := &ProcessInfo{
	want := &ServerInfo{
		Host: "127.0.0.1",
		PID: 1234,
		Concurrency: 10,
@@ -158,9 +164,9 @@ func TestProcessStateConcurrentAccess(t *testing.T) {
		ActiveWorkerCount: 0,
	}

	got := ps.Get()
	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("(*ProcessState).Get() = %+v, want %+v; (-want,+got)\n%s",
	got := ss.GetInfo()
	if diff := cmp.Diff(want, got, cmpopts.IgnoreFields(ServerInfo{}, "ServerID")); diff != "" {
		t.Errorf("(*ServerState).GetInfo() = %+v, want %+v; (-want,+got)\n%s",
			got, want, diff)
	}
}
@@ -759,23 +759,23 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
}

// Note: Script also removes stale keys.
var listProcessesCmd = redis.NewScript(`
var listServersCmd = redis.NewScript(`
local res = {}
local now = tonumber(ARGV[1])
local keys = redis.call("ZRANGEBYSCORE", KEYS[1], now, "+inf")
for _, key in ipairs(keys) do
	local ps = redis.call("GET", key)
	if ps then
		table.insert(res, ps)
	local s = redis.call("GET", key)
	if s then
		table.insert(res, s)
	end
end
redis.call("ZREMRANGEBYSCORE", KEYS[1], "-inf", now-1)
return res`)

// ListProcesses returns the list of process statuses.
func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
	res, err := listProcessesCmd.Run(r.client,
		[]string{base.AllProcesses}, time.Now().UTC().Unix()).Result()
// ListServers returns the list of server info.
func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
	res, err := listServersCmd.Run(r.client,
		[]string{base.AllServers}, time.Now().UTC().Unix()).Result()
	if err != nil {
		return nil, err
	}
@@ -783,16 +783,16 @@ func (r *RDB) ListProcesses() ([]*base.ProcessInfo, error) {
	if err != nil {
		return nil, err
	}
	var processes []*base.ProcessInfo
	var servers []*base.ServerInfo
	for _, s := range data {
		var ps base.ProcessInfo
		err := json.Unmarshal([]byte(s), &ps)
		var info base.ServerInfo
		err := json.Unmarshal([]byte(s), &info)
		if err != nil {
			continue // skip bad data
		}
		processes = append(processes, &ps)
		servers = append(servers, &info)
	}
	return processes, nil
	return servers, nil
}

// Note: Script also removes stale keys.
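The Lua script works because the `asynq:servers` sorted set scores each server key by its heartbeat expiration time: `ZRANGEBYSCORE` from `now` to `+inf` returns only servers whose heartbeat is still fresh, and `ZREMRANGEBYSCORE` prunes entries whose heartbeats have lapsed. A hedged usage sketch of the new method (the wrapper function is illustrative; `internal/rdb` is only importable from within the asynq module):

```go
package example

import (
	"fmt"

	"github.com/hibiken/asynq/internal/rdb"
)

// printLiveServers lists servers whose heartbeat has not yet expired.
func printLiveServers(r *rdb.RDB) error {
	servers, err := r.ListServers()
	if err != nil {
		return err
	}
	for _, s := range servers {
		fmt.Printf("%s:%d (id=%s) status=%s workers=%d\n",
			s.Host, s.PID, s.ServerID, s.Status, s.ActiveWorkerCount)
	}
	return nil
}
```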
@@ -2051,14 +2051,14 @@ func TestRemoveQueueError(t *testing.T) {
	}
}

func TestListProcesses(t *testing.T) {
func TestListServers(t *testing.T) {
	r := setup(t)

	started1 := time.Now().Add(-time.Hour)
	ps1 := base.NewProcessState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false)
	ps1.SetStarted(started1)
	ps1.SetStatus(base.StatusRunning)
	info1 := &base.ProcessInfo{
	ss1 := base.NewServerState("do.droplet1", 1234, 10, map[string]int{"default": 1}, false)
	ss1.SetStarted(started1)
	ss1.SetStatus(base.StatusRunning)
	info1 := &base.ServerInfo{
		Concurrency: 10,
		Queues: map[string]int{"default": 1},
		Host: "do.droplet1",
@@ -2069,11 +2069,11 @@ func TestListProcesses(t *testing.T) {
	}

	started2 := time.Now().Add(-2 * time.Hour)
	ps2 := base.NewProcessState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false)
	ps2.SetStarted(started2)
	ps2.SetStatus(base.StatusStopped)
	ps2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
	info2 := &base.ProcessInfo{
	ss2 := base.NewServerState("do.droplet2", 9876, 20, map[string]int{"email": 1}, false)
	ss2.SetStarted(started2)
	ss2.SetStatus(base.StatusStopped)
	ss2.AddWorkerStats(h.NewTaskMessage("send_email", nil), time.Now())
	info2 := &base.ServerInfo{
		Concurrency: 20,
		Queues: map[string]int{"email": 1},
		Host: "do.droplet2",
@@ -2084,41 +2084,42 @@ func TestListProcesses(t *testing.T) {
	}

	tests := []struct {
		processes []*base.ProcessState
		want []*base.ProcessInfo
		serverStates []*base.ServerState
		want []*base.ServerInfo
	}{
		{
			processes: []*base.ProcessState{},
			want: []*base.ProcessInfo{},
			serverStates: []*base.ServerState{},
			want: []*base.ServerInfo{},
		},
		{
			processes: []*base.ProcessState{ps1},
			want: []*base.ProcessInfo{info1},
			serverStates: []*base.ServerState{ss1},
			want: []*base.ServerInfo{info1},
		},
		{
			processes: []*base.ProcessState{ps1, ps2},
			want: []*base.ProcessInfo{info1, info2},
			serverStates: []*base.ServerState{ss1, ss2},
			want: []*base.ServerInfo{info1, info2},
		},
	}

	ignoreOpt := cmpopts.IgnoreUnexported(base.ProcessInfo{})
	ignoreOpt := cmpopts.IgnoreUnexported(base.ServerInfo{})
	ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")

	for _, tc := range tests {
		h.FlushDB(t, r.client)

		for _, ps := range tc.processes {
			if err := r.WriteProcessState(ps, 5*time.Second); err != nil {
		for _, ss := range tc.serverStates {
			if err := r.WriteServerState(ss, 5*time.Second); err != nil {
				t.Fatal(err)
			}
		}

		got, err := r.ListProcesses()
		got, err := r.ListServers()
		if err != nil {
			t.Errorf("r.ListProcesses returned an error: %v", err)
			t.Errorf("r.ListServers returned an error: %v", err)
		}
		if diff := cmp.Diff(tc.want, got, h.SortProcessInfoOpt, ignoreOpt); diff != "" {
			t.Errorf("r.ListProcesses returned %v, want %v; (-want,+got)\n%s",
				got, tc.processes, diff)
		if diff := cmp.Diff(tc.want, got, h.SortServerInfoOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
			t.Errorf("r.ListServers returned %v, want %v; (-want,+got)\n%s",
				got, tc.serverStates, diff)
		}
	}
}
@@ -2164,15 +2165,15 @@ func TestListWorkers(t *testing.T) {
	for _, tc := range tests {
		h.FlushDB(t, r.client)

		ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false)
		ss := base.NewServerState(host, pid, 10, map[string]int{"default": 1}, false)

		for _, w := range tc.workers {
			ps.AddWorkerStats(w.msg, w.started)
			ss.AddWorkerStats(w.msg, w.started)
		}

		err := r.WriteProcessState(ps, time.Minute)
		err := r.WriteServerState(ss, time.Minute)
		if err != nil {
			t.Errorf("could not write process state to redis: %v", err)
			t.Errorf("could not write server state to redis: %v", err)
			continue
		}

@@ -463,9 +463,9 @@ func (r *RDB) forwardSingle(src, dst string) error {
		[]string{src, dst}, now).Err()
}

// KEYS[1] -> asynq:ps:<host:pid>
// KEYS[2] -> asynq:ps
// KEYS[3] -> asynq:workers<host:pid>
// KEYS[1] -> asynq:servers:<host:pid:sid>
// KEYS[2] -> asynq:servers
// KEYS[3] -> asynq:workers<host:pid:sid>
// KEYS[4] -> asynq:workers
// ARGV[1] -> expiration time
// ARGV[2] -> TTL in seconds
@@ -484,16 +484,16 @@ redis.call("EXPIRE", KEYS[3], ARGV[2])
redis.call("ZADD", KEYS[4], ARGV[1], KEYS[3])
return redis.status_reply("OK")`)

// WriteProcessState writes process state data to redis with expiration set to the value ttl.
func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error {
	info := ps.Get()
// WriteServerState writes server state data to redis with expiration set to the value ttl.
func (r *RDB) WriteServerState(ss *base.ServerState, ttl time.Duration) error {
	info := ss.GetInfo()
	bytes, err := json.Marshal(info)
	if err != nil {
		return err
	}
	var args []interface{} // args to the lua script
	exp := time.Now().Add(ttl).UTC()
	workers := ps.GetWorkers()
	workers := ss.GetWorkers()
	args = append(args, float64(exp.Unix()), ttl.Seconds(), bytes)
	for _, w := range workers {
		bytes, err := json.Marshal(w)
@@ -502,17 +502,17 @@ func (r *RDB) WriteProcessState(ps *base.ProcessState, ttl time.Duration) error
		}
		args = append(args, w.ID.String(), bytes)
	}
	pkey := base.ProcessInfoKey(info.Host, info.PID)
	wkey := base.WorkersKey(info.Host, info.PID)
	skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
	wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
	return writeProcessInfoCmd.Run(r.client,
		[]string{pkey, base.AllProcesses, wkey, base.AllWorkers},
		[]string{skey, base.AllServers, wkey, base.AllWorkers},
		args...).Err()
}

// KEYS[1] -> asynq:ps
// KEYS[2] -> asynq:ps:<host:pid>
// KEYS[1] -> asynq:servers
// KEYS[2] -> asynq:servers:<host:pid:sid>
// KEYS[3] -> asynq:workers
// KEYS[4] -> asynq:workers<host:pid>
// KEYS[4] -> asynq:workers<host:pid:sid>
var clearProcessInfoCmd = redis.NewScript(`
redis.call("ZREM", KEYS[1], KEYS[2])
redis.call("DEL", KEYS[2])
@@ -520,14 +520,14 @@ redis.call("ZREM", KEYS[3], KEYS[4])
redis.call("DEL", KEYS[4])
return redis.status_reply("OK")`)

// ClearProcessState deletes process state data from redis.
func (r *RDB) ClearProcessState(ps *base.ProcessState) error {
	info := ps.Get()
	host, pid := info.Host, info.PID
	pkey := base.ProcessInfoKey(host, pid)
	wkey := base.WorkersKey(host, pid)
// ClearServerState deletes server state data from redis.
func (r *RDB) ClearServerState(ss *base.ServerState) error {
	info := ss.GetInfo()
	host, pid, id := info.Host, info.PID, info.ServerID
	skey := base.ServerInfoKey(host, pid, id)
	wkey := base.WorkersKey(host, pid, id)
	return clearProcessInfoCmd.Run(r.client,
		[]string{base.AllProcesses, pkey, base.AllWorkers, wkey}).Err()
		[]string{base.AllServers, skey, base.AllWorkers, wkey}).Err()
}

// CancelationPubSub returns a pubsub for cancelation messages.
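WriteServerState and ClearServerState are driven by the heartbeater, which this change constructs with a 5-second interval. A rough sketch of that lifecycle, under the assumption that the loop simply rewrites the state with a TTL comfortably larger than the interval so a crashed server's keys expire on their own; function name and the exact TTL policy are assumptions, not shown in the diff:

```go
// Sketch of a heartbeat loop built on the Broker operations above.
func heartbeatLoop(b base.Broker, ss *base.ServerState, interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			b.ClearServerState(ss) // remove server/worker keys on clean shutdown
			return
		case <-ticker.C:
			// TTL longer than the interval: keys survive between beats,
			// but expire if the server dies without cleaning up.
			if err := b.WriteServerState(ss, interval*2); err != nil {
				// log and retry on the next tick
			}
		}
	}
}
```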
@@ -862,60 +862,61 @@ func TestCheckAndEnqueue(t *testing.T) {
	}
}

func TestWriteProcessState(t *testing.T) {
func TestWriteServerState(t *testing.T) {
	r := setup(t)
	host, pid := "localhost", 98765
	queues := map[string]int{"default": 2, "email": 5, "low": 1}

	started := time.Now()
	ps := base.NewProcessState(host, pid, 10, queues, false)
	ps.SetStarted(started)
	ps.SetStatus(base.StatusRunning)
	ss := base.NewServerState("localhost", 4242, 10, queues, false)
	ss.SetStarted(started)
	ss.SetStatus(base.StatusRunning)
	ttl := 5 * time.Second

	h.FlushDB(t, r.client)

	err := r.WriteProcessState(ps, ttl)
	err := r.WriteServerState(ss, ttl)
	if err != nil {
		t.Errorf("r.WriteProcessState returned an error: %v", err)
		t.Errorf("r.WriteServerState returned an error: %v", err)
	}

	// Check ProcessInfo was written correctly
	pkey := base.ProcessInfoKey(host, pid)
	data := r.client.Get(pkey).Val()
	var got base.ProcessInfo
	// Check ServerInfo was written correctly
	info := ss.GetInfo()
	skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
	data := r.client.Get(skey).Val()
	var got base.ServerInfo
	err = json.Unmarshal([]byte(data), &got)
	if err != nil {
		t.Fatalf("could not decode json: %v", err)
	}
	want := base.ProcessInfo{
		Host: "localhost",
		PID: 98765,
		Concurrency: 10,
	want := base.ServerInfo{
		Host: info.Host,
		PID: info.PID,
		Concurrency: info.Concurrency,
		Queues: map[string]int{"default": 2, "email": 5, "low": 1},
		StrictPriority: false,
		Status: "running",
		Started: started,
		ActiveWorkerCount: 0,
	}
	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s",
	ignoreOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
	if diff := cmp.Diff(want, got, ignoreOpt); diff != "" {
		t.Errorf("persisted ServerInfo was %v, want %v; (-want,+got)\n%s",
			got, want, diff)
	}
	// Check ProcessInfo TTL was set correctly
	gotTTL := r.client.TTL(pkey).Val()
	// Check ServerInfo TTL was set correctly
	gotTTL := r.client.TTL(skey).Val()
	if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
		t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl)
		t.Errorf("TTL of %q was %v, want %v", skey, gotTTL, ttl)
	}
	// Check ProcessInfo key was added to the set correctly
	gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val()
	wantProcesses := []string{pkey}
	// Check ServerInfo key was added to the set correctly
	gotProcesses := r.client.ZRange(base.AllServers, 0, -1).Val()
	wantProcesses := []string{skey}
	if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
		t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses)
		t.Errorf("%q contained %v, want %v", base.AllServers, gotProcesses, wantProcesses)
	}

	// Check WorkersInfo was written correctly
	wkey := base.WorkersKey(host, pid)
	wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
	workerExist := r.client.Exists(wkey).Val()
	if workerExist != 0 {
		t.Errorf("%q key exists", wkey)
@@ -928,9 +929,8 @@ func TestWriteProcessState(t *testing.T) {
	}
}

func TestWriteProcessStateWithWorkers(t *testing.T) {
func TestWriteServerStateWithWorkers(t *testing.T) {
	r := setup(t)
	host, pid := "localhost", 98765
	queues := map[string]int{"default": 2, "email": 5, "low": 1}
	concurrency := 10

@@ -939,31 +939,33 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
	w2Started := time.Now().Add(-time.Second)
	msg1 := h.NewTaskMessage("send_email", map[string]interface{}{"user_id": "123"})
	msg2 := h.NewTaskMessage("gen_thumbnail", map[string]interface{}{"path": "some/path/to/imgfile"})
	ps := base.NewProcessState(host, pid, concurrency, queues, false)
	ps.SetStarted(started)
	ps.SetStatus(base.StatusRunning)
	ps.AddWorkerStats(msg1, w1Started)
	ps.AddWorkerStats(msg2, w2Started)
	ss := base.NewServerState("127.0.01", 4242, concurrency, queues, false)
	ss.SetStarted(started)
	ss.SetStatus(base.StatusRunning)
	ss.AddWorkerStats(msg1, w1Started)
	ss.AddWorkerStats(msg2, w2Started)
	ttl := 5 * time.Second

	h.FlushDB(t, r.client)

	err := r.WriteProcessState(ps, ttl)
	err := r.WriteServerState(ss, ttl)
	if err != nil {
		t.Errorf("r.WriteProcessState returned an error: %v", err)
		t.Errorf("r.WriteServerState returned an error: %v", err)
	}

	// Check ProcessInfo was written correctly
	pkey := base.ProcessInfoKey(host, pid)
	data := r.client.Get(pkey).Val()
	var got base.ProcessInfo
	// Check ServerInfo was written correctly
	info := ss.GetInfo()
	skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
	data := r.client.Get(skey).Val()
	var got base.ServerInfo
	err = json.Unmarshal([]byte(data), &got)
	if err != nil {
		t.Fatalf("could not decode json: %v", err)
	}
	want := base.ProcessInfo{
		Host: host,
		PID: pid,
	want := base.ServerInfo{
		Host: info.Host,
		PID: info.PID,
		ServerID: info.ServerID,
		Concurrency: concurrency,
		Queues: queues,
		StrictPriority: false,
@@ -972,23 +974,23 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
		ActiveWorkerCount: 2,
	}
	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("persisted ProcessInfo was %v, want %v; (-want,+got)\n%s",
		t.Errorf("persisted ServerInfo was %v, want %v; (-want,+got)\n%s",
			got, want, diff)
	}
	// Check ProcessInfo TTL was set correctly
	gotTTL := r.client.TTL(pkey).Val()
	// Check ServerInfo TTL was set correctly
	gotTTL := r.client.TTL(skey).Val()
	if !cmp.Equal(ttl.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
		t.Errorf("TTL of %q was %v, want %v", pkey, gotTTL, ttl)
		t.Errorf("TTL of %q was %v, want %v", skey, gotTTL, ttl)
	}
	// Check ProcessInfo key was added to the set correctly
	gotProcesses := r.client.ZRange(base.AllProcesses, 0, -1).Val()
	wantProcesses := []string{pkey}
	// Check ServerInfo key was added to the set correctly
	gotProcesses := r.client.ZRange(base.AllServers, 0, -1).Val()
	wantProcesses := []string{skey}
	if diff := cmp.Diff(wantProcesses, gotProcesses); diff != "" {
		t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcesses, wantProcesses)
		t.Errorf("%q contained %v, want %v", base.AllServers, gotProcesses, wantProcesses)
	}

	// Check WorkersInfo was written correctly
	wkey := base.WorkersKey(host, pid)
	wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
	wdata := r.client.HGetAll(wkey).Val()
	if len(wdata) != 2 {
		t.Fatalf("HGETALL %q returned a hash of size %d, want 2", wkey, len(wdata))
@@ -1003,8 +1005,8 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
	}
	wantWorkers := map[string]*base.WorkerInfo{
		msg1.ID.String(): {
			Host: host,
			PID: pid,
			Host: info.Host,
			PID: info.PID,
			ID: msg1.ID,
			Type: msg1.Type,
			Queue: msg1.Queue,
@@ -1012,8 +1014,8 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
			Started: w1Started,
		},
		msg2.ID.String(): {
			Host: host,
			PID: pid,
			Host: info.Host,
			PID: info.PID,
			ID: msg2.ID,
			Type: msg2.Type,
			Queue: msg2.Queue,
@@ -1039,27 +1041,28 @@ func TestWriteProcessStateWithWorkers(t *testing.T) {
	}
}

func TestClearProcessState(t *testing.T) {
func TestClearServerState(t *testing.T) {
	r := setup(t)
	host, pid := "127.0.0.1", 1234
	ss := base.NewServerState("127.0.01", 4242, 10, map[string]int{"default": 1}, false)
	info := ss.GetInfo()

	h.FlushDB(t, r.client)

	pkey := base.ProcessInfoKey(host, pid)
	wkey := base.WorkersKey(host, pid)
	otherPKey := base.ProcessInfoKey("otherhost", 12345)
	otherWKey := base.WorkersKey("otherhost", 12345)
	skey := base.ServerInfoKey(info.Host, info.PID, info.ServerID)
	wkey := base.WorkersKey(info.Host, info.PID, info.ServerID)
	otherSKey := base.ServerInfoKey("otherhost", 12345, "server98")
	otherWKey := base.WorkersKey("otherhost", 12345, "server98")
	// Populate the keys.
	if err := r.client.Set(pkey, "process-info", 0).Err(); err != nil {
	if err := r.client.Set(skey, "process-info", 0).Err(); err != nil {
		t.Fatal(err)
	}
	if err := r.client.HSet(wkey, "worker-key", "worker-info").Err(); err != nil {
		t.Fatal(err)
	}
	if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: pkey}).Err(); err != nil {
	if err := r.client.ZAdd(base.AllServers, &redis.Z{Member: skey}).Err(); err != nil {
		t.Fatal(err)
	}
	if err := r.client.ZAdd(base.AllProcesses, &redis.Z{Member: otherPKey}).Err(); err != nil {
	if err := r.client.ZAdd(base.AllServers, &redis.Z{Member: otherSKey}).Err(); err != nil {
		t.Fatal(err)
	}
	if err := r.client.ZAdd(base.AllWorkers, &redis.Z{Member: wkey}).Err(); err != nil {
@@ -1069,24 +1072,22 @@ func TestClearProcessState(t *testing.T) {
		t.Fatal(err)
	}

	ps := base.NewProcessState(host, pid, 10, map[string]int{"default": 1}, false)

	err := r.ClearProcessState(ps)
	err := r.ClearServerState(ss)
	if err != nil {
		t.Fatalf("(*RDB).ClearProcessState failed: %v", err)
		t.Fatalf("(*RDB).ClearServerState failed: %v", err)
	}

	// Check all keys are cleared
	if r.client.Exists(pkey).Val() != 0 {
		t.Errorf("Redis key %q exists", pkey)
	if r.client.Exists(skey).Val() != 0 {
		t.Errorf("Redis key %q exists", skey)
	}
	if r.client.Exists(wkey).Val() != 0 {
		t.Errorf("Redis key %q exists", wkey)
	}
	gotProcessKeys := r.client.ZRange(base.AllProcesses, 0, -1).Val()
	wantProcessKeys := []string{otherPKey}
	gotProcessKeys := r.client.ZRange(base.AllServers, 0, -1).Val()
	wantProcessKeys := []string{otherSKey}
	if diff := cmp.Diff(wantProcessKeys, gotProcessKeys); diff != "" {
		t.Errorf("%q contained %v, want %v", base.AllProcesses, gotProcessKeys, wantProcessKeys)
		t.Errorf("%q contained %v, want %v", base.AllServers, gotProcessKeys, wantProcessKeys)
	}
	gotWorkerKeys := r.client.ZRange(base.AllWorkers, 0, -1).Val()
	wantWorkerKeys := []string{otherWKey}
187
internal/testbroker/testbroker.go
Normal file
@@ -0,0 +1,187 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

// Package testbroker exports a broker implementation that should be used in package testing.
package testbroker

import (
	"errors"
	"sync"
	"time"

	"github.com/go-redis/redis/v7"
	"github.com/hibiken/asynq/internal/base"
)

var errRedisDown = errors.New("asynqtest: redis is down")

// TestBroker is a broker implementation which makes it possible
// to simulate Redis failure in tests.
type TestBroker struct {
	mu sync.Mutex
	sleeping bool

	// real broker
	real base.Broker
}

func NewTestBroker(b base.Broker) *TestBroker {
	return &TestBroker{real: b}
}

func (tb *TestBroker) Sleep() {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	tb.sleeping = true
}

func (tb *TestBroker) Wakeup() {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	tb.sleeping = false
}

func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Enqueue(msg)
}

func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.EnqueueUnique(msg, ttl)
}

func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, error) {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return nil, errRedisDown
	}
	return tb.real.Dequeue(qnames...)
}

func (tb *TestBroker) Done(msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Done(msg)
}

func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Requeue(msg)
}

func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Schedule(msg, processAt)
}

func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.ScheduleUnique(msg, processAt, ttl)
}

func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Retry(msg, processAt, errMsg)
}

func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Kill(msg, errMsg)
}

func (tb *TestBroker) RequeueAll() (int64, error) {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return 0, errRedisDown
	}
	return tb.real.RequeueAll()
}

func (tb *TestBroker) CheckAndEnqueue(qnames ...string) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.CheckAndEnqueue(qnames...)
}

func (tb *TestBroker) WriteServerState(ss *base.ServerState, ttl time.Duration) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.WriteServerState(ss, ttl)
}

func (tb *TestBroker) ClearServerState(ss *base.ServerState) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.ClearServerState(ss)
}

func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return nil, errRedisDown
	}
	return tb.real.CancelationPubSub()
}

func (tb *TestBroker) PublishCancelation(id string) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.PublishCancelation(id)
}

func (tb *TestBroker) Close() error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
	return tb.real.Close()
}
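A hedged sketch of how this fake might be exercised in a test: wrap the real broker, take it "down" with Sleep, then verify calls recover after Wakeup. The `setupRDB` helper and the task message construction are stand-ins for the repo's existing test scaffolding:

```go
func TestBrokerDownAndRecovery(t *testing.T) {
	// setupRDB is a hypothetical stand-in for the test setup used elsewhere.
	r := setupRDB(t) // returns a base.Broker backed by a real Redis instance
	broker := testbroker.NewTestBroker(r)
	msg := h.NewTaskMessage("send_email", nil)

	broker.Sleep() // every broker method now returns errRedisDown
	if err := broker.Enqueue(msg); err == nil {
		t.Fatal("Enqueue succeeded while the broker was down")
	}

	broker.Wakeup() // calls are forwarded to the real broker again
	if err := broker.Enqueue(msg); err != nil {
		t.Fatalf("Enqueue failed after wakeup: %v", err)
	}
}
```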
64
processor.go
@@ -19,9 +19,9 @@ import (

type processor struct {
	logger Logger
	rdb *rdb.RDB
	broker base.Broker

	ps *base.ProcessState
	ss *base.ServerState

	handler Handler

@@ -34,6 +34,8 @@ type processor struct {

	errHandler ErrorHandler

	shutdownTimeout time.Duration

	// channel via which to send sync requests to syncer.
	syncRequestCh chan<- *syncRequest

@@ -61,30 +63,40 @@ type processor struct {

type retryDelayFunc func(n int, err error, task *Task) time.Duration

type newProcessorParams struct {
	logger Logger
	broker base.Broker
	ss *base.ServerState
	retryDelayFunc retryDelayFunc
	syncCh chan<- *syncRequest
	cancelations *base.Cancelations
	errHandler ErrorHandler
	shutdownTimeout time.Duration
}

// newProcessor constructs a new processor.
func newProcessor(l Logger, r *rdb.RDB, ps *base.ProcessState, fn retryDelayFunc,
	syncCh chan<- *syncRequest, c *base.Cancelations, errHandler ErrorHandler) *processor {
	info := ps.Get()
func newProcessor(params newProcessorParams) *processor {
	info := params.ss.GetInfo()
	qcfg := normalizeQueueCfg(info.Queues)
	orderedQueues := []string(nil)
	if info.StrictPriority {
		orderedQueues = sortByPriority(qcfg)
	}
	return &processor{
		logger: l,
		rdb: r,
		ps: ps,
		logger: params.logger,
		broker: params.broker,
		ss: params.ss,
		queueConfig: qcfg,
		orderedQueues: orderedQueues,
		retryDelayFunc: fn,
		syncRequestCh: syncCh,
		cancelations: c,
		retryDelayFunc: params.retryDelayFunc,
		syncRequestCh: params.syncCh,
		cancelations: params.cancelations,
		errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
		sema: make(chan struct{}, info.Concurrency),
		done: make(chan struct{}),
		abort: make(chan struct{}),
		quit: make(chan struct{}),
		errHandler: errHandler,
		errHandler: params.errHandler,
		handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
	}
}
@@ -106,9 +118,7 @@ func (p *processor) stop() {
func (p *processor) terminate() {
	p.stop()

	// IDEA: Allow user to customize this timeout value.
	const timeout = 8 * time.Second
	time.AfterFunc(timeout, func() { close(p.quit) })
	time.AfterFunc(p.shutdownTimeout, func() { close(p.quit) })
	p.logger.Info("Waiting for all workers to finish...")

	// send cancellation signal to all in-progress task handlers
@@ -147,8 +157,8 @@ func (p *processor) start(wg *sync.WaitGroup) {
// process the task.
func (p *processor) exec() {
	qnames := p.queues()
	msg, err := p.rdb.Dequeue(qnames...)
	if err == rdb.ErrNoProcessableTask {
	msg, err := p.broker.Dequeue(qnames...)
	if err == rdb.ErrNoProcessableTask { // TODO: Need to decouple this error from rdb to support other brokers
		// queues are empty, this is a normal behavior.
		if len(p.queueConfig) > 1 {
			// sleep to avoid slamming redis and let scheduler move tasks into queues.
@@ -171,10 +181,10 @@ func (p *processor) exec() {
		p.requeue(msg)
		return
	case p.sema <- struct{}{}: // acquire token
		p.ps.AddWorkerStats(msg, time.Now())
		p.ss.AddWorkerStats(msg, time.Now())
		go func() {
			defer func() {
				p.ps.DeleteWorkerStats(msg)
				p.ss.DeleteWorkerStats(msg)
				<-p.sema /* release token */
			}()

@@ -217,7 +227,7 @@ func (p *processor) exec() {
// restore moves all unfinished tasks from "in-progress" back to the queue.
func (p *processor) restore() {
	n, err := p.rdb.RequeueAll()
	n, err := p.broker.RequeueAll()
	if err != nil {
		p.logger.Error("Could not restore unfinished tasks: %v", err)
	}
@@ -227,20 +237,20 @@
}

func (p *processor) requeue(msg *base.TaskMessage) {
	err := p.rdb.Requeue(msg)
	err := p.broker.Requeue(msg)
	if err != nil {
		p.logger.Error("Could not push task id=%s back to queue: %v", msg.ID, err)
	}
}

func (p *processor) markAsDone(msg *base.TaskMessage) {
	err := p.rdb.Done(msg)
	err := p.broker.Done(msg)
	if err != nil {
		errMsg := fmt.Sprintf("Could not remove task id=%s from %q", msg.ID, base.InProgressQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
				return p.rdb.Done(msg)
				return p.broker.Done(msg)
			},
			errMsg: errMsg,
		}
@@ -250,13 +260,13 @@ func (p *processor) markAsDone(msg *base.TaskMessage) {
func (p *processor) retry(msg *base.TaskMessage, e error) {
	d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
	retryAt := time.Now().Add(d)
	err := p.rdb.Retry(msg, retryAt, e.Error())
	err := p.broker.Retry(msg, retryAt, e.Error())
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
				return p.rdb.Retry(msg, retryAt, e.Error())
				return p.broker.Retry(msg, retryAt, e.Error())
			},
			errMsg: errMsg,
		}
@@ -265,13 +275,13 @@ func (p *processor) retry(msg *base.TaskMessage, e error) {

func (p *processor) kill(msg *base.TaskMessage, e error) {
	p.logger.Warn("Retry exhausted for task id=%s", msg.ID)
	err := p.rdb.Kill(msg, e.Error())
	err := p.broker.Kill(msg, e.Error())
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
		p.logger.Warn("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
				return p.rdb.Kill(msg, e.Error())
				return p.broker.Kill(msg, e.Error())
			},
			errMsg: errMsg,
		}
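Note the pattern in markAsDone, retry, and kill: when a broker call fails, a closure is handed to the syncer over syncRequestCh so the operation is retried later instead of being lost. The syncRequest type itself is not shown in this diff; inferred from how it is constructed above, its shape is presumably:

```go
// Presumed shape of syncRequest, inferred from its usage in processor.go.
type syncRequest struct {
	fn     func() error // operation to retry until it succeeds
	errMsg string       // message to log while the operation keeps failing
}
```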
@@ -67,9 +67,18 @@ func TestProcessorSuccess(t *testing.T) {
		processed = append(processed, task)
		return nil
	}
	ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
	ss := base.NewServerState("localhost", 1234, 10, defaultQueueConfig, false)
	cancelations := base.NewCancelations()
	p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
	p := newProcessor(newProcessorParams{
		logger: testLogger,
		broker: rdbClient,
		ss: ss,
		retryDelayFunc: defaultDelayFunc,
		syncCh: nil,
		cancelations: cancelations,
		errHandler: nil,
		shutdownTimeout: defaultShutdownTimeout,
	})
	p.handler = HandlerFunc(handler)

	var wg sync.WaitGroup
@@ -165,9 +174,18 @@ func TestProcessorRetry(t *testing.T) {
		defer mu.Unlock()
		n++
	}
	ps := base.NewProcessState("localhost", 1234, 10, defaultQueueConfig, false)
	ss := base.NewServerState("localhost", 1234, 10, defaultQueueConfig, false)
	cancelations := base.NewCancelations()
	p := newProcessor(testLogger, rdbClient, ps, delayFunc, nil, cancelations, ErrorHandlerFunc(errHandler))
	p := newProcessor(newProcessorParams{
		logger: testLogger,
		broker: rdbClient,
		ss: ss,
		retryDelayFunc: delayFunc,
		syncCh: nil,
		cancelations: cancelations,
		errHandler: ErrorHandlerFunc(errHandler),
		shutdownTimeout: defaultShutdownTimeout,
	})
	p.handler = tc.handler

	var wg sync.WaitGroup
@@ -232,8 +250,17 @@ func TestProcessorQueues(t *testing.T) {

	for _, tc := range tests {
		cancelations := base.NewCancelations()
		ps := base.NewProcessState("localhost", 1234, 10, tc.queueCfg, false)
		p := newProcessor(testLogger, nil, ps, defaultDelayFunc, nil, cancelations, nil)
		ss := base.NewServerState("localhost", 1234, 10, tc.queueCfg, false)
		p := newProcessor(newProcessorParams{
			logger: testLogger,
			broker: nil,
			ss: ss,
			retryDelayFunc: defaultDelayFunc,
			syncCh: nil,
			cancelations: cancelations,
			errHandler: nil,
			shutdownTimeout: defaultShutdownTimeout,
		})
		got := p.queues()
		if diff := cmp.Diff(tc.want, got, sortOpt); diff != "" {
			t.Errorf("with queue config: %v\n(*processor).queues() = %v, want %v\n(-want,+got):\n%s",
@@ -300,8 +327,17 @@ func TestProcessorWithStrictPriority(t *testing.T) {
	}
	// Note: Set concurrency to 1 to make sure tasks are processed one at a time.
	cancelations := base.NewCancelations()
	ps := base.NewProcessState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
	p := newProcessor(testLogger, rdbClient, ps, defaultDelayFunc, nil, cancelations, nil)
	ss := base.NewServerState("localhost", 1234, 1 /* concurrency */, queueCfg, true /*strict*/)
	p := newProcessor(newProcessorParams{
		logger: testLogger,
		broker: rdbClient,
		ss: ss,
		retryDelayFunc: defaultDelayFunc,
		syncCh: nil,
		cancelations: cancelations,
		errHandler: nil,
		shutdownTimeout: defaultShutdownTimeout,
	})
	p.handler = HandlerFunc(handler)

	var wg sync.WaitGroup
@@ -446,3 +482,83 @@ func TestCreateContextWithoutTimeRestrictions(t *testing.T) {
		t.Error("ctx.Done() blocked, want it to be non-blocking")
	}
}

func TestGCD(t *testing.T) {
	tests := []struct {
		input []int
		want int
	}{
		{[]int{6, 2, 12}, 2},
		{[]int{3, 3, 3}, 3},
		{[]int{6, 3, 1}, 1},
		{[]int{1}, 1},
		{[]int{1, 0, 2}, 1},
		{[]int{8, 0, 4}, 4},
		{[]int{9, 12, 18, 30}, 3},
	}

	for _, tc := range tests {
		got := gcd(tc.input...)
		if got != tc.want {
			t.Errorf("gcd(%v) = %d, want %d", tc.input, got, tc.want)
		}
	}
}

func TestNormalizeQueueCfg(t *testing.T) {
	tests := []struct {
		input map[string]int
		want map[string]int
	}{
		{
			input: map[string]int{
				"high": 100,
				"default": 20,
				"low": 5,
			},
			want: map[string]int{
				"high": 20,
				"default": 4,
				"low": 1,
			},
		},
		{
			input: map[string]int{
				"default": 10,
			},
			want: map[string]int{
				"default": 1,
			},
		},
		{
			input: map[string]int{
				"critical": 5,
				"default": 1,
			},
			want: map[string]int{
				"critical": 5,
				"default": 1,
			},
		},
		{
			input: map[string]int{
				"critical": 6,
				"default": 3,
				"low": 0,
			},
			want: map[string]int{
				"critical": 2,
				"default": 1,
				"low": 0,
			},
		},
	}

	for _, tc := range tests {
		got := normalizeQueueCfg(tc.input)
		if diff := cmp.Diff(tc.want, got); diff != "" {
			t.Errorf("normalizeQueueCfg(%v) = %v, want %v; (-want, +got):\n%s",
				tc.input, got, tc.want, diff)
		}
	}
}
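The tests above fully pin down the normalization behavior: every priority value is divided by the greatest common divisor of all values, with gcd(x, 0) treated as x, so {high: 100, default: 20, low: 5} becomes {20, 4, 1}. A sketch consistent with those cases; the actual definitions live in processor.go and are not shown in this diff, so details may differ:

```go
// gcd over a variadic list; gcd(x, 0) = x, matching the test cases.
func gcd(xs ...int) int {
	gcd2 := func(x, y int) int {
		for y > 0 {
			x, y = y, x%y
		}
		return x
	}
	res := xs[0]
	for _, x := range xs {
		res = gcd2(res, x)
		if res == 1 {
			return 1
		}
	}
	return res
}

// normalizeQueueCfg divides every priority by the gcd of all priorities,
// keeping the relative weights while shrinking the numbers.
func normalizeQueueCfg(qcfg map[string]int) map[string]int {
	var xs []int
	for _, x := range qcfg {
		xs = append(xs, x)
	}
	d := gcd(xs...)
	res := make(map[string]int, len(qcfg))
	for q, x := range qcfg {
		res[q] = x / d
	}
	return res
}
```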
10
scheduler.go
@@ -8,12 +8,12 @@ import (
	"sync"
	"time"

	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/base"
)

type scheduler struct {
	logger Logger
	rdb *rdb.RDB
	broker base.Broker

	// channel to communicate back to the long running "scheduler" goroutine.
	done chan struct{}
@@ -25,14 +25,14 @@ type scheduler struct {
	qnames []string
}

func newScheduler(l Logger, r *rdb.RDB, avgInterval time.Duration, qcfg map[string]int) *scheduler {
func newScheduler(l Logger, b base.Broker, avgInterval time.Duration, qcfg map[string]int) *scheduler {
	var qnames []string
	for q := range qcfg {
		qnames = append(qnames, q)
	}
	return &scheduler{
		logger: l,
		rdb: r,
		broker: b,
		done: make(chan struct{}),
		avgInterval: avgInterval,
		qnames: qnames,
@@ -63,7 +63,7 @@ func (s *scheduler) start(wg *sync.WaitGroup) {
}

func (s *scheduler) exec() {
	if err := s.rdb.CheckAndEnqueue(s.qnames...); err != nil {
	if err := s.broker.CheckAndEnqueue(s.qnames...); err != nil {
		s.logger.Error("Could not enqueue scheduled tasks: %v", err)
	}
}
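The scheduler's only job is to call CheckAndEnqueue on a timer so that tasks whose scheduled or retry time has arrived are moved onto their queues. The start loop itself is outside this hunk; a plausible sketch, assuming exec fires roughly every avgInterval until done is closed (names and details are assumptions):

```go
// Sketch of the polling loop driving exec; not shown in the diff.
func (s *scheduler) loop() {
	for {
		select {
		case <-s.done:
			return
		case <-time.After(s.avgInterval):
			s.exec() // forward due scheduled/retry tasks onto their queues
		}
	}
}
```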
@@ -6,13 +6,13 @@ package asynq

import (
	"context"
	"errors"
	"fmt"
	"math"
	"math/rand"
	"os"
	"os/signal"
	"runtime"
	"sync"
	"syscall"
	"time"

	"github.com/hibiken/asynq/internal/base"
@@ -20,29 +20,27 @@ import (
	"github.com/hibiken/asynq/internal/rdb"
)

// Background is responsible for managing the background-task processing.
// Server is responsible for managing the background-task processing.
//
// Background manages task queues to process tasks.
// If the processing of a task is unsuccessful, background will
// schedule it for a retry until either the task gets processed successfully
// or it exhausts its max retry count.
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, server will
// schedule it for a retry.
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// Once a task exhausts its retries, it will be moved to the "dead" queue and
// If a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
type Background struct {
	mu sync.Mutex
	running bool

	ps *base.ProcessState

	// wait group to wait for all goroutines to finish.
	wg sync.WaitGroup
type Server struct {
	ss *base.ServerState

	logger Logger

	rdb *rdb.RDB
	broker base.Broker

	// wait group to wait for all goroutines to finish.
	wg sync.WaitGroup
	scheduler *scheduler
	processor *processor
	syncer *syncer
@@ -50,11 +48,12 @@ type Background struct {
	subscriber *subscriber
}

// Config specifies the background-task processing behavior.
// Config specifies the server's background-task processing behavior.
type Config struct {
	// Maximum number of concurrent processing of tasks.
	//
	// If set to a zero or negative value, NewBackground will overwrite the value to one.
	// If set to a zero or negative value, NewServer will overwrite the value
	// to the number of CPUs usable by the current process.
	Concurrency int

	// Function to calculate retry delay for a failed task.
@@ -69,7 +68,7 @@ type Config struct {
	// List of queues to process with given priority value. Keys are the names of the
	// queues and values are associated priority value.
	//
	// If set to nil or not specified, the background will process only the "default" queue.
	// If set to nil or not specified, the server will process only the "default" queue.
	//
	// Priority is treated as follows to avoid starving low priority queues.
	//
@@ -108,10 +107,16 @@ type Config struct {
	// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
	ErrorHandler ErrorHandler

	// Logger specifies the logger used by the background instance.
	// Logger specifies the logger used by the server instance.
	//
	// If unset, default logger is used.
	Logger Logger

	// ShutdownTimeout specifies the duration to wait to let workers finish their tasks
	// before forcing them to abort when stopping the server.
	//
	// If unset or zero, default timeout of 8 seconds is used.
	ShutdownTimeout time.Duration
}

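Constructing a server with the new options might look like the following; the queue names and values are illustrative, and the `Queues` field name is inferred from the surrounding comments:

```go
srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
	Concurrency: 20, // omit (or set <= 0) to default to the number of CPUs
	Queues: map[string]int{
		"critical": 6,
		"default":  3,
		"low":      1,
	},
	ShutdownTimeout: 20 * time.Second, // workers get 20s to finish on Stop
})
```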
// An ErrorHandler handles errors returned by the task handler.
@@ -158,12 +163,14 @@ var defaultQueueConfig = map[string]int{
	base.DefaultQueueName: 1,
}

// NewBackground returns a new Background given a redis connection option
const defaultShutdownTimeout = 8 * time.Second

// NewServer returns a new Server given a redis connection option
// and background processing configuration.
func NewBackground(r RedisConnOpt, cfg *Config) *Background {
func NewServer(r RedisConnOpt, cfg Config) *Server {
	n := cfg.Concurrency
	if n < 1 {
		n = 1
		n = runtime.NumCPU()
	}
	delayFunc := cfg.RetryDelayFunc
	if delayFunc == nil {
@@ -182,6 +189,10 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
	if logger == nil {
		logger = log.NewLogger(os.Stderr)
	}
	shutdownTimeout := cfg.ShutdownTimeout
	if shutdownTimeout == 0 {
		shutdownTimeout = defaultShutdownTimeout
	}

	host, err := os.Hostname()
	if err != nil {
@@ -190,18 +201,27 @@ func NewBackground(r RedisConnOpt, cfg *Config) *Background {
	pid := os.Getpid()

	rdb := rdb.NewRDB(createRedisClient(r))
	ps := base.NewProcessState(host, pid, n, queues, cfg.StrictPriority)
	ss := base.NewServerState(host, pid, n, queues, cfg.StrictPriority)
	syncCh := make(chan *syncRequest)
	cancels := base.NewCancelations()
	syncer := newSyncer(logger, syncCh, 5*time.Second)
	heartbeater := newHeartbeater(logger, rdb, ps, 5*time.Second)
	heartbeater := newHeartbeater(logger, rdb, ss, 5*time.Second)
	scheduler := newScheduler(logger, rdb, 5*time.Second, queues)
	processor := newProcessor(logger, rdb, ps, delayFunc, syncCh, cancels, cfg.ErrorHandler)
	subscriber := newSubscriber(logger, rdb, cancels)
	return &Background{
	processor := newProcessor(newProcessorParams{
		logger: logger,
		rdb: rdb,
		ps: ps,
		broker: rdb,
		ss: ss,
		retryDelayFunc: delayFunc,
		syncCh: syncCh,
		cancelations: cancels,
		errHandler: cfg.ErrorHandler,
		shutdownTimeout: shutdownTimeout,
	})
	return &Server{
		ss: ss,
		logger: logger,
		broker: rdb,
		scheduler: scheduler,
		processor: processor,
		syncer: syncer,
@@ -232,82 +252,95 @@ func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
	return fn(ctx, task)
}

// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")

// Run starts the background-task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all pending workers and other
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
func (bg *Background) Run(handler Handler) {
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Run(handler Handler) error {
	if err := srv.Start(handler); err != nil {
		return err
	}
	srv.waitForSignals()
	srv.Stop()
	return nil
}

// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers up to the number of
// concurrency specified at the initialization time.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
func (srv *Server) Start(handler Handler) error {
	if handler == nil {
		return fmt.Errorf("asynq: server cannot run with nil handler")
	}
	switch srv.ss.Status() {
	case base.StatusRunning:
		return fmt.Errorf("asynq: the server is already running")
	case base.StatusStopped:
		return ErrServerStopped
	}
	srv.ss.SetStatus(base.StatusRunning)
	srv.processor.handler = handler

	type prefixLogger interface {
		SetPrefix(prefix string)
	}
	// If logger supports setting prefix, then set prefix for log output.
	if l, ok := bg.logger.(prefixLogger); ok {
	if l, ok := srv.logger.(prefixLogger); ok {
		l.SetPrefix(fmt.Sprintf("asynq: pid=%d ", os.Getpid()))
	}
	bg.logger.Info("Starting processing")
	srv.logger.Info("Starting processing")

	bg.start(handler)
	defer bg.stop()

	bg.logger.Info("Send signal TSTP to stop processing new tasks")
	bg.logger.Info("Send signal TERM or INT to terminate the process")

	// Wait for a signal to terminate.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGTSTP)
	for {
		sig := <-sigs
		if sig == syscall.SIGTSTP {
			bg.processor.stop()
			bg.ps.SetStatus(base.StatusStopped)
			continue
		}
		break
	}
	fmt.Println()
	bg.logger.Info("Starting graceful shutdown")
	srv.heartbeater.start(&srv.wg)
	srv.subscriber.start(&srv.wg)
	srv.syncer.start(&srv.wg)
	srv.scheduler.start(&srv.wg)
	srv.processor.start(&srv.wg)
	return nil
}

// starts the background-task processing.
func (bg *Background) start(handler Handler) {
	bg.mu.Lock()
	defer bg.mu.Unlock()
	if bg.running {
		return
	}

	bg.running = true
	bg.processor.handler = handler

	bg.heartbeater.start(&bg.wg)
	bg.subscriber.start(&bg.wg)
	bg.syncer.start(&bg.wg)
	bg.scheduler.start(&bg.wg)
	bg.processor.start(&bg.wg)
}

// stops the background-task processing.
func (bg *Background) stop() {
	bg.mu.Lock()
	defer bg.mu.Unlock()
	if !bg.running {
// Stop stops the worker server.
// It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for the duration specified in Config.ShutdownTimeout.
// If a worker doesn't finish processing a task within the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() {
	switch srv.ss.Status() {
	case base.StatusIdle, base.StatusStopped:
		// server is not running, do nothing and return.
		return
	}

	fmt.Println() // print newline for prettier log.
	srv.logger.Info("Starting graceful shutdown")
	// Note: The order of termination is important.
	// Sender goroutines should be terminated before the receiver goroutines.
	//
	// processor -> syncer (via syncCh)
	bg.scheduler.terminate()
	bg.processor.terminate()
	bg.syncer.terminate()
	bg.subscriber.terminate()
	bg.heartbeater.terminate()
	srv.scheduler.terminate()
	srv.processor.terminate()
	srv.syncer.terminate()
	srv.subscriber.terminate()
	srv.heartbeater.terminate()

	bg.wg.Wait()
	srv.wg.Wait()

	bg.rdb.Close()
	bg.running = false
	srv.broker.Close()
	srv.ss.SetStatus(base.StatusStopped)

	bg.logger.Info("Bye!")
	srv.logger.Info("Bye!")
}

// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
	srv.processor.stop()
	srv.ss.SetStatus(base.StatusQuiet)
}
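A typical programmatic shutdown sequence with the new API, as opposed to the signal-driven Run, might be (`mux` stands in for any asynq.Handler, e.g. a ServeMux):

```go
srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{})
if err := srv.Start(mux); err != nil {
	log.Fatal(err)
}
// ... later, during deploy or shutdown ...
srv.Quiet() // stop pulling new tasks off queues
srv.Stop()  // wait up to ShutdownTimeout for in-flight tasks, then push them back
```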
85
server_test.go
Normal file
@@ -0,0 +1,85 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

package asynq

import (
    "context"
    "testing"
    "time"

    "go.uber.org/goleak"
)

func TestServer(t *testing.T) {
    // https://github.com/go-redis/redis/issues/1029
    ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
    defer goleak.VerifyNoLeaks(t, ignoreOpt)

    r := &RedisClientOpt{
        Addr: "localhost:6379",
        DB:   15,
    }
    c := NewClient(r)
    srv := NewServer(r, Config{
        Concurrency: 10,
    })

    // no-op handler
    h := func(ctx context.Context, task *Task) error {
        return nil
    }

    err := srv.Start(HandlerFunc(h))
    if err != nil {
        t.Fatal(err)
    }

    err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
    if err != nil {
        t.Errorf("could not enqueue a task: %v", err)
    }

    err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
    if err != nil {
        t.Errorf("could not enqueue a task: %v", err)
    }

    srv.Stop()
}

func TestServerErrServerStopped(t *testing.T) {
    srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
    handler := NewServeMux()
    if err := srv.Start(handler); err != nil {
        t.Fatal(err)
    }
    srv.Stop()
    err := srv.Start(handler)
    if err != ErrServerStopped {
        t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
    }
}

func TestServerErrNilHandler(t *testing.T) {
    srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
    err := srv.Start(nil)
    if err == nil {
        t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
        srv.Stop()
    }
}

func TestServerErrServerRunning(t *testing.T) {
    srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{})
    handler := NewServeMux()
    if err := srv.Start(handler); err != nil {
        t.Fatal(err)
    }
    err := srv.Start(handler)
    if err == nil {
        t.Error("Calling (*Server).Start(handler) on already running server did not return error")
    }
    srv.Stop()
}
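The first test above doubles as a usage reference: build a Client and a Server from the same RedisConnOpt, Start the server with a handler, enqueue work, then Stop. In a real program you would typically let Run block and handle OS signals for you. A sketch assuming only the API exercised above plus the Run method wired to the signal handlers shown below (the send_email handler body is illustrative):

    package main

    import (
        "context"
        "log"

        "github.com/hibiken/asynq"
    )

    func main() {
        srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
            Concurrency: 10,
        })

        h := asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
            log.Printf("processing task %q", t.Type)
            return nil
        })

        // Run starts the server and blocks until a termination signal
        // is received, then shuts down gracefully.
        if err := srv.Run(h); err != nil {
            log.Fatal(err)
        }
    }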
30
signals_unix.go
Normal file
@@ -0,0 +1,30 @@
// +build linux bsd darwin

package asynq

import (
    "os"
    "os/signal"

    "golang.org/x/sys/unix"
)

// waitForSignals waits for signals and handles them.
// It handles SIGTERM, SIGINT, and SIGTSTP.
// SIGTERM and SIGINT will signal the process to exit.
// SIGTSTP will signal the process to stop processing new tasks.
func (srv *Server) waitForSignals() {
    srv.logger.Info("Send signal TSTP to stop processing new tasks")
    srv.logger.Info("Send signal TERM or INT to terminate the process")

    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, unix.SIGTERM, unix.SIGINT, unix.SIGTSTP)
    for {
        sig := <-sigs
        if sig == unix.SIGTSTP {
            srv.Quiet()
            continue
        }
        break
    }
}
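Taken together with Quiet and Stop above, this lets you drain a worker from the shell before terminating it. Assuming the server process has PID 1234 (illustrative):

    kill -TSTP 1234    # server stops pulling new tasks (Quiet)
    kill -TERM 1234    # server shuts down gracefully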
22
signals_windows.go
Normal file
@@ -0,0 +1,22 @@
// +build windows

package asynq

import (
    "os"
    "os/signal"

    "golang.org/x/sys/windows"
)

// waitForSignals waits for signals and handles them.
// It handles SIGTERM and SIGINT.
// SIGTERM and SIGINT will signal the process to exit.
//
// Note: Currently SIGTSTP is not supported for windows build.
func (srv *Server) waitForSignals() {
    srv.logger.Info("Send signal TERM or INT to terminate the process")
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
    <-sigs
}
@@ -6,28 +6,33 @@ package asynq

import (
    "sync"
    "time"

    "github.com/go-redis/redis/v7"
    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
)

type subscriber struct {
    logger Logger
    rdb    *rdb.RDB
    broker base.Broker

    // channel to communicate back to the long running "subscriber" goroutine.
    done chan struct{}

    // cancelations hold cancel functions for all in-progress tasks.
    cancelations *base.Cancelations

    // time to wait before retrying to connect to redis.
    retryTimeout time.Duration
}

func newSubscriber(l Logger, rdb *rdb.RDB, cancelations *base.Cancelations) *subscriber {
func newSubscriber(l Logger, b base.Broker, cancelations *base.Cancelations) *subscriber {
    return &subscriber{
        logger:       l,
        rdb:          rdb,
        broker:       b,
        done:         make(chan struct{}),
        cancelations: cancelations,
        retryTimeout: 5 * time.Second,
    }
}

@@ -38,15 +43,29 @@ func (s *subscriber) terminate() {
}

func (s *subscriber) start(wg *sync.WaitGroup) {
    pubsub, err := s.rdb.CancelationPubSub()
    if err != nil {
        s.logger.Error("cannot subscribe to cancelation channel: %v", err)
        return
    }
    cancelCh := pubsub.Channel()
    wg.Add(1)
    go func() {
        defer wg.Done()
        var (
            pubsub *redis.PubSub
            err    error
        )
        // Try until successfully connect to Redis.
        for {
            pubsub, err = s.broker.CancelationPubSub()
            if err != nil {
                s.logger.Error("cannot subscribe to cancelation channel: %v", err)
                select {
                case <-time.After(s.retryTimeout):
                    continue
                case <-s.done:
                    s.logger.Info("Subscriber done")
                    return
                }
            }
            break
        }
        cancelCh := pubsub.Channel()
        for {
            select {
            case <-s.done:
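The new loop above is a reusable pattern: retry an operation on an interval, but abort promptly when a done channel closes. A generic sketch of the same idea (names are illustrative, not part of asynq):

    package retry

    import "time"

    // retryUntil keeps calling op until it succeeds, waiting interval
    // between attempts; it aborts and reports failure once done is closed.
    func retryUntil(op func() error, interval time.Duration, done <-chan struct{}) bool {
        for {
            if err := op(); err == nil {
                return true
            }
            select {
            case <-time.After(interval):
                // wait elapsed; retry
            case <-done:
                return false
            }
        }
    }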
@@ -11,6 +11,7 @@ import (

    "github.com/hibiken/asynq/internal/base"
    "github.com/hibiken/asynq/internal/rdb"
    "github.com/hibiken/asynq/internal/testbroker"
)

func TestSubscriber(t *testing.T) {
@@ -40,13 +41,16 @@ func TestSubscriber(t *testing.T) {
        subscriber := newSubscriber(testLogger, rdbClient, cancelations)
        var wg sync.WaitGroup
        subscriber.start(&wg)
        defer subscriber.terminate()

        // wait for subscriber to establish connection to pubsub channel
        time.Sleep(time.Second)

        if err := rdbClient.PublishCancelation(tc.publishID); err != nil {
            subscriber.terminate()
            t.Fatalf("could not publish cancelation message: %v", err)
        }

        // allow for redis to publish message
        // wait for redis to publish message
        time.Sleep(time.Second)

        mu.Lock()
@@ -58,7 +62,53 @@ func TestSubscriber(t *testing.T) {
            }
        }
        mu.Unlock()
    }
}

        subscriber.terminate()
func TestSubscriberWithRedisDown(t *testing.T) {
    defer func() {
        if r := recover(); r != nil {
            t.Errorf("panic occurred: %v", r)
        }
    }()
    r := rdb.NewRDB(setup(t))
    testBroker := testbroker.NewTestBroker(r)

    cancelations := base.NewCancelations()
    subscriber := newSubscriber(testLogger, testBroker, cancelations)
    subscriber.retryTimeout = 1 * time.Second // set shorter retry timeout for testing purpose.

    testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
    var wg sync.WaitGroup
    subscriber.start(&wg)
    defer subscriber.terminate()

    time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.

    testBroker.Wakeup() // simulate a situation where redis server is back online.

    time.Sleep(2 * time.Second) // allow subscriber to establish pubsub channel.

    const id = "test"
    var (
        mu     sync.Mutex
        called bool
    )
    cancelations.Add(id, func() {
        mu.Lock()
        defer mu.Unlock()
        called = true
    })

    if err := r.PublishCancelation(id); err != nil {
        t.Fatalf("could not publish cancelation message: %v", err)
    }

    time.Sleep(time.Second) // wait for redis to publish message.

    mu.Lock()
    if !called {
        t.Errorf("cancel function was not called")
    }
    mu.Unlock()
}
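TestSubscriberWithRedisDown relies on a broker wrapper that can simulate an outage. The testbroker package is internal to asynq; a minimal sketch of the same toggle-able failure pattern, over a deliberately narrowed hypothetical interface (the real base.Broker is larger):

    package testbroker

    import (
        "errors"
        "sync"

        "github.com/go-redis/redis/v7"
    )

    // broker is the minimal slice of the interface this sketch needs.
    type broker interface {
        CancelationPubSub() (*redis.PubSub, error)
    }

    // TestBroker wraps a real broker and can simulate Redis being unreachable.
    type TestBroker struct {
        mu       sync.Mutex
        sleeping bool
        real     broker
    }

    func NewTestBroker(b broker) *TestBroker { return &TestBroker{real: b} }

    // Sleep makes subsequent calls fail, as if the Redis server were down.
    func (tb *TestBroker) Sleep() {
        tb.mu.Lock()
        defer tb.mu.Unlock()
        tb.sleeping = true
    }

    // Wakeup restores normal operation.
    func (tb *TestBroker) Wakeup() {
        tb.mu.Lock()
        defer tb.mu.Unlock()
        tb.sleeping = false
    }

    // CancelationPubSub delegates to the real broker unless sleeping.
    func (tb *TestBroker) CancelationPubSub() (*redis.PubSub, error) {
        tb.mu.Lock()
        defer tb.mu.Unlock()
        if tb.sleeping {
            return nil, errors.New("testbroker: broker is sleeping")
        }
        return tb.real.CancelationPubSub()
    }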
@@ -1,6 +1,6 @@
# Asynqmon
# Asynq CLI

Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
Asynq CLI is a command line tool to monitor the tasks managed by `asynq` package.

## Table of Contents

@@ -8,7 +8,7 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Process Status](#process-status)
- [Servers](#servers)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)

@@ -20,19 +20,19 @@ Asynqmon is a command line tool to monitor the tasks managed by `asynq` package.

In order to use the tool, compile it using the following command:

    go get github.com/hibiken/asynq/tools/asynqmon
    go get github.com/hibiken/asynq/tools/asynq

This will create the asynqmon executable under your `$GOPATH/bin` directory.
This will create the asynq executable under your `$GOPATH/bin` directory.

## Quickstart

The tool has a few commands to inspect the state of tasks and queues.

Run `asynqmon help` to see all the available commands.
Run `asynq help` to see all the available commands.

Asynqmon needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.

By default, Asynqmon will try to connect to a redis server running at `localhost:6379`.
By default, CLI will try to connect to a redis server running at `localhost:6379`.

### Stats

@@ -40,11 +40,11 @@ Stats command gives the overview of the current state of tasks and queues. You c

Example:

    watch -n 3 asynqmon stats
    watch -n 3 asynq stats

This will run `asynqmon stats` command every 3 seconds.
This will run `asynq stats` command every 3 seconds.

![Gif](
![Gif](

### History

@@ -54,19 -54,17 @@ By default, it shows the stats from the last 10 days. Use `--days` to specify th

Example:

    asynqmon history --days=30
    asynq history --days=30

![Gif](
![Gif](

### Process Status
### Servers

PS (ProcessStatus) command shows the list of running worker processes.
Servers command shows the list of running worker servers pulling tasks from the given redis instance.

Example:

    asynqmon ps

![Gif](
    asynq servers

### List

@@ -74,11 +72,11 @@ List command shows all tasks in the specified state in a table format

Example:

    asynqmon ls retry
    asynqmon ls scheduled
    asynqmon ls dead
    asynqmon ls enqueued:default
    asynqmon ls inprogress
    asynq ls retry
    asynq ls scheduled
    asynq ls dead
    asynq ls enqueued:default
    asynq ls inprogress

### Enqueue

@@ -88,13 +86,13 @@ Command `enq` takes a task ID and moves the task to **Enqueued** state. You can

Example:

    asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g
    asynq enq d:1575732274:bnogo8gt6toe23vhef0g

Command `enqall` moves all tasks to **Enqueued** state from the specified state.

Example:

    asynqmon enqall retry
    asynq enqall retry

Running the above command will move all **Retry** tasks to **Enqueued** state.

@@ -106,13 +104,13 @@ Command `del` takes a task ID and deletes the task. You can obtain the task ID b

Example:

    asynqmon del r:1575732274:bnogo8gt6toe23vhef0g
    asynq del r:1575732274:bnogo8gt6toe23vhef0g

Command `delall` deletes all tasks which are in the specified state.

Example:

    asynqmon delall retry
    asynq delall retry

Running the above command will delete all **Retry** tasks.

@@ -124,13 +122,13 @@ Command `kill` takes a task ID and kills the task. You can obtain the task ID by

Example:

    asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g
    asynq kill r:1575732274:bnogo8gt6toe23vhef0g

Command `killall` kills all tasks which are in the specified state.

Example:

    asynqmon killall retry
    asynq killall retry

Running the above command will move all **Retry** tasks to **Dead** state.

@@ -144,15 +142,15 @@ Handler implementation needs to be context aware in order to actually stop proce

Example:

    asynqmon cancel bnogo8gt6toe23vhef0g
    asynq cancel bnogo8gt6toe23vhef0g

## Config File

You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server.

By default, `asynqmon` will try to read config file located in
`$HOME/.asynqmon.(yaml|json)`. You can specify the file location via `--config` flag.
By default, `asynq` will try to read config file located in
`$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag.

Config file example:
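The example itself is cut off by the diff view here. Based on the flags registered in the root command below (`uri`, `db`, `password`), a plausible YAML config would be (values illustrative):

    uri: 127.0.0.1:6379
    db: 2
    password: mypassword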
@@ -18,17 +18,17 @@ import (
var cancelCmd = &cobra.Command{
    Use:   "cancel [task id]",
    Short: "Sends a cancelation signal to the goroutine processing the specified task",
    Long: `Cancel (asynqmon cancel) will send a cancelation signal to the goroutine processing
    Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
the specified task.

The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.

Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing.

Example: asynqmon cancel bnogo8gt6toe23vhef0g`,
Example: asynq cancel bnogo8gt6toe23vhef0g`,
    Args: cobra.ExactArgs(1),
    Run:  cancel,
}
@@ -18,13 +18,13 @@ import (
var delCmd = &cobra.Command{
    Use:   "del [task id]",
    Short: "Deletes a task given an identifier",
    Long: `Del (asynqmon del) will delete a task given an identifier.
    Long: `Del (asynq del) will delete a task given an identifier.

The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.

Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq del d:1575732274:bnogo8gt6toe23vhef0g`,
    Args: cobra.ExactArgs(1),
    Run:  del,
}
@@ -20,11 +20,11 @@ var delallValidArgs = []string{"scheduled", "retry", "dead"}
var delallCmd = &cobra.Command{
    Use:   "delall [state]",
    Short: "Deletes all tasks in the specified state",
    Long: `Delall (asynqmon delall) will delete all tasks in the specified state.
    Long: `Delall (asynq delall) will delete all tasks in the specified state.

The argument should be one of "scheduled", "retry", or "dead".

Example: asynqmon delall dead -> Deletes all dead tasks`,
Example: asynq delall dead -> Deletes all dead tasks`,
    ValidArgs: delallValidArgs,
    Args:      cobra.ExactValidArgs(1),
    Run:       delall,
@@ -60,7 +60,7 @@ func delall(cmd *cobra.Command, args []string) {
    case "dead":
        err = r.DeleteAllDeadTasks()
    default:
        fmt.Printf("error: `asynqmon delall [state]` only accepts %v as the argument.\n", delallValidArgs)
        fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
        os.Exit(1)
    }
    if err != nil {
@@ -18,16 +18,16 @@ import (
var enqCmd = &cobra.Command{
    Use:   "enq [task id]",
    Short: "Enqueues a task given an identifier",
    Long: `Enq (asynqmon enq) will enqueue a task given an identifier.
    Long: `Enq (asynq enq) will enqueue a task given an identifier.

The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.

The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor.

Example: asynqmon enq d:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
    Args: cobra.ExactArgs(1),
    Run:  enq,
}
@@ -20,14 +20,14 @@ var enqallValidArgs = []string{"scheduled", "retry", "dead"}
var enqallCmd = &cobra.Command{
    Use:   "enqall [state]",
    Short: "Enqueues all tasks in the specified state",
    Long: `Enqall (asynqmon enqall) will enqueue all tasks in the specified state.
    Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.

The argument should be one of "scheduled", "retry", or "dead".

The tasks enqueued by this command will be processed as soon as it
gets dequeued by a processor.

Example: asynqmon enqall dead -> Enqueues all dead tasks`,
Example: asynq enqall dead -> Enqueues all dead tasks`,
    ValidArgs: enqallValidArgs,
    Args:      cobra.ExactValidArgs(1),
    Run:       enqall,
@@ -64,7 +64,7 @@ func enqall(cmd *cobra.Command, args []string) {
    case "dead":
        n, err = r.EnqueueAllDeadTasks()
    default:
        fmt.Printf("error: `asynqmon enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
        fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
        os.Exit(1)
    }
    if err != nil {
@@ -22,12 +22,12 @@ var days int
var historyCmd = &cobra.Command{
    Use:   "history",
    Short: "Shows historical aggregate data",
    Long: `History (asynqmon history) will show the number of processed and failed tasks
    Long: `History (asynq history) will show the number of processed and failed tasks
from the last x days.

By default, it will show the data from the last 10 days.

Example: asynqmon history -x=30 -> Shows stats from the last 30 days`,
Example: asynq history -x=30 -> Shows stats from the last 30 days`,
    Args: cobra.NoArgs,
    Run:  history,
}
@@ -18,13 +18,13 @@ import (
var killCmd = &cobra.Command{
    Use:   "kill [task id]",
    Short: "Kills a task given an identifier",
    Long: `Kill (asynqmon kill) will put a task in dead state given an identifier.
    Long: `Kill (asynq kill) will put a task in dead state given an identifier.

The command takes one argument which specifies the task to kill.
The task should be in either scheduled or retry state.
Identifier for a task should be obtained by running "asynqmon ls" command.
Identifier for a task should be obtained by running "asynq ls" command.

Example: asynqmon kill r:1575732274:bnogo8gt6toe23vhef0g`,
Example: asynq kill r:1575732274:bnogo8gt6toe23vhef0g`,
    Args: cobra.ExactArgs(1),
    Run:  kill,
}
@@ -20,11 +20,11 @@ var killallValidArgs = []string{"scheduled", "retry"}
var killallCmd = &cobra.Command{
    Use:   "killall [state]",
    Short: "Kills all tasks in the specified state",
    Long: `Killall (asynqmon killall) will update all tasks from the specified state to dead state.
    Long: `Killall (asynq killall) will update all tasks from the specified state to dead state.

The argument should be either "scheduled" or "retry".

Example: asynqmon killall retry -> Update all retry tasks to dead tasks`,
Example: asynq killall retry -> Update all retry tasks to dead tasks`,
    ValidArgs: killallValidArgs,
    Args:      cobra.ExactValidArgs(1),
    Run:       killall,
@@ -59,7 +59,7 @@ func killall(cmd *cobra.Command, args []string) {
    case "retry":
        n, err = r.KillAllRetryTasks()
    default:
        fmt.Printf("error: `asynqmon killall [state]` only accepts %v as the argument.\n", killallValidArgs)
        fmt.Printf("error: `asynq killall [state]` only accepts %v as the argument.\n", killallValidArgs)
        os.Exit(1)
    }
    if err != nil {
@@ -25,19 +25,19 @@ var lsValidArgs = []string{"enqueued", "inprogress", "scheduled", "retry", "dead
var lsCmd = &cobra.Command{
    Use:   "ls [state]",
    Short: "Lists tasks in the specified state",
    Long: `Ls (asynqmon ls) will list all tasks in the specified state in a table format.
    Long: `Ls (asynq ls) will list all tasks in the specified state in a table format.

The command takes one argument which specifies the state of tasks.
The argument value should be one of "enqueued", "inprogress", "scheduled",
"retry", or "dead".

Example:
asynqmon ls dead -> Lists all tasks in dead state
asynq ls dead -> Lists all tasks in dead state

Enqueued tasks requires a queue name after ":"
Example:
asynqmon ls enqueued:default -> List tasks from default queue
asynqmon ls enqueued:critical -> List tasks from critical queue
asynq ls enqueued:default -> List tasks from default queue
asynq ls enqueued:critical -> List tasks from critical queue
`,
    Args: cobra.ExactValidArgs(1),
    Run:  ls,
@@ -72,7 +72,7 @@ func ls(cmd *cobra.Command, args []string) {
    switch parts[0] {
    case "enqueued":
        if len(parts) != 2 {
            fmt.Printf("error: Missing queue name\n`asynqmon ls enqueued:[queue name]`\n")
            fmt.Printf("error: Missing queue name\n`asynq ls enqueued:[queue name]`\n")
            os.Exit(1)
        }
        listEnqueued(r, parts[1])
@@ -85,7 +85,7 @@ func ls(cmd *cobra.Command, args []string) {
    case "dead":
        listDead(r)
    default:
        fmt.Printf("error: `asynqmon ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
        fmt.Printf("error: `asynq ls [state]`\nonly accepts %v as the argument.\n", lsValidArgs)
        os.Exit(1)
    }
}
@@ -18,11 +18,11 @@ import (
var rmqCmd = &cobra.Command{
    Use:   "rmq [queue name]",
    Short: "Removes the specified queue",
    Long: `Rmq (asynqmon rmq) will remove the specified queue.
    Long: `Rmq (asynq rmq) will remove the specified queue.
By default, it will remove the queue only if it's empty.
Use --force option to override this behavior.

Example: asynqmon rmq low -> Removes "low" queue`,
Example: asynq rmq low -> Removes "low" queue`,
    Args: cobra.ExactValidArgs(1),
    Run:  rmq,
}

@@ -44,7 +44,7 @@ func rmq(cmd *cobra.Command, args []string) {
    err := r.RemoveQueue(args[0], rmqForce)
    if err != nil {
        if _, ok := err.(*rdb.ErrQueueNotEmpty); ok {
            fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynqmon rmq --force %s'\n", err, args[0])
            fmt.Printf("error: %v\nIf you are sure you want to delete it, run 'asynq rmq --force %s'\n", err, args[0])
            os.Exit(1)
        }
        fmt.Printf("error: %v", err)
@@ -26,9 +26,9 @@ var password string

// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
    Use:   "asynqmon",
    Use:   "asynq",
    Short: "A monitoring tool for asynq queues",
    Long:  `Asynqmon is a montoring CLI to inspect tasks and queues managed by asynq.`,
    Long:  `Asynq is a monitoring CLI to inspect tasks and queues managed by asynq.`,
}

// Execute adds all child commands to the root command and sets flags appropriately.
@@ -43,7 +43,7 @@ func Execute() {
func init() {
    cobra.OnInitialize(initConfig)

    rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynqmon.yaml)")
    rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag default values (default is $HOME/.asynq.yaml)")
    rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
    rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
    rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
@@ -65,9 +65,9 @@ func initConfig() {
        os.Exit(1)
    }

    // Search config in home directory with name ".asynqmon" (without extension).
    // Search config in home directory with name ".asynq" (without extension).
    viper.AddConfigPath(home)
    viper.SetConfigName(".asynqmon")
    viper.SetConfigName(".asynq")
}

viper.AutomaticEnv() // read in environment variables that match
@@ -18,64 +18,64 @@ import (
    "github.com/spf13/viper"
)

// psCmd represents the ps command
var psCmd = &cobra.Command{
    Use:   "ps",
    Short: "Shows all background worker processes",
    Long: `Ps (asynqmon ps) will show all background worker processes
backed by the specified redis instance.
// serversCmd represents the servers command
var serversCmd = &cobra.Command{
    Use:   "servers",
    Short: "Shows all running worker servers",
    Long: `Servers (asynq servers) will show all running worker servers
pulling tasks from the specified redis instance.

The command shows the following for each process:
* Host and PID of the process
The command shows the following for each server:
* Host and PID of the process in which the server is running
* Number of active workers out of worker pool
* Queue configuration
* State of the worker process ("running" | "stopped")
* Time the process was started
* State of the worker server ("running" | "quiet")
* Time the server was started

A "running" process is processing tasks in queues.
A "stopped" process is no longer processing new tasks.`,
A "running" server is pulling tasks from queues and processing them.
A "quiet" server is no longer pulling new tasks from queues.`,
    Args: cobra.NoArgs,
    Run:  ps,
    Run:  servers,
}

func init() {
    rootCmd.AddCommand(psCmd)
    rootCmd.AddCommand(serversCmd)
}

func ps(cmd *cobra.Command, args []string) {
func servers(cmd *cobra.Command, args []string) {
    r := rdb.NewRDB(redis.NewClient(&redis.Options{
        Addr:     viper.GetString("uri"),
        DB:       viper.GetInt("db"),
        Password: viper.GetString("password"),
    }))

    processes, err := r.ListProcesses()
    servers, err := r.ListServers()
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    if len(processes) == 0 {
        fmt.Println("No processes")
    if len(servers) == 0 {
        fmt.Println("No running servers")
        return
    }

    // sort by hostname and pid
    sort.Slice(processes, func(i, j int) bool {
        x, y := processes[i], processes[j]
    sort.Slice(servers, func(i, j int) bool {
        x, y := servers[i], servers[j]
        if x.Host != y.Host {
            return x.Host < y.Host
        }
        return x.PID < y.PID
    })

    // print processes
    // print server info
    cols := []string{"Host", "PID", "State", "Active Workers", "Queues", "Started"}
    printRows := func(w io.Writer, tmpl string) {
        for _, ps := range processes {
        for _, info := range servers {
            fmt.Fprintf(w, tmpl,
                ps.Host, ps.PID, ps.Status,
                fmt.Sprintf("%d/%d", ps.ActiveWorkerCount, ps.Concurrency),
                formatQueues(ps.Queues), timeAgo(ps.Started))
                info.Host, info.PID, info.Status,
                fmt.Sprintf("%d/%d", info.ActiveWorkerCount, info.Concurrency),
                formatQueues(info.Queues), timeAgo(info.Started))
        }
    }
    printTable(cols, printRows)
@@ -33,7 +33,7 @@ Specifically, the command shows the following:
To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command.

Example: watch -n 3 asynqmon stats -> Shows current state of tasks every three seconds`,
Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
    Args: cobra.NoArgs,
    Run:  stats,
}
@@ -20,7 +20,7 @@ import (
var workersCmd = &cobra.Command{
    Use:   "workers",
    Short: "Shows all running workers information",
    Long: `Workers (asynqmon workers) will show all running workers information.
    Long: `Workers (asynq workers) will show all running workers information.

The command shows the following for each worker:
* Process in which the worker is running
@@ -4,7 +4,7 @@

package main

import "github.com/hibiken/asynq/tools/asynqmon/cmd"
import "github.com/hibiken/asynq/tools/asynq/cmd"

func main() {
    cmd.Execute()
}