mirror of https://github.com/hibiken/asynq.git synced 2025-06-07 15:22:55 +08:00


521 Commits

Author SHA1 Message Date
Khash Sajadi
c327bc40a2
docs: Update server.go (#1010)
Typo in the docs
2025-04-01 09:06:12 +03:00
Broderick Westrope
ea0c6e93f0
chore: fix godoc comment (#1009) 2025-04-01 09:05:18 +03:00
Mohammed Sohail
489e21920b
release: v0.25.1 2024-12-11 09:19:37 +03:00
Mohamed Sohail
043dcfbf56
fix: call Stop on all other signals to correctly set the server state for the shutdown procedure to complete successfully (#982)
* fixes: #979
2024-12-11 09:05:00 +03:00
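
Editor's note: the fix above is about ordering — Stop() must run before Shutdown() so the server state is set correctly and the shutdown procedure can complete. A minimal sketch of that sequence against the public API, handling signals by hand instead of relying on srv.Run (the exact signal set covered by the fix is not reproduced here):

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{})
	if err := srv.Start(asynq.NewServeMux()); err != nil {
		log.Fatal(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM, syscall.SIGTSTP)
	<-sigs

	srv.Stop()     // transition the server state out of active processing first
	srv.Shutdown() // then wait for in-flight tasks and release resources
}
```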
Robin Joseph
02907551b4
feat(dash): Add --insecure option (#980) 2024-12-09 09:09:12 +03:00
Mohamed Sohail
127fac2e90
fix: NewScheduler incorrectly creates underlying Client, closing broker properly (#977)
* fix: NewScheduler wrongly creates a client whose sharedConnection value is always true

* This is affecting the PeriodicManager as well as the Scheduler

* fix: closing the Client also closes the broker

* The error was also previously unhandled. For shared connections an error will be returned by the broker itself because the sharedConnection bool is also set on the client. This also means we can get rid of the sharedConnection flag on the Scheduler itself and let it work internally.
2024-12-06 08:40:04 +03:00
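
Editor's note: a hedged sketch of the shared-connection case behind this fix — when the Scheduler is built from a caller-owned redis client, closing the Scheduler must not close that client, and close errors now surface from the broker. NewSchedulerFromRedisClient is assumed from the v0.25 reuse-redis-client work listed further down this log:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
	"github.com/redis/go-redis/v9"
)

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	defer rdb.Close() // the caller owns this connection, not the Scheduler

	sched := asynq.NewSchedulerFromRedisClient(rdb, nil)
	if _, err := sched.Register("@every 1m", asynq.NewTask("heartbeat", nil)); err != nil {
		log.Fatal(err)
	}
	if err := sched.Run(); err != nil { // shutdown must leave rdb usable
		log.Fatal(err)
	}
}
```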
dependabot[bot]
106c07adaa
build(deps): bump codecov/codecov-action from 4 to 5 (#970)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 07:37:44 +03:00
dependabot[bot]
1c7195ff1a
build(deps): bump google.golang.org/protobuf from 1.35.1 to 1.35.2 (#971)
Bumps google.golang.org/protobuf from 1.35.1 to 1.35.2.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 07:37:13 +03:00
Xijun Dai
12cbba4926
feat(periodic_task_manager): Add RedisUniversalClient support (#958)
Signed-off-by: Xijun Dai <daixijun1990@gmail.com>
2024-11-13 14:48:56 +03:00
Khash Sajadi
80479b528d
Include registration error in the log (#657)
* Include registration error in the log

* remove chatty debug log

this will show in the logs every 5 seconds as debug (not even trace) which leads to a lot of noise
2024-11-13 14:09:59 +03:00
dependabot[bot]
e14c312fe3
build(deps): bump golang.org/x/sys from 0.26.0 to 0.27.0 (#963)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.26.0 to 0.27.0.
- [Commits](https://github.com/golang/sys/compare/v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:38:50 +03:00
dependabot[bot]
ad1f587403
build(deps): bump golang.org/x/time from 0.7.0 to 0.8.0 (#964)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.7.0 to 0.8.0.
- [Commits](https://github.com/golang/time/compare/v0.7.0...v0.8.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:13:01 +03:00
dependabot[bot]
8b32b38fd5
build(deps): bump github.com/mattn/go-runewidth in /tools (#965)
Bumps [github.com/mattn/go-runewidth](https://github.com/mattn/go-runewidth) from 0.0.13 to 0.0.16.
- [Commits](https://github.com/mattn/go-runewidth/compare/v0.0.13...v0.0.16)

---
updated-dependencies:
- dependency-name: github.com/mattn/go-runewidth
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:12:43 +03:00
dependabot[bot]
96a84fac0c
build(deps): bump github.com/hibiken/asynq in /tools (#966)
Bumps [github.com/hibiken/asynq](https://github.com/hibiken/asynq) from 0.24.1 to 0.25.0.
- [Release notes](https://github.com/hibiken/asynq/releases)
- [Changelog](https://github.com/hibiken/asynq/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hibiken/asynq/compare/v0.24.1...v0.25.0)

---
updated-dependencies:
- dependency-name: github.com/hibiken/asynq
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:11:52 +03:00
ghosx
d2c207fbb8
fix: queues map init with size (#673)
Co-authored-by: yipinhe <yipinhe@tencent.com>
2024-11-11 08:25:42 +03:00
Pior Bastida
1a7c61ac49
Use string concat instead of fmt.Sprintf (#962) 2024-11-11 08:20:16 +03:00
dependabot[bot]
87375b5534
Bump codecov/codecov-action from 1 to 4 (#930)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:18 +03:00
dependabot[bot]
ffd75ebb5f
Bump actions/upload-artifact from 3 to 4 (#929)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:10 +03:00
dependabot[bot]
d64fd328cb
Bump actions/download-artifact from 3 to 4 (#931)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:00 +03:00
Pior Bastida
4f00f52c1d
Add the scheduler option HeartbeatInterval (#956)
* Add the scheduler option HeartbeatInterval

* Fix possible premature expiration of scheduler entries
2024-11-07 08:34:28 +03:00
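
Editor's note: a short usage sketch for the option above; the HeartbeatInterval field name on SchedulerOpts is taken from the PR title and should be read as an assumption:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	sched := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		// A shorter heartbeat refreshes scheduler-entry liveness more often,
		// which is what prevents the premature expiration fixed in the PR.
		&asynq.SchedulerOpts{HeartbeatInterval: 10 * time.Second},
	)
	if _, err := sched.Register("*/5 * * * *", asynq.NewTask("cleanup", nil)); err != nil {
		log.Fatal(err)
	}
	if err := sched.Run(); err != nil {
		log.Fatal(err)
	}
}
```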
dependabot[bot]
580d69e88f
build(deps): bump github.com/redis/go-redis/v9 in /tools (#957)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.0.5 to 9.7.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.0.5...v9.7.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-05 07:59:30 +03:00
dependabot[bot]
c97652d408
build(deps): bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /tools (#954)
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.9 to 0.6.0.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.5.9...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 09:58:40 +03:00
dependabot[bot]
4644d37ef4
Bump github.com/prometheus/client_golang from 1.11.1 to 1.20.5 in /x (#934)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.1 to 1.20.5.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.1...v1.20.5)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 09:58:08 +03:00
dependabot[bot]
45c0fc6ad9
build(deps): bump github.com/hibiken/asynq from 0.24.1 to 0.25.0 in /x (#955)
Bumps [github.com/hibiken/asynq](https://github.com/hibiken/asynq) from 0.24.1 to 0.25.0.
- [Release notes](https://github.com/hibiken/asynq/releases)
- [Changelog](https://github.com/hibiken/asynq/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hibiken/asynq/compare/v0.24.1...v0.25.0)

---
updated-dependencies:
- dependency-name: github.com/hibiken/asynq
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 08:16:01 +03:00
Mohamed Sohail
fd3eb86d95
release: v0.25.0
* prepare release (docs): v0.25.0

* docs: add PR 946 to changelog

* docs: update issue templates, add relatively stable update

* This project should be considered relatively stable because we haven't broken the API in over 2 years.

* docs: add Redis Cluster compatibility caveat
2024-11-01 11:13:57 +03:00
Pior Bastida
3dbda60333
Improve performance of enqueueing tasks (#946)
* Improve performance of enqueueing tasks

Add an in-memory cache to keep track of all the queues. Use this cache
to avoid sending an SADD since after the first call, that extra network
call isn't necessary.

The cache will expire every 10 secs so for cases where the queue is
deleted from asynq:queues set, it can be added again next time a task is
enqueued to it.

* Use sync.Map to simplify the conditional SADD

* Cleanup queuePublished in RemoveQueue

---------

Co-authored-by: Yousif <753751+yousifh@users.noreply.github.com>
2024-10-30 08:25:35 +03:00
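
Editor's note: a minimal Go sketch of the caching pattern #946 describes, using illustrative names (queueCache, ensureQueueRegistered) rather than asynq's actual internals; the 10-second cache expiry mentioned in the message is omitted for brevity:

```go
package enqueue

import (
	"context"
	"sync"

	"github.com/redis/go-redis/v9"
)

type queueCache struct {
	rdb       *redis.Client
	published sync.Map // queue name -> struct{}
}

// ensureQueueRegistered issues SADD only the first time a queue is seen;
// subsequent enqueues to the same queue skip the extra network call.
func (c *queueCache) ensureQueueRegistered(ctx context.Context, qname string) error {
	if _, seen := c.published.Load(qname); seen {
		return nil
	}
	if err := c.rdb.SAdd(ctx, "asynq:queues", qname).Err(); err != nil {
		return err
	}
	c.published.Store(qname, struct{}{})
	return nil
}
```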
dependabot[bot]
02c6dae7eb
Bump golang.org/x/time from 0.3.0 to 0.7.0 (#948)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.3.0 to 0.7.0.
- [Commits](https://github.com/golang/time/compare/v0.3.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 10:12:44 +03:00
dependabot[bot]
5cfcb71139
Bump github.com/spf13/cast from 1.5.1 to 1.7.0 (#938)
Bumps [github.com/spf13/cast](https://github.com/spf13/cast) from 1.5.1 to 1.7.0.
- [Release notes](https://github.com/spf13/cast/releases)
- [Commits](https://github.com/spf13/cast/compare/v1.5.1...v1.7.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:59:38 +03:00
dependabot[bot]
c78e7b0ccd
Bump golang.org/x/sys from 0.16.0 to 0.26.0 (#933)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.16.0 to 0.26.0.
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.26.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:59:20 +03:00
dependabot[bot]
b4db174032
Bump github.com/redis/go-redis/v9 from 9.4.0 to 9.7.0 (#935)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.4.0 to 9.7.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.4.0...v9.7.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:25:57 +03:00
dependabot[bot]
39f1d8c3e6
Bump github.com/fatih/color from 1.9.0 to 1.18.0 in /tools (#941)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.9.0 to 1.18.0.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.9.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:24:06 +03:00
Ahmed Radwan
e70de721b8
remove deprecated protobuf ptypes (#942)
* remove deprecated protobuf ptypes

* tidy compiled proto and go mod

* bump protobuf
2024-10-29 09:21:27 +03:00
Mohamed Sohail
6c06ad7e45
Revert "Bump golang.org/x/time from 0.3.0 to 0.7.0" (#947) 2024-10-29 09:20:45 +03:00
Mohamed Sohail
a676d3d2fa
Merge pull request #937 from hibiken/dependabot/go_modules/golang.org/x/time-0.7.0
Bump golang.org/x/time from 0.3.0 to 0.7.0
2024-10-29 09:18:06 +03:00
Mohamed Sohail
ef0d32965f
Merge pull request #945 from Shopify/fix-test-default-port
Update tests to use the configured Redis address
2024-10-29 08:45:33 +03:00
Mohamed Sohail
f16f9ac440
Merge pull request #944 from Shopify/randv2
Use math/rand/v2
2024-10-29 08:42:38 +03:00
Pior Bastida
63f7cb7b17
Use math/rand/v2 2024-10-28 18:39:54 +01:00
Pior Bastida
04b3a3475d
Update tests to use the configured Redis address 2024-10-28 12:48:56 +01:00
Marcus Boorstin
013190b824
Add task enqueue command to cli (#918) 2024-10-26 13:04:54 +03:00
Skwol
1e102a5392
Need to support redis sentinel username. (#924) 2024-10-26 13:04:21 +03:00
dependabot[bot]
e1a8a366a6
Bump golang.org/x/time from 0.3.0 to 0.7.0
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.3.0 to 0.7.0.
- [Commits](https://github.com/golang/time/compare/v0.3.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-26 05:49:43 +00:00
Pior Bastida
c8c8adfaa6
Configure dependabot to update Github Actions (#928) 2024-10-26 08:48:41 +03:00
Pior Bastida
03f4799712
Run golangci-lint in CI (#927)
* Setup golangci-lint in CI and local-dev

* Fix linting error or locally disable linter
2024-10-26 08:48:12 +03:00
Pior Bastida
3f4e211a3b
Call all context cancelFunc in processor (#926) 2024-10-26 08:39:13 +03:00
Pior Bastida
0655c569f5
Bump to Go 1.22 and 1.23 (#925) 2024-10-26 08:33:58 +03:00
Pior Bastida
95a0768ae0
Add jitter on the processor fetch backoff sleep (#868) 2024-10-19 10:46:48 +03:00
Mohammed Sohail
f4b56498f2
docs: small fix on semantics 2024-10-19 10:07:17 +03:00
kanzihuang
ae478d5b22
feat: revoke the task to modify task parameters and enqueue new task with the same task id (#882) 2024-10-19 10:06:12 +03:00
kanzihuang
ff7ef48463
fix: possible inconsistent scores between ProcessIn and ProcessAt (#876) 2024-10-19 09:45:18 +03:00
Patrick Barnum
b1e13893ff
[RFC] Adds Ping() to client/scheduler/server (#585)
* [RFC] Adds Ping() to client/scheduler/server

* Checks for scheduler state closed
2024-10-19 09:44:06 +03:00
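
Editor's note: assuming Ping landed on the public types as the RFC describes, a readiness-probe sketch using the Client variant:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// Ping reports whether the broker is reachable; useful in health checks.
	if err := client.Ping(); err != nil {
		log.Fatalf("broker unreachable: %v", err)
	}
	log.Println("broker OK")
}
```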
Harrison Miller
0dc670d7d8
Archived tasks that are trimmed from the set are deleted (#743)
* fixed trimmed archive tasks not being deleted.

* improved test case.

* changed ZRANGEBYSCORE to ZRANGE with BYSCORE option.

---------

Co-authored-by: Harrison <harrison@Harrisons-MacBook-Pro.local>
Co-authored-by: Harrison Miller <harrison.miller@MBP-Harrison-Miller-M2.local>
2024-10-19 09:18:09 +03:00
Mohammed Sohail
461d922616
docs: apply recommended updates
* additionally, we log an error in case the redis client cannot shut down in the scheduler
2024-10-19 09:05:17 +03:00
Mohammed Sohail
5daa3c52ed
Merge remote-tracking branch 'jerbob92-fork/feature/implement-reusing-redis-client' into develop 2024-10-19 08:58:39 +03:00
Tedja
d04888e748
feature: configurable janitor interval and deletion batch size (#715)
* feature: configurable janitor interval and deletion batch size

* warn user when they set a big number of janitor batch size

* Update CHANGELOG.md

---------

Co-authored-by: Agung Hariadi Tedja <agung.tedja@kumparan.com>
2024-05-06 14:11:52 +08:00
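
Editor's note: a configuration sketch for the feature above; the JanitorInterval and JanitorBatchSize field names are assumed from the PR (released with v0.25.0):

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		JanitorInterval:  30 * time.Second, // how often expired completed tasks are purged
		JanitorBatchSize: 100,              // deletions per pass; very large values can stall redis
	})
	if err := srv.Run(asynq.NewServeMux()); err != nil {
		log.Fatal(err)
	}
}
```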
Trịnh Đức Bảo Linh(Kevin)
174008843d
feat(*): correct panic error (#758)
* error panic handling

* updated CHANGELOG.md file

* correct msg panic error (#5)

* correct msg panic error
2024-05-06 13:46:19 +08:00
Mohamed Sohail 天命
2b632b93d5
chore: fix function names in comment (pull request #860 from camcui/master)
chore: fix function names in comment
2024-04-23 00:56:52 +08:00
camcui
b35b559d40 chore: fix function names in comment
Signed-off-by: camcui <cuishua@sina.cn>
2024-04-12 13:54:08 +08:00
Mohamed Sohail
8df0bfa583
Merge pull request #843 from mrusme/fix-bsd
Fix go:build for BSD
2024-03-15 13:32:19 +08:00
mrusme
b25d10b61d
Fixed go:build for BSD 2024-03-14 20:26:33 +05:00
crazyoptimist
38f7499b71
fix(typo): delete-all to deleteall (#827)
* typo: delete-all to deleteall

* docs: update tools/asynq/README.md

* fix archiveall runall

---------

Co-authored-by: Mohamed Sohail <sohailsameja@gmail.com>
2024-02-23 09:17:12 +03:00
dependabot[bot]
0a73fc6201
Bump go.uber.org/goleak from 1.1.12 to 1.3.0 (#770)
Bumps [go.uber.org/goleak](https://github.com/uber-go/goleak) from 1.1.12 to 1.3.0.
- [Release notes](https://github.com/uber-go/goleak/releases)
- [Changelog](https://github.com/uber-go/goleak/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/goleak/compare/v1.1.12...v1.3.0)

---
updated-dependencies:
- dependency-name: go.uber.org/goleak
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:37:18 +03:00
dependabot[bot]
1a11a33b4f
Bump github.com/google/uuid from 1.3.0 to 1.6.0 (#810)
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.3.0 to 1.6.0.
- [Release notes](https://github.com/google/uuid/releases)
- [Changelog](https://github.com/google/uuid/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/uuid/compare/v1.3.0...v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:35:55 +03:00
dependabot[bot]
f0888df813
Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 (#767)
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.9 to 0.6.0.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.5.9...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:33:57 +03:00
dependabot[bot]
c2dd648a51
Bump github.com/redis/go-redis/v9 from 9.0.3 to 9.4.0 (#809)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.0.3 to 9.4.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.0.3...v9.4.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:33:16 +03:00
Andrii Buriachevskyi
a3cca853a0
docs: include version in CLI package installation (#802)
go install github.com/hibiken/asynq/tools/asynq@latest
2024-01-29 09:46:47 +03:00
dependabot[bot]
83df622a92
Bump golang.org/x/sys from 0.0.0-20211216021012-1d35b9e2eb4e to 0.16.0 (#807)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.0.0-20211216021012-1d35b9e2eb4e to 0.16.0.
- [Commits](https://github.com/golang/sys/commits/v0.16.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 09:42:38 +03:00
krhubert
fdbf54eb04 Update docs 2023-12-10 09:49:43 -08:00
Hubert Krauze
16ec43cbca Add option to configure task check interval 2023-12-10 09:49:43 -08:00
yeqown
1e0bf88bf3 fix: listLeaseExpiredCmd doesn't ignore possibly empty value of task in lua script 2023-12-10 09:47:55 -08:00
yeqown
d0041c55a3 fix(274): ignore empty data to append to msgs
fix issue 274
2023-12-10 09:47:55 -08:00
Mohammed Sohail
7ef0511f35 ci: upgrade benchstat actions, go version -> 1.21.x
* closes #759

Squashed commit of the following:

commit 3d94ee14aeaf9a868dbeed4b65f90ccdda1f08d6
Author: Mohammed Sohail <sohailsameja@gmail.com>
Date:   Thu Dec 7 11:49:07 2023 +0300

    ci: upgrade benchstat actions, go version -> 1.21.x

commit 129e2253118c76d640ce7dcfbcba36d562316f97
Author: angshumukherjee100 <angshumukherjee100@gmail.com>
Date:   Sun Oct 8 11:20:43 2023 +0530

    (workflow): bump go version to 1.18 in benchstat
2023-12-10 09:46:45 -08:00
Mohammed Sohail
1ec90810db chore: Update redis to v9 in x/go.mod (#795)
Squashed commit of the following:

commit 6e3656db222a3f9347ee4806ef065a1b9b01a214
Author: Mohammed Sohail <sohailsameja@gmail.com>
Date:   Thu Dec 7 11:12:41 2023 +0300

    pkg(x): go version update -> 1.20

commit 2931df37081ff64abcd8a647014925ad2b9461eb
Author: Amaury <1293565+amaury1729@users.noreply.github.com>
Date:   Wed Dec 6 17:47:03 2023 +0100

    fix tests

commit 11227804cbfc71a01af1c06782210ccfd560ed5d
Author: Amaury <1293565+amaury1729@users.noreply.github.com>
Date:   Wed Dec 6 16:40:32 2023 +0100

    chore: Update redis to v9 in x/go.mod
2023-12-10 09:46:45 -08:00
Mohammed Sohail
90188a093d ci: upgrade actions, lock redis version 2023-12-10 09:46:45 -08:00
Mohammed Sohail
e05f0b7196 ci/docs: update go versions
Squashed commit of the following:

commit de18fe9839da60e927a9e2b143fb57f8c8e0bacc
Author: Ken Hibino <ken.hibino7@gmail.com>
Date:   Sun Sep 17 19:35:33 2023 -0700

    Update README about supported Go versions

commit 714d62bb75de80590af0e0051fa1d1710ba02895
Author: Ken Hibino <ken.hibino7@gmail.com>
Date:   Sun Sep 17 19:26:16 2023 -0700

    Bound build to the latest two go versions
2023-12-10 09:46:45 -08:00
Mohammed Sohail
c1096a0fae pkg: go version update -> 1.20 2023-12-10 09:46:45 -08:00
Jeroen Bobbeldijk
9e548fc097 Implement reusing redis client 2023-09-19 11:20:32 +02:00
dependabot[bot]
6a7bf2ceff Bump github.com/google/uuid from 1.3.0 to 1.3.1 in /x
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/google/uuid/releases)
- [Changelog](https://github.com/google/uuid/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/uuid/compare/v1.3.0...v1.3.1)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:37:24 -07:00
dependabot[bot]
e7fa0ae865 Bump google.golang.org/protobuf from 1.26.0 to 1.31.0
Bumps google.golang.org/protobuf from 1.26.0 to 1.31.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:37:01 -07:00
dependabot[bot]
fc4b6713f6 Bump github.com/spf13/cast from 1.3.1 to 1.5.1
Bumps [github.com/spf13/cast](https://github.com/spf13/cast) from 1.3.1 to 1.5.1.
- [Release notes](https://github.com/spf13/cast/releases)
- [Commits](https://github.com/spf13/cast/compare/v1.3.1...v1.5.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:36:38 -07:00
dependabot[bot]
6b98c0bbae
Bump github.com/google/uuid from 1.2.0 to 1.3.0 (#699)
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/google/uuid/releases)
- [Commits](https://github.com/google/uuid/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 16:14:25 +08:00
dependabot[bot]
ed1ab8ee55
Bump github.com/golang/protobuf from 1.5.2 to 1.5.3 (#703)
Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.5.2 to 1.5.3.
- [Release notes](https://github.com/golang/protobuf/releases)
- [Commits](https://github.com/golang/protobuf/compare/v1.5.2...v1.5.3)

---
updated-dependencies:
- dependency-name: github.com/golang/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 16:14:12 +08:00
dependabot[bot]
e18c0381ad
Bump golang.org/x/time from 0.0.0-20190308202827-9d24e82272b4 to 0.3.0 (#696)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.0.0-20190308202827-9d24e82272b4 to 0.3.0.
- [Commits](https://github.com/golang/time/commits/v0.3.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 15:59:16 +08:00
Mohammed Sohail
8b422c237c feat (ci/cd): add dependabot weekly checks 2023-07-29 21:07:37 -07:00
dependabot[bot]
e6f74c1c2b
Bump golang.org/x/text from 0.3.7 to 0.3.8 in /tools (#619)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.7...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 15:12:45 +08:00
dependabot[bot]
6edba6994e
Bump github.com/prometheus/client_golang from 1.11.0 to 1.11.1 in /tools (#614)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 11:44:52 +08:00
dependabot[bot]
571f0d2613
Bump github.com/prometheus/client_golang from 1.11.0 to 1.11.1 in /x (#615)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 11:43:45 +08:00
Andrew Bezzub
2165ed133b Upgrade tools to use redis v9 2023-07-20 07:04:41 -07:00
Trịnh Đức Bảo Linh(Kevin)
551b0c7119
feat (add): panic error handling (#491)
* closes #487
2023-07-20 21:33:39 +08:00
guoguangwu
123d560a44 chore: replace loop with mux.mws = append(mux.mws, mws...) 2023-07-07 21:01:54 -07:00
Ken Hibino
5bef53d1ac
Update README.md to include sponsoring section 2023-07-07 21:00:05 -07:00
Ken Hibino
90af7749ca
Update FUNDING.yml 2023-07-07 20:54:12 -07:00
guoguangwu
e4b8663154 chore: unnecessary use of fmt.Sprintf 2023-07-07 20:45:42 -07:00
Ken Hibino
fde294be32 v0.24.1 2023-05-01 06:48:07 -07:00
Mohammed Sohail
cbb1be34ac (tools & x): revert to v8 version cc777eb
* revert state to as it was before v9 updates for tools and x modules
2023-04-17 22:30:33 -07:00
Mohammed Sohail
6ed70adf3b fix: breaking build below go < 1.18
* see https://github.com/redis/go-redis/pull/2458 for more info
2023-04-17 22:30:33 -07:00
Mohammed Sohail
1f42d71e9b pkg (tools): revert replace directive in go.mod
* this was previously reverted also in #392
2023-04-17 22:30:33 -07:00
Phước Trung
f966a6c3b8 completely update Redis package
Signed-off-by: Mohammed Sohail <sohailsameja@gmail.com>
2023-04-17 22:30:33 -07:00
Mohammed Sohail
8b057b8767 tests: restore ignore goleak.IgnoreTopFunction 2023-04-17 22:30:33 -07:00
Phước Trung
c72bfef094 fix unit test
Signed-off-by: Mohammed Sohail <sohailsameja@gmail.com>
2023-04-17 22:30:33 -07:00
Mohammed Sohail
dffb78cca4 pkg: remove v8 refs 2023-04-17 22:30:33 -07:00
Emanuel Bennici
0275df8df4 Update redis/go-redis to v9
Version v9 implements the support for Redis v7 and has some
other improvements.
2023-04-17 22:30:33 -07:00
cui fliter
cc777ebdaa fix some typos
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-01-05 20:03:02 -08:00
Ken Hibino
783071c47f v0.24.0 2023-01-02 14:55:33 -08:00
Ken Hibino
bafed907e9 Fix redis script error 2023-01-02 14:53:45 -08:00
Ken Hibino
0b8cfad703 (cli:fix) Read --cluster flag from config file 2022-12-18 21:11:01 -08:00
Zhidong Chen
c08f142b56 fix redis sentinel url parse 2022-09-25 15:04:04 -07:00
徐胖
c70ff6a335
fix a missing ticker.stop() 2022-06-26 13:10:06 -07:00
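
Editor's note: the generic Go pattern behind this one-line fix — a time.Ticker holds resources until Stop is called, so a loop that owns a ticker should defer it. A sketch with illustrative parameters:

```go
package worker

import "time"

// runLoop polls doWork every interval until done is closed. Without the
// deferred Stop, the ticker keeps its timer resources alive after the
// loop exits.
func runLoop(interval time.Duration, done <-chan struct{}, doWork func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop() // the call this commit adds

	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			doWork()
		}
	}
}
```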
Ken Hibino
a04ba6411d
Fix dash command flags 2022-06-22 20:57:11 -07:00
Ken Hibino
d0209d9273
Remove error log from Scheduler 2022-06-04 12:48:56 -07:00
Ken Hibino
6c954c87bf
Update readme 2022-06-03 04:14:45 -07:00
Ken Hibino
86fe31990b
(cli): Add dash command 2022-06-02 19:23:06 -07:00
Chih Sean Hsu
e0e5d1ac24
Add pre and post enqueue callback options for Scheduler 2022-05-27 10:50:02 -07:00
Trịnh Đức Bảo Linh
30d409371b
Fix comment typos 2022-05-16 21:14:15 -07:00
Mohab Abd El-Dayem
aefd276146
Upgrade goleak version 2022-05-08 10:17:22 -07:00
Mohab Abd El-Dayem
94ad9e5e74
Update CONTRIBUTING.md to use git ssh 2022-05-08 09:21:33 -07:00
Ken Hibino
5187844ca5
Update CLI install command in README 2022-05-06 16:55:54 -07:00
Ken Hibino
4dd2b5738a
(cli): Improve help command output 2022-05-06 16:18:40 -07:00
Jeffrey Lo
9116c096ec docs: correct typo (deafult => default) 2022-05-04 19:10:09 -07:00
Erwan Leboucher
5c723f597e Correct the error message to cancel an active task 2022-04-13 06:08:46 -07:00
Ken Hibino
dd6f84c575 (cli): Use asynq v0.23 2022-04-13 06:07:31 -07:00
Ken Hibino
c438339c3d Fix date in changelog 2022-04-12 06:22:42 -07:00
Ken Hibino
901938b0fe Update readme 2022-04-11 17:07:24 -07:00
Ken Hibino
245d4fe663 v0.23.0 2022-04-11 16:57:33 -07:00
Ken Hibino
94719e325c Update CHANGELOG.md 2022-04-11 16:55:43 -07:00
Ken Hibino
8b2a787759 Rename variables for public API documentation 2022-04-11 16:55:43 -07:00
Ken Hibino
451be7e50f (cli): Update stats command to print the number of aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
578321f226 (cli): Extend task ls command to list aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
2c783566f3 (cli): Add group ls command 2022-04-11 16:55:43 -07:00
Ken Hibino
39718f8bea Always enqueue the aggregated task in the same queue 2022-04-11 16:55:43 -07:00
Ken Hibino
829f64fd38 Define GroupAggregator interface 2022-04-11 16:55:43 -07:00
Ken Hibino
a369443955 Add batch actions to inspector for aggregating tasks
Added:
- Inspector.DeleteAllAggregatingTasks
- Inspector.ArchiveAllAggregatingTasks
- Inspector.RunAllAggregatingTasks
2022-04-11 16:55:43 -07:00
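
Editor's note: a usage sketch for the batch actions listed above; the (queue, group) argument order and int count return value are assumed from the Inspector's other batch methods:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	insp := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer insp.Close()

	// Flush a group immediately instead of waiting for aggregation to trigger.
	n, err := insp.RunAllAggregatingTasks("default", "notifications")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("moved %d aggregating tasks to pending\n", n)
}
```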
Ken Hibino
de139cc18e Update RDB.RunTask to schedule aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
74db013ab9 Add RDB.RunAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
725105ca03 Update RDB.ArchiveTask to archive aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
d8f31e45f1 Add RDB.ArchiveAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
9023cbf4be Update RDB.DeleteTask to handle aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
9279c09125 Add RDB.DeleteAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
bc27126670 Fix memory usage lua script 2022-04-11 16:55:43 -07:00
Ken Hibino
0cfa7f47ba Fix memory_usage lua script 2022-04-11 16:55:43 -07:00
Ken Hibino
8a4fb71dd5 Update go.mod with replace directive 2022-04-11 16:55:43 -07:00
Ken Hibino
7fb5b25944 Add Inspector.ListAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
71bd8f0535 Add RDB.ListAggregating 2022-04-11 16:55:43 -07:00
Ken Hibino
4c8432e0ce Add Inspector.Groups method 2022-04-11 16:55:43 -07:00
Ken Hibino
e939b5d166 Rename asynqtest package to testutil 2022-04-11 16:55:43 -07:00
Ken Hibino
1acd62c760 Move test helpers to asynqtest package 2022-04-11 16:55:43 -07:00
Ken Hibino
0149396bae Add RDB.GroupStats for inspecting groups 2022-04-11 16:55:43 -07:00
Ken Hibino
45ed560708 Add Group field to TaskInfo struct 2022-04-11 16:55:43 -07:00
Ken Hibino
01eeb8756e (cli): Update queue inspect cmd to show # of groups and aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
47af17cfb4 Fix RDB.CurrentStats to report the correct queue size 2022-04-11 16:55:43 -07:00
Ken Hibino
eb064c2bab Fix AggregationCheck with unlimited size to clear group name from all-groups set 2022-04-11 16:55:43 -07:00
Ken Hibino
652939dd3a Update memory usage redis lua script to account for groups 2022-04-11 16:55:43 -07:00
Ken Hibino
efe3c74037 Show number of groups and aggregating task count in QueueInfo 2022-04-11 16:55:43 -07:00
Ken Hibino
74d2eea4e0 Clear group if aggregation set empties the group 2022-04-11 16:55:43 -07:00
Ken Hibino
60a4dc1401 Add test for DeleteAggregationSet error case 2022-04-11 16:55:43 -07:00
Ken Hibino
4b716780ef Rewrite test for DeleteAggregationSet function with a new pattern 2022-04-11 16:55:43 -07:00
Ken Hibino
e63f41fb24 Fix DeleteAggregationSet 2022-04-11 16:55:43 -07:00
Ken Hibino
1c388baf06 Implement RDB.ReclaimStaleAggregationSets 2022-04-11 16:55:43 -07:00
Ken Hibino
47a66231b3 Store aggregation set *key* in all aggreationsets zset 2022-04-11 16:55:43 -07:00
Ken Hibino
3551d3334c Use zset for aggregation set to preserve score 2022-04-11 16:55:43 -07:00
Ken Hibino
8b16ede8bc Declare ReclaimStaleAggregationSets 2022-04-11 16:55:43 -07:00
Ken Hibino
c8658a53e6 Add aggregator test 2022-04-11 16:55:43 -07:00
Ken Hibino
562506c7ba Fix client to return error when nil task is passed 2022-04-11 16:55:43 -07:00
Ken Hibino
888b5590fb Make GroupMaxSize and GroupMaxDelay config optional 2022-04-11 16:55:43 -07:00
Ken Hibino
196db64d4d Run aggregator on the server 2022-04-11 16:55:43 -07:00
Ken Hibino
4b35eb0e1a Fix RDB.AggregationCheck when run against an empty group 2022-04-11 16:55:43 -07:00
Ken Hibino
b29fe58434 Implement RDB.ListGroups 2022-04-11 16:55:43 -07:00
Ken Hibino
7849b1114c Implement RDB.DeleteAggregationSet 2022-04-11 16:55:43 -07:00
Ken Hibino
99c00bffeb Implement RDB.AggregationCheck 2022-04-11 16:55:43 -07:00
Ken Hibino
4542b52da8 Check for aggregation at an interval <= gracePeriod 2022-04-11 16:55:43 -07:00
Ken Hibino
d841dc2f8d Add initial implementation of aggregator 2022-04-11 16:55:43 -07:00
Ken Hibino
ab28234767 Update client dependency to base.Broker 2022-04-11 16:55:43 -07:00
Ken Hibino
eb27b0fe1e Add TaskMessageBuilder type as a test helper 2022-04-11 16:55:43 -07:00
Ken Hibino
088be63ee4 Update forwarder to use time.Timer 2022-04-11 16:55:43 -07:00
Ken Hibino
ed69667e86 Update ForwardIfReady test with group 2022-04-11 16:55:43 -07:00
Ken Hibino
4e8885276c Update client to store groupKey under TaskMessage 2022-04-11 16:55:43 -07:00
Ken Hibino
401f7fb4fe Add GroupKey field to TaskMessage 2022-04-11 16:55:43 -07:00
Ken Hibino
61854ea1dc Update RDB.ForwardIfReady to forward to group if groupKey is specified 2022-04-11 16:55:43 -07:00
Ken Hibino
f17c157b0f Update Client to add task to group if Group option is specified 2022-04-11 16:55:43 -07:00
Ken Hibino
8b582899ad Add RDB.AddToGroup and RDB.AddToGroupUnique methods 2022-04-11 16:55:43 -07:00
Ken Hibino
e3d2939a4c Add helper functions to generate group key 2022-04-11 16:55:43 -07:00
Ken Hibino
2ce71e83b0 Add Group task option 2022-04-11 16:55:43 -07:00
Ken Hibino
1608366032 Add group related configuration options 2022-04-11 16:55:43 -07:00
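
Editor's note: the series above builds the task-aggregation feature released in v0.23. A compact sketch of how the pieces fit together (the payload concatenation is illustrative):

```go
package main

import (
	"bytes"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	redisOpt := asynq.RedisClientOpt{Addr: "localhost:6379"}

	// Server side: combine held tasks of the same group into one task.
	srv := asynq.NewServer(redisOpt, asynq.Config{
		GroupGracePeriod: 2 * time.Minute, // keep waiting while new tasks join
		GroupMaxSize:     20,              // aggregate early once 20 tasks accumulate
		GroupAggregator: asynq.GroupAggregatorFunc(func(group string, tasks []*asynq.Task) *asynq.Task {
			var payload bytes.Buffer
			for _, t := range tasks {
				payload.Write(t.Payload())
				payload.WriteByte('\n')
			}
			return asynq.NewTask("aggregated:"+group, payload.Bytes())
		}),
	})

	// Client side: the Group option opts a task into aggregation.
	client := asynq.NewClient(redisOpt)
	defer client.Close()
	if _, err := client.Enqueue(asynq.NewTask("email:digest", []byte("hello")), asynq.Group("digests")); err != nil {
		log.Fatal(err)
	}

	if err := srv.Run(asynq.NewServeMux()); err != nil { // real code registers a handler for "aggregated:digests"
		log.Fatal(err)
	}
}
```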
ashang
3f4f0c1daa
Use explicit types for limit constants 2022-03-29 06:30:10 -07:00
Ken Hibino
f94a65dc9f Add go1.18 to build workflow matrix 2022-03-22 06:51:28 -07:00
Erwan Leboucher
04d7c8c38c
Add rediss url parsing support 2022-02-24 08:30:55 -08:00
Ken Hibino
c04fd41653 v0.22.1 2022-02-20 06:22:55 -08:00
Ken Hibino
7e5efb0e30 Drop GT option from RDB.ExtendLease
GT option in ZAdd is supported for redis v6.2.0 or above.
This change fixes redis version compatibility (currently v4.0+)
2022-02-20 06:20:38 -08:00
Ken Hibino
cabf8d3627 Fix changelog 2022-02-19 06:21:56 -08:00
Ken Hibino
a19909f5f4 v0.22.0 2022-02-19 06:20:05 -08:00
Ken Hibino
cea5110d15 Add IsOrphaned field to TaskInfo 2022-02-19 06:15:44 -08:00
Ken Hibino
9b63e23274 Update log messages 2022-02-19 06:15:44 -08:00
Ken Hibino
de25201d9f Make timeutil.SimulatedClock concurrency safe 2022-02-19 06:15:44 -08:00
Ken Hibino
ec560afb01 Fix processor test 2022-02-19 06:15:44 -08:00
Ken Hibino
d4006894ad Remove base.DeadlinesKey 2022-02-19 06:15:44 -08:00
Ken Hibino
59927509d8 Remove timeout and deadline fields under task key 2022-02-19 06:15:44 -08:00
Ken Hibino
8211167de2 Update processor to create a lease and watch for expiration 2022-02-19 06:15:44 -08:00
Ken Hibino
d7169cd445 Update heartbeat to extend lease of active workers 2022-02-19 06:15:44 -08:00
Ken Hibino
dfae8638e1 Update RDB methods to work with lease 2022-02-19 06:15:44 -08:00
Ken Hibino
b9943de2ab Add Lease type to base package 2022-02-19 06:15:44 -08:00
Ken Hibino
871474f220 Update heartbeat goroutine to call ExtendLease on active tasks 2022-02-19 06:15:44 -08:00
Ken Hibino
87dc392c7f Add RDB.ExtendLease method 2022-02-19 06:15:44 -08:00
Ken Hibino
dabcb120d5 Update recoverer to use ListLeaseExpired 2022-02-19 06:15:44 -08:00
Ken Hibino
bc2f1986d7 Update ListDeadlineExceeded to ListLeaseExpired 2022-02-19 06:15:44 -08:00
Ken Hibino
b8cb579407 Update RDB methods to use lease instead of deadlines set 2022-02-19 06:15:44 -08:00
Ken Hibino
bca624792c Move task deadline compute logic to processor 2022-02-19 06:15:44 -08:00
Ken Hibino
d865d89900 Update RDB.Dequeue to insert task ID to lease set 2022-02-19 06:15:44 -08:00
Ken Hibino
852af7abd1 Add base.LeaseKey helper function 2022-02-19 06:15:44 -08:00
Ken Hibino
5490d2c625 Fix tests 2022-02-16 07:08:01 -08:00
Binaek Sarkar
ebd7a32c0f conventions 2022-02-16 06:43:08 -08:00
Binaek Sarkar
55d0610a03 test and changelog 2022-02-16 06:43:08 -08:00
Binaek Sarkar
ab8a4f5b1e review corrections 2022-02-16 06:43:08 -08:00
Binaek Sarkar
d7ceb0c090 first cut 2022-02-16 06:43:08 -08:00
Ken Hibino
8bd70c6f84 (ci): Run go (build|test) commands for each module 2022-02-01 07:00:00 -08:00
Ken Hibino
10ab4e3745 Remove replace directives in go.mod 2022-02-01 06:18:41 -08:00
Ken Hibino
349f4c50fb Add example for ResultWriter 2022-01-31 09:08:41 -08:00
Ken Hibino
dff2e3a336 v0.21.0 2022-01-22 06:15:29 -08:00
Ken Hibino
65040af7b5 Update changelog 2022-01-22 06:14:24 -08:00
Ken Hibino
053fe2d1ee Create PeriodicTaskManager 2022-01-22 05:59:33 -08:00
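
Editor's note: a usage sketch for PeriodicTaskManager — a provider returns the desired cron entries, and the manager keeps the scheduler in sync with them at each SyncInterval:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// staticProvider satisfies PeriodicTaskConfigProvider; a real provider would
// typically re-read entries from a file or database on every sync.
type staticProvider struct{}

func (p *staticProvider) GetConfigs() ([]*asynq.PeriodicTaskConfig, error) {
	return []*asynq.PeriodicTaskConfig{
		{Cronspec: "*/5 * * * *", Task: asynq.NewTask("report:generate", nil)},
	}, nil
}

func main() {
	mgr, err := asynq.NewPeriodicTaskManager(asynq.PeriodicTaskManagerOpts{
		RedisConnOpt:               asynq.RedisClientOpt{Addr: "localhost:6379"},
		PeriodicTaskConfigProvider: &staticProvider{},
		SyncInterval:               time.Minute, // re-read configs once a minute
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := mgr.Run(); err != nil {
		log.Fatal(err)
	}
}
```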
Ken Hibino
25832e5e95
Fix bug related to concurrently executing server state changes 2022-01-12 09:10:56 -08:00
Ken Hibino
aa26f3819e
Fix flaky tests 2022-01-05 09:07:42 -08:00
Ken Hibino
d94614bb9b
Add CODE_OF_CONDUCT.md 2022-01-04 06:17:48 -08:00
Mahdi Dibaiee
ce46b07652
Allow configuration of DelayedTaskCheckInterval 2022-01-03 14:44:00 -08:00
Mahdi Dibaiee
2d0170541c
Add --json flag for asynq stats command 2022-01-02 07:24:29 -08:00
Andreas Thomas
c1f08106da
fix: missing import statement in example code 2021-12-27 05:40:10 -08:00
Ken Hibino
74cf804197
Update readme 2021-12-20 05:51:51 -08:00
Ken Hibino
8dfabfccb3 Fix build 2021-12-19 07:06:37 -08:00
Ken Hibino
5f20edcbd1 v0.20.0 2021-12-19 07:00:21 -08:00
Ken Hibino
1ddb2f7bce Use math.MaxInt64 instead of custom const 2021-12-19 06:58:12 -08:00
Ken Hibino
82d18e3d91
Record total tasks processed/failed 2021-12-16 16:53:02 -08:00
Ken Hibino
43cb4ddf19
Add queue metrics exporter
Changes:
- Added `x/metrics` package
- Added `tools/metrics_exporter` binary
2021-12-16 06:01:01 -08:00
Francisco Miamoto
ddfc6747a1 Fix typo in Server doc 2021-12-13 16:23:30 -08:00
Ken Hibino
970cb7a606 v0.19.1 2021-12-12 06:16:13 -08:00
Ken Hibino
157e97e72e Update changelog 2021-12-11 10:29:43 -08:00
Ken Hibino
22e6c9d297 Delete "pending_since" under task-key when state changes to active 2021-12-11 10:29:43 -08:00
Ken Hibino
99a6750656 Add Latency field to QueueInfo 2021-12-11 10:29:43 -08:00
Ken Hibino
e7c1c3ad6f Use clock in RDB 2021-12-11 10:29:43 -08:00
Ken Hibino
c9183374c5 Add internal timeutil package 2021-12-11 10:29:43 -08:00
Ken Hibino
6e7106c8f2 Record time when task moved to pending state 2021-12-11 10:29:43 -08:00
Ken Hibino
9f2c321e98
Add EnqueueContext method to Client 2021-11-15 16:34:26 -08:00
Ken Hibino
e2b61c9056 Return error if Unique TTL is less than 1s 2021-11-09 16:37:02 -08:00
Ken Hibino
531d1ef089 Fix godoc around errors returned from Inspector 2021-11-09 15:45:20 -08:00
Ken Hibino
413afc2ab6 v0.19.0 2021-11-06 15:20:09 -07:00
Ken Hibino
6bb4818509 Update readme 2021-11-06 15:18:42 -07:00
Ken Hibino
f4ddac4dcc Introduce Task Results
* Added Retention Option to specify retention TTL for tasks
* Added ResultWriter as a client interface to write result data for the associated task
2021-11-06 15:18:42 -07:00
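
Editor's note: a sketch of the Task Results API introduced here, together with the TaskID option from the same series (payloads and ids are illustrative):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

// handleResize writes result data; with a Retention option set, the result
// stays readable via the Inspector for the retention window.
func handleResize(ctx context.Context, t *asynq.Task) error {
	_, err := t.ResultWriter().Write([]byte(`{"status":"ok"}`))
	return err
}

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	info, err := client.Enqueue(
		asynq.NewTask("image:resize", []byte(`{"src":"a.png"}`)),
		asynq.Retention(24*time.Hour),      // keep the completed task for a day
		asynq.TaskID("image:resize:a.png"), // caller-chosen id; duplicates conflict
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("enqueued %s in state %v", info.ID, info.State)
}
```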
Ken Hibino
4638405cbd Fix flaky test 2021-11-06 15:18:42 -07:00
Ken Hibino
9e2f88c00d Add TaskID option to allow user to specify task id 2021-11-06 15:18:42 -07:00
Ken Hibino
dbdd9c6d5f Update RDB Enqueue and Schedule methods to check for task ID conflict 2021-11-06 15:18:42 -07:00
Ken Hibino
2261c7c9a0 Change TaskMessage.ID type from uuid.UUID to string 2021-11-06 15:18:42 -07:00
Ken Hibino
83cae4bb24 Update NewTask function to take Option as varargs 2021-11-06 15:18:42 -07:00
Ajat Prabha
23c522dc9f
Add asynq/x/rate package
- Added a directory /x for external, experimental packages
- Added a `rate` package to enable rate limiting across multiple asynq worker servers
2021-11-03 15:55:23 -07:00
Ken Hibino
0d2c0f612b
Add FUNDING.yml 2021-10-03 09:25:35 -07:00
Ken Hibino
d612a8a9e4 v0.18.6 2021-10-03 05:55:49 -07:00
Jason White
b3ef9e91a9
Upgrade go-redis/redis to version 8 2021-09-02 05:56:02 -07:00
Ken Hibino
05534c6f24 v0.18.5 2021-09-01 06:02:49 -07:00
Ken Hibino
f0db219f6a
Add IsFailure to Config
With this IsFailure config, users can provide a predicate function to 
determine whether the error returned from Handler counts as a failure.
2021-09-01 06:00:54 -07:00
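
Editor's note: a usage sketch for IsFailure — the predicate decides whether a handler error counts toward failure stats (the rate-limit example is illustrative):

```go
package main

import (
	"context"
	"errors"
	"log"

	"github.com/hibiken/asynq"
)

var errRateLimited = errors.New("rate limited")

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		// Rate-limit errors still trigger a retry but are not recorded
		// as failures.
		IsFailure: func(err error) bool {
			return !errors.Is(err, errRateLimited)
		},
	})
	mux := asynq.NewServeMux()
	mux.HandleFunc("api:call", func(ctx context.Context, t *asynq.Task) error {
		return errRateLimited
	})
	if err := srv.Run(mux); err != nil {
		log.Fatal(err)
	}
}
```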
Mehran Poursadeghi
3ae0e7f528
Fix readme 2021-08-27 14:47:03 -07:00
Ken Hibino
421dc584ff v0.18.4 2021-08-17 17:12:33 -07:00
Ken Hibino
cfd1a1dfe8 Make scheduler methods thread-safe 2021-08-17 17:10:53 -07:00
Ken Hibino
c197902dc0 v0.18.3 2021-08-09 08:59:35 -07:00
Ken Hibino
e6355bf3f5 Use approximate memory usage for QueueInfo 2021-08-09 08:58:44 -07:00
Luqqk
95c90a5cb8 Add changelog entry, add additional test case 2021-08-02 20:20:09 -07:00
Luqqk
6817af366a Adjust error message, use TrimSpace for more robust empty typename check 2021-08-02 20:20:09 -07:00
Luqqk
4bce28d677 client.Enqueue - prevent empty task's typename 2021-08-02 20:20:09 -07:00
Pedro Henrique
73f930313c Fixes links 2021-07-29 17:15:27 -07:00
Ken Hibino
bff2a05d59 Fix examples in readme 2021-07-18 09:28:43 -07:00
Ken Hibino
684a7e0c98 v0.18.2 2021-07-15 06:56:53 -07:00
Ken Hibino
46b23d6495 Allow upper case characters in queue name 2021-07-15 06:55:47 -07:00
Ken Hibino
c0ae62499f v0.18.1 2021-07-04 06:39:54 -07:00
Ken Hibino
7744ade362 Update changelog 2021-07-04 06:38:36 -07:00
Ken Hibino
f532c95394 Update recoverer to recover tasks on server startup 2021-07-04 06:38:36 -07:00
Ken Hibino
ff6768f9bb Fix recoverer to run task recovering logic every minute 2021-07-04 06:38:36 -07:00
Ken Hibino
d5e9f3b1bd Update readme 2021-06-30 06:26:14 -07:00
Ken Hibino
d02b722d8a v0.18.0 2021-06-29 16:36:52 -07:00
Ken Hibino
99c7ebeef2 Add migration command in CLI 2021-06-29 16:34:21 -07:00
Ken Hibino
bf54621196 Update example code in README 2021-06-29 16:34:21 -07:00
Ken Hibino
27baf6de0d Fix error in readme 2021-06-29 16:34:21 -07:00
Ken Hibino
1bd0bee1e5 Fix CLI build 2021-06-29 16:34:21 -07:00
Ken Hibino
a9feec5967 Change TaskInfo to use public fields instead of methods 2021-06-29 16:34:21 -07:00
Ken Hibino
e01c6379c8 Fix lua script for redis-cluster mode 2021-06-29 16:34:21 -07:00
Ken Hibino
a0df047f71 Use md5 to generate checksum for unique key 2021-06-29 16:34:21 -07:00
Ken Hibino
68dd6d9a9d (fix): Clear unique lock when task is deleted via Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
6cce31a134 Fix recoverer test 2021-06-29 16:34:21 -07:00
Ken Hibino
f9d7af3def Update ProcessorRetry test 2021-06-29 16:34:21 -07:00
Ken Hibino
b0321fb465 Format payload bytes in CLI output 2021-06-29 16:34:21 -07:00
Ken Hibino
7776c7ae53 Rename cli subcommand to not to use dash 2021-06-29 16:34:21 -07:00
Ken Hibino
709ca79a2b Add task inspect command 2021-06-29 16:34:21 -07:00
Ken Hibino
08d8f0b37c Add String method to TaskState 2021-06-29 16:34:21 -07:00
Ken Hibino
385323b679 Minor fix in queue command 2021-06-29 16:34:21 -07:00
Ken Hibino
77604af265 Fix asynq CLI build 2021-06-29 16:34:21 -07:00
Ken Hibino
4765742e8a Add Inspector.GetTaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
68839dc9d3 Fix lua scripts for redis cluster 2021-06-29 16:34:21 -07:00
Ken Hibino
8922d2423a Define RDB.GetTaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
b358de907e Rename Inspector.CurrentStats to GetQueueInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
8ee1825e67 Rename Inspector.CancelActiveTask to CancelProcessing 2021-06-29 16:34:21 -07:00
Ken Hibino
c8bda26bed Make NodeCluster fields read-only 2021-06-29 16:34:21 -07:00
Ken Hibino
8aeeb61c9d Misc cleanup 2021-06-29 16:34:21 -07:00
Ken Hibino
96c51fdc23 Update WorkerInfo and remove unnecessary types 2021-06-29 16:34:21 -07:00
Ken Hibino
ea9086fd8b Update Inspector.List*Task methods to return ErrQueueNotFound 2021-06-29 16:34:21 -07:00
Ken Hibino
e63d51da0c Update Inspector.ListArchivedTasks 2021-06-29 16:34:21 -07:00
Ken Hibino
cd351d49b9 Add LastFailedAt to TaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
87264b66f3 Record last_failed_at time on Retry or Archive event 2021-06-29 16:34:21 -07:00
Ken Hibino
62168b8d0d Add LastFailedAt field to TaskMessage 2021-06-29 16:34:21 -07:00
Ken Hibino
840f7245b1 Update List methods (expect for ListArchived) 2021-06-29 16:34:21 -07:00
Ken Hibino
12f4c7cf6e Move inspeq package content to asynq package 2021-06-29 16:34:21 -07:00
Ken Hibino
0ec3b55e6b Replace ArchiveTaskByKey with ArchiveTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
4bcc5ab6aa Replace DeleteTaskByKey with DeleteTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
456edb6b71 Replace RunTaskByKey with RunTask in Inspector 2021-06-29 16:34:21 -07:00
Ken Hibino
b835090ad8 Update Client.Enqueue to return TaskInfo 2021-06-29 16:34:21 -07:00
Ken Hibino
09cbea66f6 Define TaskInfo type 2021-06-29 16:34:21 -07:00
Ken Hibino
b9c2572203 Refactor redis keys and store messages in protobuf
Changes:
- Task messages are stored under the "asynq:{<qname>}:t:<task_id>" key in redis; the value is a HASH whose "msg" field holds the message. The hash also stores "deadline" and "timeout".
- Redis LIST and ZSET store task message IDs
- Task messages are serialized using protocol buffers
2021-06-29 16:34:21 -07:00
Ken Hibino
0bf767cf21 Add TaskState type to base package 2021-06-29 16:34:21 -07:00
Ken Hibino
1812d05d21 Fix build 2021-06-29 16:34:21 -07:00
Ken Hibino
4af65d5fa5 Update RDB methods with new errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
a19ad19382 Update RDB.Dequeue with new errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
8117ce8972 Minor fixes 2021-06-29 16:34:21 -07:00
Ken Hibino
d98ecdebb4 Update RDB.EnqueueUnique and RDB.ScheduleUnique with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
ffe9aa74b3 Add errors.RedisCommandError type 2021-06-29 16:34:21 -07:00
Ken Hibino
d2d4029aba Update RDB.CurrentStats and RDB.HistoricalStats with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
76bd865ebc Update RDB.RemoveQueue with specific error types 2021-06-29 16:34:21 -07:00
Ken Hibino
136d1c9ea9 Update rdb.List* methods with specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
52e04355d3 Return QueueNotFoundError from DeleteAll* methods 2021-06-29 16:34:21 -07:00
Ken Hibino
cde3e57c6c Update RDB.RunAll* methods with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
dd66acef1b Return QueueNotFoundError from ArchiveAll* methods 2021-06-29 16:34:21 -07:00
Ken Hibino
30a3d9641a Update tests for RDB.DeleteTask and RDB.ArchiveTask 2021-06-29 16:34:21 -07:00
Ken Hibino
961582cba6 Update RDB.RunTask with more specific errors 2021-06-29 16:34:21 -07:00
Ken Hibino
430dbb298e Update RDB.DeleteTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
675826be5f Update RDB.ArchiveAll methods with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
62f4e46b73 Update RDB.ArchiveAllPendingTasks with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
a500f8a534 Reorganize test for RDB.ArchiveTask 2021-06-29 16:34:21 -07:00
Ken Hibino
bcfeff38ed Update errors package with detailed comments 2021-06-29 16:34:21 -07:00
Ken Hibino
12a90f6a8d Update RDB.ArchiveTask with custom errors 2021-06-29 16:34:21 -07:00
Ken Hibino
807624e7dd Create internal errors package 2021-06-29 16:34:21 -07:00
Ken Hibino
4d65024bd7 Update rdb.ArchiveTask with more specific error types 2021-06-29 16:34:21 -07:00
Ken Hibino
76486b5cb4 Rename error types 2021-06-29 16:34:21 -07:00
Ken Hibino
1db516c53c Add a list of canonical errors in base package 2021-06-29 16:34:21 -07:00
Ken Hibino
cb5bdf245c Update RDB.ArchiveTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
267493ccef Update RDB.RunTask with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
5d7f1b6a80 Update RDB.Requeue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
77ded502ab Update RDB.Retry, RDB.Archive with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
f2284be43d Update RDB.Dequeue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
3cadab55cb Update RDB.ForwardIfReady with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
298a420f9f Update RDB.ScheduleUnique with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
b1d717c842 Update RDB.Schedule with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
56e5762eea Update RDB.EnqueueUnique with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
5ec41e388b Update RDB.Enqueue with task state 2021-06-29 16:34:21 -07:00
Ken Hibino
9c95c41651 Change Server API
* Rename ServerStatus to ServerState internally

* Rename terminate to shutdown internally

* Update Scheduler API to match Server API
2021-06-29 16:34:21 -07:00
Ken Hibino
476812475e Change payload to byte slice 2021-06-29 16:34:21 -07:00
Ken Hibino
7af3981929 Refactor redis keys and store messages in protobuf
Changes:
- Task messages are stored under the "asynq:{<qname>}:t:<task_id>" key in redis; the value is a HASH whose "msg" field holds the message. The hash also stores "deadline" and "timeout".
- Redis LIST and ZSET store task message IDs
- Task messages are serialized using protocol buffers
2021-06-29 16:34:21 -07:00
Ken Hibino
2516c4baba v0.17.2 2021-06-06 06:51:30 -07:00
Ken Hibino
ebe482a65c Free uniqueness lock when task is deleted 2021-06-06 06:48:59 -07:00
Vic Shóstak
3e9fc2f972
Update README 2021-04-28 10:25:34 -07:00
Vic Shóstak
63ce9ed0f9
Update README with a new logo 2021-04-14 10:21:47 -07:00
Ken Hibino
32d3f329b9 v0.17.1 2021-04-04 12:51:00 -07:00
Ken Hibino
544c301a8b Fix bug in RDB.memoryUsage 2021-04-04 12:49:19 -07:00
Ken Hibino
8b997d2fab v0.17.0 2021-03-24 16:51:59 -07:00
Ken Hibino
901105a8d7 Add dial, read, write timeout options to RedisConnOpt 2021-03-24 16:49:04 -07:00
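
Editor's note: a usage sketch for the new connection options; zero values fall back to the underlying redis client defaults:

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{
		Addr:         "localhost:6379",
		DialTimeout:  5 * time.Second,
		ReadTimeout:  3 * time.Second,
		WriteTimeout: 3 * time.Second,
	})
	defer client.Close()

	if _, err := client.Enqueue(asynq.NewTask("ping", nil)); err != nil {
		log.Fatal(err)
	}
}
```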
Ken Hibino
aaa3f1d4fd v0.16.1 2021-03-20 06:27:03 -07:00
disc
4722ca2d3d Replaced the blocking KEYS XXX:* command with non-blocking SCAN XXX:*
More details: https://redis.io/commands/KEYS
2021-03-20 06:24:08 -07:00
Ken Hibino
6a9d9fd717 v0.16.0 2021-03-10 20:39:46 -08:00
Ken Hibino
de28c1ea19 Add Unregister method to Scheduler 2021-03-10 20:38:44 -08:00
Ken Hibino
f618f5b1f5 Add benchmark tests for rdb package 2021-03-07 16:27:14 -08:00
Ken Hibino
aa936466b3 Minor fix 2021-03-07 16:27:14 -08:00
Ken Hibino
5d1ec70544 Run CI build with go1.16 2021-02-25 09:31:17 -08:00
Ken Hibino
d1d3be9b00 Add Web UI section in README 2021-02-01 17:01:04 -08:00
Ken Hibino
bc77f6fe14 v0.15.0 2021-01-31 06:11:17 -08:00
Ken Hibino
efe197a47b Use db13 for inspeq package testing 2021-01-31 06:09:40 -08:00
Ken Hibino
97b5516183 Update RedisConnOpt interface 2021-01-31 06:09:40 -08:00
Ken Hibino
8eafa03ca7 Fix doc indentation 2021-01-31 06:09:40 -08:00
Ken Hibino
430b01c9aa Fix CLI build 2021-01-31 06:09:40 -08:00
Ken Hibino
14c381dc40 Update documentation for inspeq package 2021-01-31 06:09:40 -08:00
Ken Hibino
e13122723a Move all inspector related code to subpackage inspeq 2021-01-31 06:09:40 -08:00
Ken Hibino
eba7c4e085 Record deadline within WorkerInfo 2021-01-31 06:09:40 -08:00
Ken Hibino
bfde0b6283 Add Retry and LastError fields to inspector tasks 2021-01-31 06:09:40 -08:00
Ken Hibino
afde6a7266 Add MemoryUsage field to QueueStats 2021-01-31 06:09:40 -08:00
Ken Hibino
6529a1e0b1 Fix scheduler
* Delete scheduler history data when scheduler stops

* Fix history trimming bug
2021-01-31 06:09:40 -08:00
Ken Hibino
c9a6ab8ae1 Support delete and archive actions on PendingTask
* Add `DeleteAllPendingTasks`, `ArchiveAllPendingTasks` to `Inspector`

* `DeleteTaskByKey` and `ArchiveTaskByKey` now support deleting/archiving PendingTask

* Updated `asynq task` command with support for deleting/archiving pending tasks
2021-01-31 06:09:40 -08:00
Ken Hibino
557c1a5044 Remove Travis CI files 2021-01-29 23:01:20 -08:00
Ken Hibino
0236eb9a1c
Add benchstat workflow 2021-01-29 22:59:28 -08:00
Ken Hibino
3c2b2cf4a3 Update build status badge 2021-01-29 15:03:27 -08:00
Ken Hibino
04df71198d Create go github action 2021-01-29 14:52:55 -08:00
Ken Hibino
2884044e75 v0.14.1 2021-01-19 06:22:54 -08:00
Ken Hibino
3719fad396 Update asynq version in go.mod for toolings 2021-01-19 06:20:39 -08:00
Ken Hibino
42c7ac0746 v0.14.0 2021-01-14 06:49:36 -08:00
Ken Hibino
d331ff055d Minor doc fixes 2021-01-14 06:43:44 -08:00
Ken Hibino
ccb682853e Export DefaultRetryDelayFunc 2021-01-14 06:43:44 -08:00
Ken Hibino
7c3ad9e45c Update CHANGELOG 2021-01-14 06:43:44 -08:00
Ken Hibino
ea23db4f6b Update migrate command to move all dead tasks to the new archived zset 2021-01-14 06:43:44 -08:00
Ken Hibino
00a25ca570 Rename DeadTask to ArchivedTask and action "kill" to "archive" 2021-01-14 06:43:44 -08:00
Ken Hibino
7235041128 Add SkipRetry error to be used as a return value from Handler 2021-01-14 06:43:44 -08:00
Ken Hibino
a150d18ed7 Include file and line number info in the error generated from a panic 2021-01-14 06:43:44 -08:00
Ken Hibino
0712e90f23 Print stack trace when recovering from a panic in processor 2021-01-14 06:43:44 -08:00
Ken Hibino
c5100a9c23 Add a method to list running servers to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
196d66f221 Fix ListSchedulerEnqueueEvents to list recent events first 2021-01-14 06:43:44 -08:00
Ken Hibino
38509e309f Update cron history command to accept pagination options 2021-01-14 06:43:44 -08:00
Ken Hibino
f4dd8fe962 Add ListSchedulerEnqueueEvents to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
c06e9de97d Add CancelActiveTask method to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
52d536a8f5 Update changelog 2021-01-14 06:43:44 -08:00
Ken Hibino
f9c0673116 Add SchedulerEntries method to Inspector 2021-01-14 06:43:44 -08:00
Ken Hibino
b604d25937 Add helper function to parse Option string 2021-01-14 06:43:44 -08:00
Ken Hibino
dfdf530a24 Fix cron history command usage string 2021-01-14 06:43:44 -08:00
Ken Hibino
e9239260ae Add DeleteQueue method to Inspector
- Added ErrQueueNotFound and ErrQueueNotEmpty types to indicate the kind
  of error returned from the method.
2021-01-14 06:43:44 -08:00
Bojan Zivanovic
8f9d5a3352 When a scheduler enqueues a task, log to DEBUG, not INFO. Fixes #223. 2021-01-13 15:49:56 -08:00
MinJae Kwon
c4dc993241
fix: resolve go vet lint 2020-12-20 06:09:51 -08:00
MinJae Kwon
37dfd746d4
fix: syntax error in readme example 2020-12-17 06:05:16 -08:00
Ken Hibino
8d6e4167ab Fix a typo in readme 2020-11-25 06:11:55 -08:00
Ken Hibino
476862dd7b v0.13.1 2020-11-22 12:26:52 -08:00
Ken Hibino
dcd873fa2a fix: Wait for specified time duration before shutdown 2020-11-22 12:25:27 -08:00
strobus
2604bb2192 add tls support to command line tool 2020-10-14 15:13:05 -07:00
Ken Hibino
942345ee80 v0.13.0 2020-10-13 06:33:47 -07:00
Ken Hibino
1f059eeee1 Update docs for periodic tasks feature 2020-10-13 06:31:47 -07:00
Ken Hibino
4ae73abdaa Minor update to asynq cron command 2020-10-13 06:31:47 -07:00
Ken Hibino
96b2318300 Add EnqueueErrorHandler option to SchedulerOpts 2020-10-13 06:31:47 -07:00
Ken Hibino
8312515e64 Update Option interface
- Added `String()`, `Type()`, and `Value()` methods to the interface to
  aid with debugging and error handling.
2020-10-13 06:31:47 -07:00
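Inferred from the commit message, the expanded interface has roughly this shape (a sketch only; `OptionType` stands in for whatever concrete type the library uses):
```go
package sketch

// OptionType is an assumed name for the kind-of-option enum.
type OptionType int

// Option sketches the expanded interface described in the commit above.
type Option interface {
	// String returns a string representation of the option, for debugging.
	String() string
	// Type describes the kind of option (e.g. max-retry, queue, timeout).
	Type() OptionType
	// Value returns the underlying value used to create this option.
	Value() interface{}
}
```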
Ken Hibino
50e7f38365 Add Scheduler
- Renamed the component previously called scheduler to forwarder to resolve
  the name conflict
2020-10-13 06:31:47 -07:00
Ken Hibino
fadcae76d6 Add String and MarshalJSON methods to Payload type 2020-09-20 07:33:23 -07:00
Ken Hibino
a2d4ead989 Fix comments in Config 2020-09-14 21:48:05 -07:00
Ken Hibino
82b6828f43 Replace benchcmp with benchstat 2020-09-14 06:59:55 -07:00
Ken Hibino
3114987428 v0.12.0 2020-09-12 13:34:27 -07:00
Ken Hibino
1ee3b10104 Update changelog 2020-09-12 12:59:03 -07:00
Ken Hibino
6d720d6a05 Update demo.gif for CLI demo 2020-09-12 12:59:03 -07:00
Ken Hibino
3e6981170d Use color package to bold fonts in CLI output 2020-09-12 12:59:03 -07:00
Ken Hibino
a9aa480551 Update migrate command 2020-09-12 12:59:03 -07:00
Ken Hibino
9d41de795a Mention about testing using redis cluster in CONTRIBUTING.md 2020-09-12 12:59:03 -07:00
Ken Hibino
c43fb21a0a Minor test updates 2020-09-12 12:59:03 -07:00
Ken Hibino
a293efcdab Add Close to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
69d7ec725a Close redis client after each test run 2020-09-12 12:59:03 -07:00
Ken Hibino
450a9aa1e2 Add MaxRedirects field in RedisClusterClientOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
6e294a7013 Add Username field to RedisConnOpt 2020-09-12 12:59:03 -07:00
Ken Hibino
c26b7469bd Display cluster info in stats command when --cluster flag is passed 2020-09-12 12:59:03 -07:00
Ken Hibino
818c2d6f35 Add GetQueueName helper to extract queue name from context 2020-09-12 12:59:03 -07:00
Ken Hibino
e09870a08a Update package documentation 2020-09-12 12:59:03 -07:00
Ken Hibino
ac3d5b126a Update README 2020-09-12 12:59:03 -07:00
Ken Hibino
29e542e591 Rename Enqueue methods in Inspector to Run 2020-09-12 12:59:03 -07:00
Ken Hibino
a891ce5568 Rename InProgress to Active 2020-09-12 12:59:03 -07:00
Ken Hibino
ebe3c4083f Rename NextEnqueueAt to NextProcessAt 2020-09-12 12:59:03 -07:00
Ken Hibino
c8c47fcbf0 Rename Enqueued to Pending 2020-09-12 12:59:03 -07:00
Ken Hibino
cca680a7fd Change Client.Enqueue to take ProcessAt and ProcessIn as Option 2020-09-12 12:59:03 -07:00
Ken Hibino
8076b5ae50 Use different redis db number for rdb package tests 2020-09-12 12:59:03 -07:00
Ken Hibino
a42c174dae Display cluster keyslot and nodes in queueList command 2020-09-12 12:59:03 -07:00
Ken Hibino
a88325cb96 Add ClusterNodes and ClusterKeySlot in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
eb739a0258 Fix flaky test 2020-09-12 12:59:03 -07:00
Ken Hibino
a9c31553b8 Add redis-cluster support in asynq CLI 2020-09-12 12:59:03 -07:00
Ken Hibino
dab8295883 Validate queue name in Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
131ac823fd Return error if queue name is empty when enqueueing 2020-09-12 12:59:03 -07:00
Ken Hibino
4897dba397 Upgrade redis client lib to v7.4.0 2020-09-12 12:59:03 -07:00
Ken Hibino
6b96459881 Add test flags to run tests using redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
572eb338d5 Fix flaky ProcessorRetry test 2020-09-12 12:59:03 -07:00
Ken Hibino
27f4027447 Add RedisClusterClientOpt to connect to redis cluster 2020-09-12 12:59:03 -07:00
Ken Hibino
ee1afd12f5 Fix done lua script
If UniqueKey is an empty string, do not provide the key to the Lua script,
because that would cause a CROSSSLOT error in redis cluster (since the
empty key doesn't carry a hash tag).
2020-09-12 12:59:03 -07:00
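The guard amounts to building the KEYS list conditionally. A self-contained sketch of the idea (key names and the script body are simplified stand-ins, not the library's actual done script):
```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

// A trivial stand-in script; the real done script updates several
// queue-scoped structures.
var doneCmd = redis.NewScript(`return #KEYS`)

func done(ctx context.Context, rdb redis.UniversalClient, qname, id, uniqueKey string) error {
	keys := []string{
		"asynq:{" + qname + "}:active",
		"asynq:{" + qname + "}:t:" + id,
	}
	// Only pass the unique-lock key when it is set: an empty key carries no
	// {<qname>} hash tag, so it hashes to a different slot and the script
	// would fail with CROSSSLOT on Redis Cluster.
	if uniqueKey != "" {
		keys = append(keys, uniqueKey)
	}
	return doneCmd.Run(ctx, rdb, keys).Err()
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	fmt.Println(done(context.Background(), rdb, "default", "task123", ""))
}
```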
Ken Hibino
3ac548e97c Fix dequeue Lua script to use a single hash tag 2020-09-12 12:59:03 -07:00
Ken Hibino
f38f94b947 Restructure CLI commands with subcommands 2020-09-12 12:59:03 -07:00
Ken Hibino
d6f389e63f Add Queues method to Inspector 2020-09-12 12:59:03 -07:00
Ken Hibino
118ef27bf2 Update RemoveQueue in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
fad0696828 Fix errors in inspector tests 2020-09-12 12:59:03 -07:00
Ken Hibino
4037b41479 Fix client tests 2020-09-12 12:59:03 -07:00
Ken Hibino
96f23d88cd Add more processor tests 2020-09-12 12:59:03 -07:00
Ken Hibino
83bdca5220 Fix test build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2f226dfb84 Update ListServers and ListWorkers methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
3f26122ac0 Fix more build errors 2020-09-12 12:59:03 -07:00
Ken Hibino
2a18181501 Fix inspector build error 2020-09-12 12:59:03 -07:00
Ken Hibino
aa2676bb57 Update Broker interface 2020-09-12 12:59:03 -07:00
Ken Hibino
9348a62691 Update Inspector API 2020-09-12 12:59:03 -07:00
Ken Hibino
f59de9ac56 Update all delete methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
996a6c0ead Update all kill methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
47e9ba4eba Update enqueue methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
dbf140a767 Update all list methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
5f82b4b365 Update HistoricalStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
44a3d177f0 Update Pause and Unpause methods in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
24b13bd865 Update CurrentStats method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
d25090c669 Add AllQueues method to RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
b5caefd663 Remove stale benchmark test 2020-09-12 12:59:03 -07:00
Ken Hibino
becd26479b Update WriteServerState and ClearServerState in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
4b81b91d3e Minor fix 2020-09-12 12:59:03 -07:00
Ken Hibino
8e23b865e9 Update recoverer 2020-09-12 12:59:03 -07:00
Ken Hibino
a873d488ee Update ListDeadlineExceeded in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
e0a8f1252a Update scheduler to check and enqueue for only the specified queues. 2020-09-12 12:59:03 -07:00
Ken Hibino
650d7fdbe9 Update CheckAndEnqueue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f6d504939e Update Requeue method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
74f08795f8 Update Kill method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
35b2b1782e Update Retry method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
f63dcce0c0 Update Done method in RDB 2020-09-12 12:59:03 -07:00
Ken Hibino
565f86ee4f Update Dequeue command in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
94aa878060 Update Enqueue and Schedule commands in rdb 2020-09-12 12:59:03 -07:00
Ken Hibino
50b6034bf9 Move unique key generator function to base 2020-09-12 12:59:03 -07:00
Ken Hibino
154113d0d0 Update base package to generate redis keys with hashtag 2020-09-12 12:59:03 -07:00
Ken Hibino
669c7995c4 Run CI builds using go v1.15.x 2020-09-02 06:34:58 -07:00
Ken Hibino
6d6a301379 v0.11.0 2020-07-28 22:46:41 -07:00
Ken Hibino
53f9475582 Update changelog 2020-07-28 22:45:57 -07:00
Ken Hibino
e8fdbc5a72 Fix history command 2020-07-28 22:45:57 -07:00
Ken Hibino
5f06c308f0 Add Pause and Unpause queue methods to Inspector 2020-07-28 22:45:57 -07:00
Ken Hibino
a913e6d73f Add healthchecker to check broker connection 2020-07-28 22:45:57 -07:00
Ken Hibino
6978e93080 Fix flaky test 2020-07-28 22:45:57 -07:00
Ken Hibino
92d77bbc6e Minor comment fix 2020-07-28 22:45:57 -07:00
Ken Hibino
a28f61f313 Add Inspector type 2020-07-28 22:45:57 -07:00
Ken Hibino
9bd3d8e19e v0.10.0 2020-07-06 05:53:56 -07:00
Ken Hibino
7382e2aeb8 Do not start worker goroutine for task already exceeded its deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
007fac8055 Invoke error handler when ctx.Done channel is closed 2020-07-06 05:48:31 -07:00
Ken Hibino
8d43fe407a Change ErrorHandler function signature 2020-07-06 05:48:31 -07:00
Ken Hibino
34b90ecc8a Return Result struct to caller of Enqueue 2020-07-06 05:48:31 -07:00
Ken Hibino
8b60e6a268 Replace github.com/rs/xid with github.com/google/uuid 2020-07-06 05:48:31 -07:00
Ken Hibino
486dcd799b Add version command to CLI 2020-07-06 05:48:31 -07:00
Ken Hibino
195f4603bb Add migrate command to CLI
The command converts all messages in redis to be compatible with asynq v0.10.0
2020-07-06 05:48:31 -07:00
Ken Hibino
2e2c9b9f6b Update docs 2020-07-06 05:48:31 -07:00
Ken Hibino
199bf4d66a Minor code cleanup 2020-07-06 05:48:31 -07:00
Ken Hibino
7e942ec241 Use int64 type for Timeout and Deadline in TaskMessage 2020-07-06 05:48:31 -07:00
Ken Hibino
379da8f7a2 Clean up processor test 2020-07-06 05:48:31 -07:00
Ken Hibino
feee87adda Add recoverer 2020-07-06 05:48:31 -07:00
Ken Hibino
7657f560ec Add RDB.ListDeadlineExceeded 2020-07-06 05:48:31 -07:00
Ken Hibino
7c7de0d8e0 Fix processor 2020-07-06 05:48:31 -07:00
Ken Hibino
83f1e20d74 Add deadline to syncRequest
- syncer will drop a request if its deadline has been exceeded
2020-07-06 05:48:31 -07:00
Ken Hibino
4e8ac151ae Update processor to adapt for deadlines set change
- Processor dequeues tasks only when it's available to process
- Processor retries a task when its context's Done channel is closed
2020-07-06 05:48:31 -07:00
Ken Hibino
08b71672aa Update RDB.Requeue to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
92af00f9fd Update RDB.Dequeue to return deadline as time.Time 2020-07-06 05:48:31 -07:00
Ken Hibino
113451ce6a Update RDB.Kill to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9cd9f3d6b4 Update RDB.Retry to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
7b9119c703 Update RDB.Done to remove message from deadlines set 2020-07-06 05:48:31 -07:00
Ken Hibino
9b05dea394 Update RDB.Dequeue to return message and deadline 2020-07-06 05:48:31 -07:00
Ken Hibino
6cc5bafaba Add task message to deadlines set on dequeue
Updated dequeueCmd to decode the message and compute its deadline and add
the message to the Deadline set.
2020-07-06 05:48:31 -07:00
Ken Hibino
716d3d987e Use a default timeout of 30 mins if neither timeout nor deadline is provided
2020-07-06 05:48:31 -07:00
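The fallback boils down to a few lines; a sketch under assumed names (not the library's actual internals):
```go
package main

import (
	"fmt"
	"time"
)

const defaultTimeout = 30 * time.Minute // assumed name, per the commit above

// resolveTimeout is an illustrative helper: when neither a timeout nor a
// deadline is provided, fall back to the 30-minute default so that every
// task's context stays bounded.
func resolveTimeout(timeout time.Duration, deadline time.Time) time.Duration {
	if timeout == 0 && deadline.IsZero() {
		return defaultTimeout
	}
	return timeout
}

func main() {
	fmt.Println(resolveTimeout(0, time.Time{})) // 30m0s
}
```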
Ken Hibino
0527b93432 Change TaskMessage Timeout and Deadline to int
* This change breaks existing tasks in Redis
2020-07-06 05:48:31 -07:00
Ken Hibino
5dddc35d7c Add redis key for deadlines in base package 2020-07-06 05:48:31 -07:00
121 changed files with 31775 additions and 6997 deletions

.github/FUNDING.yml (new file)
@@ -0,0 +1,4 @@
# These are supported funding model platforms
github: [hibiken]
open_collective: ken-hibino

.github/ISSUE_TEMPLATE/bug_report.md
@@ -3,13 +3,20 @@ name: Bug report
about: Create a report to help us improve
title: "[BUG] Description of the bug"
labels: bug
assignees: hibiken
assignees:
- hibiken
- kamikazechaser
---
**Describe the bug**
A clear and concise description of what the bug is.
**Environment (please complete the following information):**
- OS: [e.g. MacOS, Linux]
- `asynq` package version [e.g. v0.25.0]
- Redis/Valkey version
**To Reproduce**
Steps to reproduce the behavior (Code snippets if applicable):
1. Setup background processing ...
@@ -22,9 +29,5 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. MacOS, Linux]
- Version of `asynq` package [e.g. v1.0.0]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md
@@ -3,7 +3,9 @@ name: Feature request
about: Suggest an idea for this project
title: "[FEATURE REQUEST] Description of the feature request"
labels: enhancement
assignees: hibiken
assignees:
- hibiken
- kamikazechaser
---

.github/dependabot.yaml (new file)
@@ -0,0 +1,24 @@
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "gomod"
    directory: "/tools"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "gomod"
    directory: "/x"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

.github/workflows/benchstat.yml (new file)
@@ -0,0 +1,82 @@
# This workflow runs benchmarks against the current branch,
# compares them to benchmarks against master,
# and uploads the results as an artifact.
name: benchstat

on: [pull_request]

jobs:
  incoming:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 1.23.x
      - name: Benchmark
        run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a new.txt
      - name: Upload Benchmark
        uses: actions/upload-artifact@v4
        with:
          name: bench-incoming
          path: new.txt
  current:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: master
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 1.23.x
      - name: Benchmark
        run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a old.txt
      - name: Upload Benchmark
        uses: actions/upload-artifact@v4
        with:
          name: bench-current
          path: old.txt
  benchstat:
    needs: [incoming, current]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 1.23.x
      - name: Install benchstat
        run: go get -u golang.org/x/perf/cmd/benchstat
      - name: Download Incoming
        uses: actions/download-artifact@v4
        with:
          name: bench-incoming
      - name: Download Current
        uses: actions/download-artifact@v4
        with:
          name: bench-current
      - name: Benchstat Results
        run: benchstat old.txt new.txt | tee -a benchstat.txt
      - name: Upload benchstat results
        uses: actions/upload-artifact@v4
        with:
          name: benchstat
          path: benchstat.txt

.github/workflows/build.yml (new file)
@@ -0,0 +1,83 @@
name: build

on: [push, pull_request]

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest]
        go-version: [1.22.x, 1.23.x]
    runs-on: ${{ matrix.os }}
    services:
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
          cache: false
      - name: Build core module
        run: go build -v ./...
      - name: Build x module
        run: cd x && go build -v ./... && cd ..
      - name: Test core module
        run: go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
      - name: Test x module
        run: cd x && go test -race -v ./... && cd ..
      - name: Benchmark Test
        run: go test -run=^$ -bench=. -loglevel=debug ./...
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v5
  build-tool:
    strategy:
      matrix:
        os: [ubuntu-latest]
        go-version: [1.22.x, 1.23.x]
    runs-on: ${{ matrix.os }}
    services:
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go-version }}
          cache: false
      - name: Build tools module
        run: cd tools && go build -v ./... && cd ..
      - name: Test tools module
        run: cd tools && go test -race -v ./... && cd ..
  golangci:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v6
        with:
          version: v1.61

.gitignore
@@ -1,3 +1,4 @@
vendor
# Binaries for programs and plugins
*.exe
*.exe~
@@ -14,8 +15,13 @@
# Ignore examples for now
/examples
# Ignore command binary
# Ignore tool binaries
/tools/asynq/asynq
/tools/metrics_exporter/metrics_exporter
# Ignore asynq config file
.asynq.*
.asynq.*
# Ignore editor config files
.vscode
.idea

.travis.yml (deleted)
@@ -1,13 +0,0 @@
language: go
go_import_path: github.com/hibiken/asynq
git:
depth: 1
go: [1.13.x, 1.14.x]
script:
- go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
- go test -run=XXX -bench=. -loglevel=debug ./...
services:
- redis-server
after_success:
- bash ./.travis/benchcmp.sh
- bash <(curl -s https://codecov.io/bash)

.travis/benchcmp.sh (deleted)
@@ -1,18 +0,0 @@
if [ "${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}" != "master" ]; then
REMOTE_URL="$(git config --get remote.origin.url)";
cd ${TRAVIS_BUILD_DIR}/.. && \
git clone ${REMOTE_URL} "${TRAVIS_REPO_SLUG}-bench" && \
cd "${TRAVIS_REPO_SLUG}-bench" && \
# Benchmark master
git checkout master && \
go test -run=XXX -bench=. ./... > master.txt && \
# Benchmark feature branch
git checkout ${TRAVIS_COMMIT} && \
go test -run=XXX -bench=. ./... > feature.txt && \
# compare two benchmarks
go get -u golang.org/x/tools/cmd/benchcmp && \
benchcmp master.txt feature.txt;
fi

CHANGELOG.md
@@ -7,6 +7,398 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.25.1] - 2024-12-11
### Upgrades
* Some package dependencies are upgraded
### Added
* Add `HeartbeatInterval` option to the scheduler (PR: https://github.com/hibiken/asynq/pull/956)
* Add `RedisUniversalClient` support to periodic task manager (PR: https://github.com/hibiken/asynq/pull/958)
* Add `--insecure` flag to CLI dash command (PR: https://github.com/hibiken/asynq/pull/980)
* Add logging for registration errors (PR: https://github.com/hibiken/asynq/pull/657)
### Fixes
- Perf: Use string concatenation in place of fmt.Sprintf in the hot path (PR: https://github.com/hibiken/asynq/pull/962); see the sketch at the end of this section
- Perf: Init map with size (PR: https://github.com/hibiken/asynq/pull/673)
- Fix: `Scheduler` and `PeriodicTaskManager` graceful shutdown (PR: https://github.com/hibiken/asynq/pull/977)
- Fix: `Server` graceful shutdown on UNIX systems (PR: https://github.com/hibiken/asynq/pull/982)
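A compact illustration of the hot-path change referenced above (the key pattern shown is illustrative):
```go
package main

import "fmt"

// Building a redis key in a hot path: plain concatenation avoids the
// reflection and extra allocations that fmt.Sprintf incurs.
func taskKeySlow(qname, id string) string {
	return fmt.Sprintf("asynq:{%s}:t:%s", qname, id)
}

func taskKeyFast(qname, id string) string {
	return "asynq:{" + qname + "}:t:" + id
}

func main() {
	fmt.Println(taskKeySlow("default", "abc") == taskKeyFast("default", "abc")) // true
}
```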
## [0.25.0] - 2024-10-29
### Upgrades
- Minimum go version is set to 1.22 (PR: https://github.com/hibiken/asynq/pull/925)
- Internal protobuf package is upgraded to address security advisories (PR: https://github.com/hibiken/asynq/pull/925)
- Most packages are upgraded
- CI/CD spec upgraded
### Added
- `IsPanicError` function is introduced to support catching of panic errors when processing tasks (PR: https://github.com/hibiken/asynq/pull/491)
- `JanitorInterval` and `JanitorBatchSize` are added as Server options (PR: https://github.com/hibiken/asynq/pull/715)
- `NewClientFromRedisClient` is introduced to allow reusing an existing redis client (PR: https://github.com/hibiken/asynq/pull/742)
- `TaskCheckInterval` config option is added to specify the interval between checks for new tasks to process when all queues are empty (PR: https://github.com/hibiken/asynq/pull/694)
- `Ping` method is added to Client, Server and Scheduler (PR: https://github.com/hibiken/asynq/pull/585)
- `RevokeTask` error type is introduced to prevent a task from being retried or archived (PR: https://github.com/hibiken/asynq/pull/882)
- `SentinelUsername` is added as a redis config option (PR: https://github.com/hibiken/asynq/pull/924)
- Some jitter is introduced to improve latency when fetching jobs in the processor (PR: https://github.com/hibiken/asynq/pull/868)
- Add task enqueue command to the CLI (PR: https://github.com/hibiken/asynq/pull/918)
- Add a concurrency-safe map cache to keep track of queues, which ultimately reduces redis load when enqueuing tasks (PR: https://github.com/hibiken/asynq/pull/946); a minimal sketch of the idea follows this list
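Illustrative sketch of that caching idea (not the library's implementation): remember which queues have already been registered in redis and skip the write once seen.
```go
package main

import (
	"context"
	"sync"

	"github.com/redis/go-redis/v9"
)

// queueCache is an illustrative concurrency-safe set of queue names that
// have already been registered in redis, so the SADD is issued only once
// per queue per process instead of on every enqueue.
type queueCache struct {
	mu   sync.RWMutex
	seen map[string]struct{}
}

func newQueueCache() *queueCache {
	return &queueCache{seen: make(map[string]struct{})}
}

func (c *queueCache) ensure(ctx context.Context, rdb redis.UniversalClient, qname string) error {
	c.mu.RLock()
	_, ok := c.seen[qname]
	c.mu.RUnlock()
	if ok {
		return nil // already registered; skip the redis round trip
	}
	if err := rdb.SAdd(ctx, "asynq:queues", qname).Err(); err != nil {
		return err
	}
	c.mu.Lock()
	c.seen[qname] = struct{}{}
	c.mu.Unlock()
	return nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	cache := newQueueCache()
	_ = cache.ensure(context.Background(), rdb, "default")
	_ = cache.ensure(context.Background(), rdb, "default") // no-op, served from the cache
}
```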
### Fixes
- Archived tasks that are trimmed should now be deleted (PR: https://github.com/hibiken/asynq/pull/743)
- Fix lua script when listing task messages with an expired lease (PR: https://github.com/hibiken/asynq/pull/709)
- Fix potential context leaks due to cancellation not being called (PR: https://github.com/hibiken/asynq/pull/926)
- Misc documentation fixes
- Misc test fixes
## [0.24.1] - 2023-05-01
### Changed
- Updated package version dependency for go-redis
## [0.24.0] - 2023-01-02
### Added
- `PreEnqueueFunc`, `PostEnqueueFunc` is added in `Scheduler` and deprecated `EnqueueErrorHandler` (PR: https://github.com/hibiken/asynq/pull/476)
### Changed
- Removed error log when `Scheduler` failed to enqueue a task. Use `PostEnqueueFunc` to check for errors and take action if needed.
- Changed log level from ERROR to WARNING when `Scheduler` failed to record `SchedulerEnqueueEvent`.
## [0.23.0] - 2022-04-11
### Added
- `Group` option is introduced to enqueue a task in a group.
- `GroupAggregator` and related types are introduced for task aggregation feature.
- `GroupGracePeriod`, `GroupMaxSize`, `GroupMaxDelay`, and `GroupAggregator` fields are added to `Config`.
- `Inspector` has new methods related to "aggregating tasks".
- `Group` field is added to `TaskInfo`.
- (CLI): `group ls` command is added
- (CLI): `task ls` supports listing aggregating tasks via `--state=aggregating --group=<GROUP>` flags
- Enable rediss url parsing support
### Fixed
- Fixed overflow issue with 32-bit systems (For details, see https://github.com/hibiken/asynq/pull/426)
## [0.22.1] - 2022-02-20
### Fixed
- Fixed Redis version compatibility: Keep support for redis v4.0+
## [0.22.0] - 2022-02-19
### Added
- `BaseContext` is introduced in `Config` to specify callback hook to provide a base `context` from which `Handler` `context` is derived
- `IsOrphaned` field is added to `TaskInfo` to describe a task left in active state with no worker processing it.
### Changed
- `Server` now recovers tasks with an expired lease. Recovered tasks are retried/archived with `ErrLeaseExpired` error.
## [0.21.0] - 2022-01-22
### Added
- `PeriodicTaskManager` is added. Prefer using this over `Scheduler` as it has better support for dynamic periodic tasks.
- The `asynq stats` command now supports a `--json` option, making its output a JSON object
- Introduced new configuration for `DelayedTaskCheckInterval`. See [godoc](https://godoc.org/github.com/hibiken/asynq) for more details.
## [0.20.0] - 2021-12-19
### Added
- Package `x/metrics` is added.
- Tool `tools/metrics_exporter` binary is added.
- `ProcessedTotal` and `FailedTotal` fields were added to `QueueInfo` struct.
## [0.19.1] - 2021-12-12
### Added
- `Latency` field is added to `QueueInfo`.
- `EnqueueContext` method is added to `Client`.
### Fixed
- Fixed an error when a user passes a duration less than 1s to the `Unique` option
## [0.19.0] - 2021-11-06
### Changed
- `NewTask` takes `Option` as variadic argument
- Bumped minimum supported go version to 1.14 (i.e. go1.14 or higher is required).
### Added
- `Retention` option is added to allow user to specify task retention duration after completion.
- `TaskID` option is added to allow user to specify task ID.
- `ErrTaskIDConflict` sentinel error value is added.
- `ResultWriter` type is added and provided through `Task.ResultWriter` method.
- `TaskInfo` has new fields `CompletedAt`, `Result` and `Retention`.
### Removed
- `Client.SetDefaultOptions` is removed. Use `NewTask` instead to pass default options for tasks.
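A migration sketch for this removal (payload and option values are illustrative):
```go
package main

import (
	"encoding/json"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	payload, _ := json.Marshal(map[string]string{"src": "https://example.com/image.jpg"})

	// Default options now ride on NewTask (SetDefaultOptions is gone)...
	task := asynq.NewTask("image:resize", payload,
		asynq.MaxRetry(10), asynq.Queue("critical"))

	// ...and options passed at enqueue time still override those defaults.
	info, err := client.Enqueue(task, asynq.Queue("low"))
	if err != nil {
		log.Fatalf("could not enqueue task: %v", err)
	}
	log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
}
```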
## [0.18.6] - 2021-10-03
### Changed
- Updated `github.com/go-redis/redis` package to v8
## [0.18.5] - 2021-09-01
### Added
- `IsFailure` config option is added to determine whether error returned from Handler counts as a failure.
## [0.18.4] - 2021-08-17
### Fixed
- Scheduler methods are now thread-safe. It's now safe to call `Register` and `Unregister` concurrently.
## [0.18.3] - 2021-08-09
### Changed
- `Client.Enqueue` no longer enqueues tasks with an empty typename; an error is returned instead.
## [0.18.2] - 2021-07-15
### Changed
- Changed `Queue` function to not convert the provided queue name to lowercase. Queue names are now case-sensitive.
- `QueueInfo.MemoryUsage` is now an approximate usage value.
### Fixed
- Fixed latency issue around memory usage (see https://github.com/hibiken/asynq/issues/309).
## [0.18.1] - 2021-07-04
### Changed
- Changed to execute task recovering logic when the server starts up; previously it needed to wait up to a minute for the task recovering logic to execute.
### Fixed
- Fixed task recovering logic to execute every minute
## [0.18.0] - 2021-06-29
### Changed
- NewTask function now takes a byte slice as the payload.
- Task `Type` and `Payload` should be accessed by a method call.
- `Server` API has changed. Renamed `Quiet` to `Stop`. Renamed `Stop` to `Shutdown`. _Note:_ As a result of this renaming, the behavior of `Stop` has changed. Please update the existing code to call `Shutdown` where it used to call `Stop`.
- `Scheduler` API has changed. Renamed `Stop` to `Shutdown`.
- Requires redis v4.0+ for multiple field/value pair support
- `Client.Enqueue` now returns `TaskInfo`
- `Inspector.RunTaskByKey` is replaced with `Inspector.RunTask`
- `Inspector.DeleteTaskByKey` is replaced with `Inspector.DeleteTask`
- `Inspector.ArchiveTaskByKey` is replaced with `Inspector.ArchiveTask`
- `inspeq` package is removed. All types and functions from the package are moved to the `asynq` package.
- `WorkerInfo` field names have changed.
- `Inspector.CancelActiveTask` is renamed to `Inspector.CancelProcessing`
## [0.17.2] - 2021-06-06
### Fixed
- Free unique lock when task is deleted (https://github.com/hibiken/asynq/issues/275).
## [0.17.1] - 2021-04-04
### Fixed
- Fix bug in internal `RDB.memoryUsage` method.
## [0.17.0] - 2021-03-24
### Added
- `DialTimeout`, `ReadTimeout`, and `WriteTimeout` options are added to `RedisConnOpt`.
## [0.16.1] - 2021-03-20
### Fixed
- Replace `KEYS` command with `SCAN` as recommended by [redis doc](https://redis.io/commands/KEYS).
## [0.16.0] - 2021-03-10
### Added
- `Unregister` method is added to `Scheduler` to remove a registered entry.
## [0.15.0] - 2021-01-31
**IMPORTANT**: All `Inspector` related code is moved to subpackage "github.com/hibiken/asynq/inspeq"
### Changed
- `Inspector` related code is moved to subpackage "github.com/hibiken/asynq/inspeq".
- `RedisConnOpt` interface has changed slightly. If you have been passing `RedisClientOpt`, `RedisFailoverClientOpt`, or `RedisClusterClientOpt` as a pointer, update your code to pass them by value (see the sketch after this list).
- `ErrorMsg` field in `RetryTask` and `ArchivedTask` was renamed to `LastError`.
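A before/after sketch of the `RedisConnOpt` pointer-to-value change noted above:
```go
package main

import "github.com/hibiken/asynq"

func main() {
	// Before v0.15.0 a pointer was accepted:
	//   client := asynq.NewClient(&asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	// From v0.15.0, pass the struct by value:
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()
}
```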
### Added
- `MaxRetry`, `Retried`, `LastError` fields were added to all task types returned from `Inspector`.
- `MemoryUsage` field was added to `QueueStats`.
- `DeleteAllPendingTasks`, `ArchiveAllPendingTasks` were added to `Inspector`
- `DeleteTaskByKey` and `ArchiveTaskByKey` now support deleting/archiving `PendingTask`.
- asynq CLI now supports deleting/archiving pending tasks.
## [0.14.1] - 2021-01-19
### Fixed
- `go.mod` file for CLI
## [0.14.0] - 2021-01-14
**IMPORTANT**: Please run `asynq migrate` command to migrate from the previous versions.
### Changed
- Renamed `DeadTask` to `ArchivedTask`.
- Renamed the operation `Kill` to `Archive` in `Inspector`.
- Print stack trace when Handler panics.
- Include a file name and a line number in the error message when recovering from a panic.
### Added
- `DefaultRetryDelayFunc` is now a public API, which can be used in the custom `RetryDelayFunc`.
- `SkipRetry` error is added to be used as a return value from `Handler`.
- `Servers` method is added to `Inspector`
- `CancelActiveTask` method is added to `Inspector`.
- `ListSchedulerEnqueueEvents` method is added to `Inspector`.
- `SchedulerEntries` method is added to `Inspector`.
- `DeleteQueue` method is added to `Inspector`.
## [0.13.1] - 2020-11-22
### Fixed
- Fixed processor to wait for the specified time duration before forcefully shutting down workers.
## [0.13.0] - 2020-10-13
### Added
- `Scheduler` type is added to enable periodic tasks. See the godoc for its APIs and [wiki](https://github.com/hibiken/asynq/wiki/Periodic-Tasks) for the getting-started guide.
### Changed
- interface `Option` has changed. See the godoc for the new interface.
This change would have no impact as long as you are using exported functions (e.g. `MaxRetry`, `Queue`, etc)
to create `Option`s.
### Added
- `Payload.String() string` method is added
- `Payload.MarshalJSON() ([]byte, error)` method is added
## [0.12.0] - 2020-09-12
**IMPORTANT**: If you are upgrading from a previous version, please install the latest version of the CLI `go get -u github.com/hibiken/asynq/tools/asynq` and run `asynq migrate` command. No process should be writing to Redis while you run the migration command.
## The semantics of queue have changed
Previously, we called tasks that are ready to be processed _"Enqueued tasks"_, and other tasks that are scheduled to be processed in the future _"Scheduled tasks"_, etc.
We changed the semantics of _"Enqueue"_ slightly; all tasks that a client pushes to Redis are _Enqueued_ to a queue. Within a queue, tasks will transition from one state to another.
Possible task states are:
- `Pending`: task is ready to be processed (previously called "Enqueued")
- `Active`: task is currently being processed (previously called "InProgress")
- `Scheduled`: task is scheduled to be processed in the future
- `Retry`: task failed to be processed and will be retried again in the future
- `Dead`: task has exhausted all of its retries and is stored for manual inspection purposes
**This semantics change is reflected in the new `Inspector` API and CLI commands.**
---
### Changed
#### `Client`
Use the `ProcessIn` or `ProcessAt` option to schedule a task instead of `EnqueueIn` or `EnqueueAt`; a short migration sketch follows the table.
| Previously | v0.12.0 |
| --------------------------- | ------------------------------------------ |
| `client.EnqueueAt(t, task)` | `client.Enqueue(task, asynq.ProcessAt(t))` |
| `client.EnqueueIn(d, task)` | `client.Enqueue(task, asynq.ProcessIn(d))` |
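A minimal migration sketch (payload elided; written against the modern byte-slice `NewTask` signature, but the option usage is the point):
```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
	defer client.Close()

	task := asynq.NewTask("email:deliver", nil) // payload elided

	// Formerly client.EnqueueIn(24*time.Hour, task):
	if _, err := client.Enqueue(task, asynq.ProcessIn(24*time.Hour)); err != nil {
		log.Fatalf("could not schedule task: %v", err)
	}

	// Formerly client.EnqueueAt(t, task):
	if _, err := client.Enqueue(task, asynq.ProcessAt(time.Now().Add(time.Hour))); err != nil {
		log.Fatalf("could not schedule task: %v", err)
	}
}
```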
#### `Inspector`
All Inspector methods are scoped to a queue, and the methods take `qname (string)` as the first argument.
`EnqueuedTask` is renamed to `PendingTask`, along with its corresponding methods.
`InProgressTask` is renamed to `ActiveTask`, along with its corresponding methods.
The verb "Enqueue" in method names is replaced by "Run" (e.g. `EnqueueAllScheduledTasks` --> `RunAllScheduledTasks`).
#### `CLI`
CLI commands are restructured to use subcommands. To view details on any command, use `asynq help <command> <subcommand>`. Commands are organized into a few management commands:
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete kill run delete-all kill-all run-all]`
- `asynq server [ls]`
### Added
#### `RedisConnOpt`
- `RedisClusterClientOpt` is added to connect to Redis Cluster.
- `Username` field is added to all `RedisConnOpt` types in order to authenticate connection when Redis ACLs are used.
#### `Client`
- `ProcessIn(d time.Duration) Option` and `ProcessAt(t time.Time) Option` are added to replace `EnqueueIn` and `EnqueueAt` functionality.
#### `Inspector`
- `Queues() ([]string, error)` method is added to get all queue names.
- `ClusterKeySlot(qname string) (int64, error)` method is added to get queue's hash slot in Redis cluster.
- `ClusterNodes(qname string) ([]ClusterNode, error)` method is added to get a list of Redis cluster nodes for the given queue.
- `Close() error` method is added to close connection with redis.
### `Handler`
- `GetQueueName(ctx context.Context) (string, bool)` helper is added to extract queue name from a context.
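A sketch of the helper in use inside a handler (the task type name is illustrative):
```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

// An illustrative handler that branches on the queue a task came from.
func handle(ctx context.Context, t *asynq.Task) error {
	if qname, ok := asynq.GetQueueName(ctx); ok {
		log.Printf("processing a task from queue %q", qname)
	}
	return nil
}

func main() {
	mux := asynq.NewServeMux()
	mux.HandleFunc("email:deliver", handle)
}
```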
## [0.11.0] - 2020-07-28
### Added
- `Inspector` type was added to monitor and mutate state of queues and tasks.
- `HealthCheckFunc` and `HealthCheckInterval` fields were added to `Config` to allow user to specify a callback
function to check for broker connection.
## [0.10.0] - 2020-07-06
### Changed
- All tasks now require a timeout or deadline. By default, the timeout is set to 30 mins.
- Tasks that exceed their deadline are automatically retried.
- Encoding schema for task message has changed. Please install the latest CLI and run `migrate` command if
you have tasks enqueued with the previous version of asynq.
- API of `(*Client).Enqueue`, `(*Client).EnqueueIn`, and `(*Client).EnqueueAt` has changed to return a `*Result`.
- API of `ErrorHandler` has changed. It now takes context as the first argument and removed `retried`, `maxRetry` from the argument list.
Use `GetRetryCount` and/or `GetMaxRetry` to get the count values.
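A sketch of the new callback shape, using the `ErrorHandlerFunc` adapter (the address is illustrative):
```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"}, asynq.Config{
		// The v0.10.0 ErrorHandler takes a context first; retry counts are
		// read from the context instead of being passed as arguments.
		ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) {
			retried, _ := asynq.GetRetryCount(ctx)
			maxRetry, _ := asynq.GetMaxRetry(ctx)
			log.Printf("task failed: %v (retried %d/%d)", err, retried, maxRetry)
		}),
	})
	_ = srv
}
```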
## [0.9.4] - 2020-06-13
### Fixed
@@ -19,7 +411,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Fixes the JSON number overflow issue (https://github.com/hibiken/asynq/issues/166).
## [0.9.2] - 2020-06-08
### Added

CODE_OF_CONDUCT.md (new file)
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
ken.hibino7@gmail.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

CONTRIBUTING.md
@@ -38,13 +38,14 @@ Thank you! We'll try to respond as quickly as possible.
## Contributing Code
1. Fork this repo
2. Download your fork `git clone https://github.com/your-username/asynq && cd asynq`
2. Download your fork `git clone git@github.com:your-username/asynq.git && cd asynq`
3. Create your branch `git checkout -b your-branch-name`
4. Make and commit your changes
5. Push the branch `git push origin your-branch-name`
6. Create a new pull request
Please try to keep your pull request focused in scope and avoid including unrelated commits.
Please run tests against redis cluster locally with `--redis_cluster` flag to ensure that code works for Redis cluster. TODO: Run tests using Redis cluster on CI.
After you have submitted your pull request, we'll try to get back to you as soon as possible. We may suggest some changes or improvements.

Makefile (new file)
@@ -0,0 +1,11 @@
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))

proto: internal/proto/asynq.proto
	protoc -I=$(ROOT_DIR)/internal/proto \
		--go_out=$(ROOT_DIR)/internal/proto \
		--go_opt=module=github.com/hibiken/asynq/internal/proto \
		$(ROOT_DIR)/internal/proto/asynq.proto

.PHONY: lint
lint:
	golangci-lint run

README.md
@@ -1,143 +1,167 @@
# Asynq
<img src="https://user-images.githubusercontent.com/11155743/114697792-ffbfa580-9d26-11eb-8e5b-33bef69476dc.png" alt="Asynq logo" width="360px" />
# Simple, reliable & efficient distributed task queue in Go
[![Build Status](https://travis-ci.com/hibiken/asynq.svg?token=paqzfpSkF4p23s5Ux39b&branch=master)](https://travis-ci.com/hibiken/asynq)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
[![GoDoc](https://godoc.org/github.com/hibiken/asynq?status.svg)](https://godoc.org/github.com/hibiken/asynq)
[![Go Report Card](https://goreportcard.com/badge/github.com/hibiken/asynq)](https://goreportcard.com/report/github.com/hibiken/asynq)
![Build Status](https://github.com/hibiken/asynq/workflows/build/badge.svg)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Gitter chat](https://badges.gitter.im/go-asynq/gitter.svg)](https://gitter.im/go-asynq/community)
[![codecov](https://codecov.io/gh/hibiken/asynq/branch/master/graph/badge.svg)](https://codecov.io/gh/hibiken/asynq)
## Overview
Asynq is a Go library for queueing tasks and processing them in the background with workers. It is backed by Redis and it is designed to have a low barrier to entry. It should be integrated in your web stack easily.
Asynq is a Go library for queueing tasks and processing them asynchronously with workers. It's backed by [Redis](https://redis.io/) and is designed to be scalable yet easy to get started.
Highlevel overview of how Asynq works:
- Client puts task on a queue
- Server pulls task off queues and starts a worker goroutine for each task
- Client puts tasks on a queue
- Server pulls tasks off queues and starts a worker goroutine for each task
- Tasks are processed concurrently by multiple workers
Task queues are used as a mechanism to distribute work across multiple machines.
A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
Task queues are used as a mechanism to distribute work across multiple machines. A system can consist of multiple worker servers and brokers, giving way to high availability and horizontal scaling.
![Task Queue Diagram](/docs/assets/overview.png)
**Example use case**
## Stability and Compatibility
**Important Note**: Current major version is zero (v0.x.x) to accomodate rapid development and fast iteration while getting early feedback from users (Feedback on APIs are appreciated!). The public API could change without a major version update before v1.0.0 release.
**Status**: The library is currently undergoing heavy development with frequent, breaking API changes.
![Task Queue Diagram](https://user-images.githubusercontent.com/11155743/116358505-656f5f80-a806-11eb-9c16-94e49dab0f99.jpg)
## Features
- Guaranteed [at least one execution](https://www.cloudcomputingpatterns.org/at_least_once_delivery/) of a task
- Scheduling of tasks
- Durability since tasks are written to Redis
- [Retries](https://github.com/hibiken/asynq/wiki/Task-Retry) of failed tasks
- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#weighted-priority-queues)
- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Priority-Queues#strict-priority-queues)
- Automatic recovery of tasks in the event of a worker crash
- [Weighted priority queues](https://github.com/hibiken/asynq/wiki/Queue-Priority#weighted-priority)
- [Strict priority queues](https://github.com/hibiken/asynq/wiki/Queue-Priority#strict-priority)
- Low latency to add a task since writes are fast in Redis
- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- Allow [aggregating group of tasks](https://github.com/hibiken/asynq/wiki/Task-aggregation) to batch multiple successive operations
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for HA
- [Periodic Tasks](https://github.com/hibiken/asynq/wiki/Periodic-Tasks)
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for high availability
- Integration with [Prometheus](https://prometheus.io/) to collect and visualize queue metrics
- [Web UI](#web-ui) to inspect and remote-control queues and tasks
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Quickstart
## Stability and Compatibility
First, make sure you are running a Redis server locally.
**Status**: The library is relatively stable and is currently undergoing **moderate development** with less frequent breaking API changes.
> ☝️ **Important Note**: Current major version is zero (`v0.x.x`) to accommodate rapid development and fast iteration while getting early feedback from users (_feedback on APIs are appreciated!_). The public API could change without a major version update before `v1.0.0` release.
### Redis Cluster Compatibility
Some of the lua scripts in this library may not be compatible with Redis Cluster.
## Sponsoring
If you are using this package in production, **please consider sponsoring the project to show your support!**
## Quickstart
Make sure you have Go installed ([download](https://golang.org/dl/)). The **last two** Go versions are supported (See https://go.dev/dl).
Initialize your project by creating a folder and then running `go mod init github.com/your/repo` ([learn more](https://blog.golang.org/using-go-modules)) inside the folder. Then install Asynq library with the [`go get`](https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them) command:
```sh
$ redis-server
go get -u github.com/hibiken/asynq
```
Make sure you're running a Redis server locally or from a [Docker](https://hub.docker.com/_/redis) container. Version `4.0` or higher is required.
Next, write a package that encapsulates task creation and task handling.
```go
package tasks
import (
"context"
"encoding/json"
"fmt"
"log"
"time"
"github.com/hibiken/asynq"
)
// A list of task types.
const (
EmailDelivery = "email:deliver"
ImageProcessing = "image:process"
TypeEmailDelivery = "email:deliver"
TypeImageResize = "image:resize"
)
type EmailDeliveryPayload struct {
UserID int
TemplateID string
}
type ImageResizePayload struct {
SourceURL string
}
//----------------------------------------------
// Write a function NewXXXTask to create a task.
// A task consists of a type and a payload.
//----------------------------------------------
func NewEmailDeliveryTask(userID int, tmplID string) *asynq.Task {
payload := map[string]interface{}{"user_id": userID, "template_id": tmplID}
return asynq.NewTask(EmailDelivery, payload)
func NewEmailDeliveryTask(userID int, tmplID string) (*asynq.Task, error) {
payload, err := json.Marshal(EmailDeliveryPayload{UserID: userID, TemplateID: tmplID})
if err != nil {
return nil, err
}
return asynq.NewTask(TypeEmailDelivery, payload), nil
}
func NewImageProcessingTask(src, dst string) *asynq.Task {
payload := map[string]interface{}{"src": src, "dst": dst}
return asynq.NewTask(ImageProcessing, payload)
func NewImageResizeTask(src string) (*asynq.Task, error) {
payload, err := json.Marshal(ImageResizePayload{SourceURL: src})
if err != nil {
return nil, err
}
// task options can be passed to NewTask, which can be overridden at enqueue time.
return asynq.NewTask(TypeImageResize, payload, asynq.MaxRetry(5), asynq.Timeout(20 * time.Minute)), nil
}
//---------------------------------------------------------------
// Write a function HandleXXXTask to handle the input task.
// Note that it satisfies the asynq.HandlerFunc interface.
//
// Handler doesn't need to be a function. You can define a type
//
// Handler doesn't need to be a function. You can define a type
// that satisfies asynq.Handler interface. See examples below.
//---------------------------------------------------------------
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
userID, err := t.Payload.GetInt("user_id")
if err != nil {
return err
var p EmailDeliveryPayload
if err := json.Unmarshal(t.Payload(), &p); err != nil {
return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
}
tmplID, err := t.Payload.GetString("template_id")
if err != nil {
return err
}
fmt.Printf("Send Email to User: user_id = %d, template_id = %s\n", userID, tmplID)
// Email delivery logic ...
log.Printf("Sending Email to User: user_id=%d, template_id=%s", p.UserID, p.TemplateID)
// Email delivery code ...
return nil
}
// ImageProcessor implements asynq.Handler interface.
type ImageProcesser struct {
type ImageProcessor struct {
// ... fields for struct
}
func (p *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
src, err := t.Payload.GetString("src")
if err != nil {
return err
func (processor *ImageProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
var p ImageResizePayload
if err := json.Unmarshal(t.Payload(), &p); err != nil {
return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
}
dst, err := t.Payload.GetString("dst")
if err != nil {
return err
}
fmt.Printf("Process image: src = %s, dst = %s\n", src, dst)
// Image processing logic ...
log.Printf("Resizing image: src=%s", p.SourceURL)
// Image resizing code ...
return nil
}
func NewImageProcessor() *ImageProcessor {
// ... return an instance
return &ImageProcessor{}
}
```
In your web application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on the queue.
A task will be processed asynchronously by a background worker as soon as the task gets enqueued.
Scheduled tasks will be stored in Redis and will be enqueued at the specified time.
In your application code, import the above package and use [`Client`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Client) to put tasks on queues.
```go
package main
import (
"log"
"time"
"github.com/hibiken/asynq"
@@ -147,64 +171,57 @@ import (
const redisAddr = "127.0.0.1:6379"
func main() {
r := asynq.RedisClientOpt{Addr: redisAddr}
c := asynq.NewClient(r)
defer c.Close()
client := asynq.NewClient(asynq.RedisClientOpt{Addr: redisAddr})
defer client.Close()
// ------------------------------------------------------
// Example 1: Enqueue task to be processed immediately.
// Use (*Client).Enqueue method.
// ------------------------------------------------------
t := tasks.NewEmailDeliveryTask(42, "some:template:id")
err := c.Enqueue(t)
task, err := tasks.NewEmailDeliveryTask(42, "some:template:id")
if err != nil {
log.Fatal("could not enqueue task: %v", err)
log.Fatalf("could not create task: %v", err)
}
info, err := client.Enqueue(task)
if err != nil {
log.Fatalf("could not enqueue task: %v", err)
}
log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
// ------------------------------------------------------------
// Example 2: Schedule task to be processed in the future.
// Use (*Client).EnqueueIn or (*Client).EnqueueAt.
// Use ProcessIn or ProcessAt option.
// ------------------------------------------------------------
t = tasks.NewEmailDeliveryTask(42, "other:template:id")
err = c.EnqueueIn(24*time.Hour, t)
info, err = client.Enqueue(task, asynq.ProcessIn(24*time.Hour))
if err != nil {
log.Fatal("could not schedule task: %v", err)
log.Fatalf("could not schedule task: %v", err)
}
log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
// ----------------------------------------------------------------------------
// Example 3: Set options to tune task processing behavior.
// Example 3: Set other options to tune task processing behavior.
// Options include MaxRetry, Queue, Timeout, Deadline, Unique etc.
// ----------------------------------------------------------------------------
c.SetDefaultOptions(tasks.ImageProcessing, asynq.MaxRetry(10), asynq.Timeout(time.Minute))
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t)
task, err = tasks.NewImageResizeTask("https://example.com/myassets/image.jpg")
if err != nil {
log.Fatal("could not enqueue task: %v", err)
log.Fatalf("could not create task: %v", err)
}
// ---------------------------------------------------------------------------
// Example 4: Pass options to tune task processing behavior at enqueue time.
// Options passed at enqueue time override default ones, if any.
// ---------------------------------------------------------------------------
t = tasks.NewImageProcessingTask("some/blobstore/url", "other/blobstore/url")
err = c.Enqueue(t, asynq.Queue("critical"), asynq.Timeout(30*time.Second))
info, err = client.Enqueue(task, asynq.MaxRetry(10), asynq.Timeout(3 * time.Minute))
if err != nil {
log.Fatal("could not enqueue task: %v", err)
log.Fatalf("could not enqueue task: %v", err)
}
log.Printf("enqueued task: id=%s queue=%s", info.ID, info.Queue)
}
```
Next, create a worker server to process these tasks in the background.
To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
Next, start a worker server to process these tasks in the background. To start the background workers, use [`Server`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Server) and provide your [`Handler`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#Handler) to process the tasks.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`"net/http"`](https://golang.org/pkg/net/http/) Handler.
You can optionally use [`ServeMux`](https://pkg.go.dev/github.com/hibiken/asynq?tab=doc#ServeMux) to create a handler, just as you would with [`net/http`](https://golang.org/pkg/net/http/) Handler.
```go
package main
@@ -219,24 +236,25 @@ import (
const redisAddr = "127.0.0.1:6379"
func main() {
r := asynq.RedisClientOpt{Addr: redisAddr}
srv := asynq.NewServer(r, asynq.Config{
// Specify how many concurrent workers to use
Concurrency: 10,
// Optionally specify multiple queues with different priority.
Queues: map[string]int{
"critical": 6,
"default": 3,
"low": 1,
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: redisAddr},
asynq.Config{
// Specify how many concurrent workers to use
Concurrency: 10,
// Optionally specify multiple queues with different priority.
Queues: map[string]int{
"critical": 6,
"default": 3,
"low": 1,
},
// See the godoc for other configuration options
},
// See the godoc for other configuration options
})
)
// mux maps a type to a handler
mux := asynq.NewServeMux()
mux.HandleFunc(tasks.TypeEmailDelivery, tasks.HandleEmailDeliveryTask)
mux.Handle(tasks.TypeImageResize, tasks.NewImageProcessor())
// ...register other handlers...
if err := srv.Run(mux); err != nil {
@ -245,52 +263,55 @@ func main() {
}
```
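For reference, here is a minimal sketch of what one of the handlers registered above could look like. The payload shape (`user_id`, `template_id`) and the type-name constant are illustrative assumptions, not the canonical `tasks` package:

```go
package tasks

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

const TypeEmailDelivery = "email:deliver" // assumed type name

// HandleEmailDeliveryTask decodes the JSON payload and performs the work.
func HandleEmailDeliveryTask(ctx context.Context, t *asynq.Task) error {
	var p struct {
		UserID     int    `json:"user_id"`
		TemplateID string `json:"template_id"`
	}
	if err := json.Unmarshal(t.Payload(), &p); err != nil {
		// Wrapping asynq.SkipRetry tells the server not to retry a malformed task.
		return fmt.Errorf("json.Unmarshal failed: %v: %w", err, asynq.SkipRetry)
	}
	log.Printf("Sending email to user: user_id=%d, template_id=%s", p.UserID, p.TemplateID)
	return nil
}
```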
For a more detailed walk-through of the library, see our [Getting Started](https://github.com/hibiken/asynq/wiki/Getting-Started) guide.
To learn more about `asynq` features and APIs, see the package [godoc](https://godoc.org/github.com/hibiken/asynq).
## Web UI
[Asynqmon](https://github.com/hibiken/asynqmon) is a web based tool for monitoring and administrating Asynq queues and tasks.
Here are a few screenshots of the Web UI:
**Queues view**
![Web UI Queues View](https://user-images.githubusercontent.com/11155743/114697016-07327f00-9d26-11eb-808c-0ac841dc888e.png)
**Tasks view**
![Web UI TasksView](https://user-images.githubusercontent.com/11155743/114697070-1f0a0300-9d26-11eb-855c-d3ec263865b7.png)
**Metrics view**
<img width="1532" alt="Screen Shot 2021-12-19 at 4 37 19 PM" src="https://user-images.githubusercontent.com/10953044/146777420-cae6c476-bac6-469c-acce-b2f6584e8707.png">
**Settings and adaptive dark mode**
![Web UI Settings and adaptive dark mode](https://user-images.githubusercontent.com/11155743/114697149-3517c380-9d26-11eb-9f7a-ae2dd00aad5b.png)
For details on how to use the tool, refer to the tool's [README](https://github.com/hibiken/asynqmon#readme).
## Command Line Tool
Asynq ships with a command line tool to inspect the state of queues and tasks.
Here's an example of running the `asynq dash` command:
![Gif](/docs/assets/dash.gif)
For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).
## Installation
To install the `asynq` library, run the following command:
```sh
go get -u github.com/hibiken/asynq
```
To install the CLI tool, run the following command:
```sh
go install github.com/hibiken/asynq/tools/asynq@latest
```
## Requirements
| Dependency | Version |
| -------------------------- | ------- |
| [Redis](https://redis.io/) | v2.8+ |
| [Go](https://golang.org/) | v1.13+ |
## Contributing
We are open to, and grateful for, any contributions (GitHub issues/PRs, feedback on [Gitter channel](https://gitter.im/go-asynq/community), etc) made by the community.
Please see the [Contribution Guide](/CONTRIBUTING.md) before contributing.
## Acknowledgements
- [Sidekiq](https://github.com/mperham/sidekiq) : Many of the design ideas are taken from sidekiq and its Web UI
- [RQ](https://github.com/rq/rq) : Client APIs are inspired by rq library.
- [Cobra](https://github.com/spf13/cobra) : Asynq CLI is built with cobra
## License
Copyright (c) 2019-present [Ken Hibino](https://github.com/hibiken) and [Contributors](https://github.com/hibiken/asynq/graphs/contributors). `Asynq` is free and open-source software licensed under the [MIT License](https://github.com/hibiken/asynq/blob/master/LICENSE). Official logo was created by [Vic Shóstak](https://github.com/koddr) and distributed under [Creative Commons](https://creativecommons.org/publicdomain/zero/1.0/) license (CC0 1.0 Universal).

aggregator.go
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// An aggregator is responsible for checking groups and aggregating them into
// one task when any of the grouping conditions is met.
type aggregator struct {
logger *log.Logger
broker base.Broker
client *Client
// channel to communicate back to the long running "aggregator" goroutine.
done chan struct{}
// list of queue names to check and aggregate.
queues []string
// Group configurations
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
// User provided group aggregator.
ga GroupAggregator
// interval used to check for aggregation
interval time.Duration
// sema is a counting semaphore to ensure the number of active aggregating function
// does not exceed the limit.
sema chan struct{}
}
type aggregatorParams struct {
logger *log.Logger
broker base.Broker
queues []string
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
groupAggregator GroupAggregator
}
const (
// Maximum number of aggregation checks in flight concurrently.
maxConcurrentAggregationChecks = 3
// Default interval used for aggregation checks. If the provided gracePeriod is less than
// the default, use the gracePeriod.
defaultAggregationCheckInterval = 7 * time.Second
)
func newAggregator(params aggregatorParams) *aggregator {
interval := defaultAggregationCheckInterval
if params.gracePeriod < interval {
interval = params.gracePeriod
}
return &aggregator{
logger: params.logger,
broker: params.broker,
client: &Client{broker: params.broker},
done: make(chan struct{}),
queues: params.queues,
gracePeriod: params.gracePeriod,
maxDelay: params.maxDelay,
maxSize: params.maxSize,
ga: params.groupAggregator,
sema: make(chan struct{}, maxConcurrentAggregationChecks),
interval: interval,
}
}
func (a *aggregator) shutdown() {
if a.ga == nil {
return
}
a.logger.Debug("Aggregator shutting down...")
// Signal the aggregator goroutine to stop.
a.done <- struct{}{}
}
func (a *aggregator) start(wg *sync.WaitGroup) {
if a.ga == nil {
return
}
wg.Add(1)
go func() {
defer wg.Done()
ticker := time.NewTicker(a.interval)
for {
select {
case <-a.done:
a.logger.Debug("Waiting for all aggregation checks to finish...")
// block until all aggregation checks have released the token
for i := 0; i < cap(a.sema); i++ {
a.sema <- struct{}{}
}
a.logger.Debug("Aggregator done")
ticker.Stop()
return
case t := <-ticker.C:
a.exec(t)
}
}
}()
}
func (a *aggregator) exec(t time.Time) {
select {
case a.sema <- struct{}{}: // acquire token
go a.aggregate(t)
default:
// If the semaphore blocks, then we are currently running max number of
// aggregation checks. Skip this round and log warning.
a.logger.Warnf("Max number of aggregation checks in flight. Skipping")
}
}
func (a *aggregator) aggregate(t time.Time) {
defer func() { <-a.sema /* release token */ }()
for _, qname := range a.queues {
groups, err := a.broker.ListGroups(qname)
if err != nil {
a.logger.Errorf("Failed to list groups in queue: %q", qname)
continue
}
for _, gname := range groups {
aggregationSetID, err := a.broker.AggregationCheck(
qname, gname, t, a.gracePeriod, a.maxDelay, a.maxSize)
if err != nil {
a.logger.Errorf("Failed to run aggregation check: queue=%q group=%q", qname, gname)
continue
}
if aggregationSetID == "" {
a.logger.Debugf("No aggregation needed at this time: queue=%q group=%q", qname, gname)
continue
}
// Aggregate and enqueue.
msgs, deadline, err := a.broker.ReadAggregationSet(qname, gname, aggregationSetID)
if err != nil {
a.logger.Errorf("Failed to read aggregation set: queue=%q, group=%q, setID=%q",
qname, gname, aggregationSetID)
continue
}
tasks := make([]*Task, len(msgs))
for i, m := range msgs {
tasks[i] = NewTask(m.Type, m.Payload)
}
aggregatedTask := a.ga.Aggregate(gname, tasks)
ctx, cancel := context.WithDeadline(context.Background(), deadline)
if _, err := a.client.EnqueueContext(ctx, aggregatedTask, Queue(qname)); err != nil {
a.logger.Errorf("Failed to enqueue aggregated task (queue=%q, group=%q, setID=%q): %v",
qname, gname, aggregationSetID, err)
cancel()
continue
}
if err := a.broker.DeleteAggregationSet(ctx, qname, gname, aggregationSetID); err != nil {
a.logger.Warnf("Failed to delete aggregation set: queue=%q, group=%q, setID=%q",
qname, gname, aggregationSetID)
}
cancel()
}
}
}
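To put the aggregator above in context, here is a minimal sketch of enabling aggregation, assuming the server `Config` exposes the grouping knobs (`GroupGracePeriod`, `GroupMaxDelay`, `GroupMaxSize`, `GroupAggregator`) corresponding to the `aggregatorParams` fields, and that enqueued tasks carry the `Group` option:

```go
package main

import (
	"encoding/json"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		GroupGracePeriod: 2 * time.Second,  // wait for a quiet period before aggregating
		GroupMaxDelay:    10 * time.Second, // hard cap on how long a group may wait
		GroupMaxSize:     20,               // aggregate as soon as 20 tasks are grouped
		GroupAggregator: asynq.GroupAggregatorFunc(func(group string, tasks []*asynq.Task) *asynq.Task {
			// Combine the payloads of all grouped tasks into a single task
			// (assumes each payload is valid JSON).
			var payloads []json.RawMessage
			for _, t := range tasks {
				payloads = append(payloads, t.Payload())
			}
			combined, err := json.Marshal(payloads)
			if err != nil {
				log.Printf("could not marshal combined payload: %v", err)
			}
			return asynq.NewTask("aggregated:"+group, combined)
		}),
	})
	_ = srv // register a handler for "aggregated:<group>" tasks and call srv.Run(...)
}
```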

aggregator_test.go
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func TestAggregator(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
client := Client{broker: rdbClient}
tests := []struct {
desc string
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
aggregateFunc func(gname string, tasks []*Task) *Task
tasks []*Task // tasks to enqueue
enqueueFrequency time.Duration // time between one enqueue event and the next
waitTime time.Duration // time to wait
wantGroups map[string]map[string][]base.Z
wantPending map[string][]*base.TaskMessage
}{
{
desc: "group older than the grace period should be aggregated",
gracePeriod: 1 * time.Second,
maxDelay: 0, // no maxdelay limit
maxSize: 0, // no maxsize limit
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")),
NewTask("task2", nil, Group("mygroup")),
NewTask("task3", nil, Group("mygroup")),
},
enqueueFrequency: 300 * time.Millisecond,
waitTime: 3 * time.Second,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(3).Build(),
},
},
},
{
desc: "group older than the max-delay should be aggregated",
gracePeriod: 2 * time.Second,
maxDelay: 4 * time.Second,
maxSize: 0, // no maxsize limit
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")), // time 0
NewTask("task2", nil, Group("mygroup")), // time 1s
NewTask("task3", nil, Group("mygroup")), // time 2s
NewTask("task4", nil, Group("mygroup")), // time 3s
},
enqueueFrequency: 1 * time.Second,
waitTime: 4 * time.Second,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(4).Build(),
},
},
},
{
desc: "group reached the max-size should be aggregated",
gracePeriod: 1 * time.Minute,
maxDelay: 0, // no maxdelay limit
maxSize: 5,
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")),
NewTask("task2", nil, Group("mygroup")),
NewTask("task3", nil, Group("mygroup")),
NewTask("task4", nil, Group("mygroup")),
NewTask("task5", nil, Group("mygroup")),
},
enqueueFrequency: 300 * time.Millisecond,
waitTime: defaultAggregationCheckInterval * 2,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(5).Build(),
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
aggregator := newAggregator(aggregatorParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default"},
gracePeriod: tc.gracePeriod,
maxDelay: tc.maxDelay,
maxSize: tc.maxSize,
groupAggregator: GroupAggregatorFunc(tc.aggregateFunc),
})
var wg sync.WaitGroup
aggregator.start(&wg)
for _, task := range tc.tasks {
if _, err := client.Enqueue(task); err != nil {
t.Errorf("%s: Client Enqueue failed: %v", tc.desc, err)
aggregator.shutdown()
wg.Wait()
continue
}
time.Sleep(tc.enqueueFrequency)
}
time.Sleep(tc.waitTime)
for qname, groups := range tc.wantGroups {
for gname, want := range groups {
gotGroup := h.GetGroupEntries(t, r, qname, gname)
if diff := cmp.Diff(want, gotGroup, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q; (-want,+got)\n%s", tc.desc, base.GroupKey(qname, gname), diff)
}
}
}
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s: mismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
}
}
aggregator.shutdown()
wg.Wait()
}
}

asynq.go

@ -5,40 +5,238 @@
package asynq
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/url"
"strconv"
"strings"
"time"
"github.com/go-redis/redis/v7"
"github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base"
)
// Task represents a unit of work to be performed.
type Task struct {
// typename indicates the type of task to be performed.
typename string
// payload holds data needed to perform the task.
payload []byte
// opts holds options for the task.
opts []Option
// w is the ResultWriter for the task.
w *ResultWriter
}
func (t *Task) Type() string { return t.typename }
func (t *Task) Payload() []byte { return t.payload }
// ResultWriter returns a pointer to the ResultWriter associated with the task.
//
// Nil pointer is returned if called on a newly created task (i.e. task created by calling NewTask).
// Only the tasks passed to Handler.ProcessTask have a valid ResultWriter pointer.
func (t *Task) ResultWriter() *ResultWriter { return t.w }
// NewTask returns a new Task given a type name and payload data.
// Options can be passed to configure task processing behavior.
func NewTask(typename string, payload []byte, opts ...Option) *Task {
return &Task{
typename: typename,
payload: payload,
opts: opts,
}
}
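Since options can now be attached at creation time, a typical construction looks like the following sketch (the type name, payload, and option values are illustrative):

```go
payload, err := json.Marshal(map[string]interface{}{"src": "some/blobstore/path"})
if err != nil {
	log.Fatal(err)
}
// Options attached here can still be overridden at enqueue time.
task := asynq.NewTask("image:resize", payload, asynq.MaxRetry(5), asynq.Queue("images"))
```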
// newTask creates a task with the given typename, payload and ResultWriter.
func newTask(typename string, payload []byte, w *ResultWriter) *Task {
return &Task{
typename: typename,
payload: payload,
w: w,
}
}
// A TaskInfo describes a task and its metadata.
type TaskInfo struct {
// ID is the identifier of the task.
ID string
// Queue is the name of the queue in which the task belongs.
Queue string
// Type is the type name of the task.
Type string
// Payload is the payload data of the task.
Payload []byte
// State indicates the task state.
State TaskState
// MaxRetry is the maximum number of times the task can be retried.
MaxRetry int
// Retried is the number of times the task has retried so far.
Retried int
// LastErr is the error message from the last failure.
LastErr string
// LastFailedAt is the time of the last failure if any.
// If the task has no failures, LastFailedAt is zero time (i.e. time.Time{}).
LastFailedAt time.Time
// Timeout is the duration the task can be processed by Handler before being retried,
// zero if not specified.
Timeout time.Duration
// Deadline is the deadline for the task, zero value if not specified.
Deadline time.Time
// Group is the name of the group in which the task belongs.
//
// Tasks in the same queue can be grouped together by Group name and will be aggregated into one task
// by a Server processing the queue.
//
// Empty string (default) indicates task does not belong to any groups, and no aggregation will be applied to the task.
Group string
// NextProcessAt is the time the task is scheduled to be processed,
// zero if not applicable.
NextProcessAt time.Time
// IsOrphaned describes whether the task is left in active state with no worker processing it.
// An orphaned task indicates that the worker has crashed or experienced network failures and was not able to
// extend its lease on the task.
//
// This task will be recovered by running a server against the queue the task is in.
// This field is only applicable to tasks with TaskStateActive.
IsOrphaned bool
// Retention is the duration of the retention period after the task is successfully processed.
Retention time.Duration
// CompletedAt is the time when the task is processed successfully.
// Zero value (i.e. time.Time{}) indicates no value.
CompletedAt time.Time
// Result holds the result data associated with the task.
// Use ResultWriter to write result data from the Handler.
Result []byte
}
// If t is non-zero, returns time converted from t as unix time in seconds.
// If t is zero, returns zero value of time.Time.
func fromUnixTimeOrZero(t int64) time.Time {
if t == 0 {
return time.Time{}
}
return time.Unix(t, 0)
}
func newTaskInfo(msg *base.TaskMessage, state base.TaskState, nextProcessAt time.Time, result []byte) *TaskInfo {
info := TaskInfo{
ID: msg.ID,
Queue: msg.Queue,
Type: msg.Type,
Payload: msg.Payload, // Do we need to make a copy?
MaxRetry: msg.Retry,
Retried: msg.Retried,
LastErr: msg.ErrorMsg,
Group: msg.GroupKey,
Timeout: time.Duration(msg.Timeout) * time.Second,
Deadline: fromUnixTimeOrZero(msg.Deadline),
Retention: time.Duration(msg.Retention) * time.Second,
NextProcessAt: nextProcessAt,
LastFailedAt: fromUnixTimeOrZero(msg.LastFailedAt),
CompletedAt: fromUnixTimeOrZero(msg.CompletedAt),
Result: result,
}
switch state {
case base.TaskStateActive:
info.State = TaskStateActive
case base.TaskStatePending:
info.State = TaskStatePending
case base.TaskStateScheduled:
info.State = TaskStateScheduled
case base.TaskStateRetry:
info.State = TaskStateRetry
case base.TaskStateArchived:
info.State = TaskStateArchived
case base.TaskStateCompleted:
info.State = TaskStateCompleted
case base.TaskStateAggregating:
info.State = TaskStateAggregating
default:
panic(fmt.Sprintf("internal error: unknown state: %d", state))
}
return &info
}
// TaskState denotes the state of a task.
type TaskState int
const (
// Indicates that the task is currently being processed by Handler.
TaskStateActive TaskState = iota + 1
// Indicates that the task is ready to be processed by Handler.
TaskStatePending
// Indicates that the task is scheduled to be processed some time in the future.
TaskStateScheduled
// Indicates that the task has previously failed and scheduled to be processed some time in the future.
TaskStateRetry
// Indicates that the task is archived and stored for inspection purposes.
TaskStateArchived
// Indicates that the task is processed successfully and retained until the retention TTL expires.
TaskStateCompleted
// Indicates that the task is waiting in a group to be aggregated into one task.
TaskStateAggregating
)
func (s TaskState) String() string {
switch s {
case TaskStateActive:
return "active"
case TaskStatePending:
return "pending"
case TaskStateScheduled:
return "scheduled"
case TaskStateRetry:
return "retry"
case TaskStateArchived:
return "archived"
case TaskStateCompleted:
return "completed"
case TaskStateAggregating:
return "aggregating"
}
panic("asynq: unknown task state")
}
// RedisConnOpt is a discriminated union of types that represent Redis connection configuration option.
//
// RedisConnOpt represents a sum of following types:
//
// - RedisClientOpt
// - RedisFailoverClientOpt
// - RedisClusterClientOpt
type RedisConnOpt interface {
// MakeRedisClient returns a new redis client instance.
// Return value is intentionally opaque to hide the implementation detail of redis client.
MakeRedisClient() interface{}
}
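Every concrete option below implements `MakeRedisClient`, so any of them can be passed wherever a `RedisConnOpt` is accepted (e.g. `NewClient` or `NewServer`); a brief sketch with illustrative addresses:

```go
// Single node.
client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379", DB: 2})

// Sentinel-managed failover.
srv := asynq.NewServer(asynq.RedisFailoverClientOpt{
	MasterName:    "mymaster",
	SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
}, asynq.Config{Concurrency: 10})
```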
// RedisClientOpt is used to create a redis client that connects
// to a redis server directly.
@ -50,13 +248,38 @@ type RedisClientOpt struct {
// Redis server address in "host:port" format.
Addr string
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
@ -66,6 +289,21 @@ type RedisClientOpt struct {
TLSConfig *tls.Config
}
func (opt RedisClientOpt) MakeRedisClient() interface{} {
return redis.NewClient(&redis.Options{
Network: opt.Network,
Addr: opt.Addr,
Username: opt.Username,
Password: opt.Password,
DB: opt.DB,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
PoolSize: opt.PoolSize,
TLSConfig: opt.TLSConfig,
})
}
// RedisFailoverClientOpt is used to create a redis client that talks
// to redis sentinels for service discovery and has an automatic failover
// capability.
@ -78,16 +316,44 @@ type RedisFailoverClientOpt struct {
// https://redis.io/topics/sentinel.
SentinelAddrs []string
// Redis sentinel username.
SentinelUsername string
// Redis sentinel password.
SentinelPassword string
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Redis DB to select after connecting to a server.
// See: https://redis.io/commands/select.
DB int
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
// Maximum number of socket connections.
// Default is 10 connections per every CPU as reported by runtime.NumCPU.
PoolSize int
@ -97,12 +363,87 @@ type RedisFailoverClientOpt struct {
TLSConfig *tls.Config
}
func (opt RedisFailoverClientOpt) MakeRedisClient() interface{} {
return redis.NewFailoverClient(&redis.FailoverOptions{
MasterName: opt.MasterName,
SentinelAddrs: opt.SentinelAddrs,
SentinelUsername: opt.SentinelUsername,
SentinelPassword: opt.SentinelPassword,
Username: opt.Username,
Password: opt.Password,
DB: opt.DB,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
PoolSize: opt.PoolSize,
TLSConfig: opt.TLSConfig,
})
}
// RedisClusterClientOpt is used to create a redis client that connects to
// redis cluster.
type RedisClusterClientOpt struct {
// A seed list of host:port addresses of cluster nodes.
Addrs []string
// The maximum number of retries before giving up.
// Command is retried on network errors and MOVED/ASK redirects.
// Default is 8 retries.
MaxRedirects int
// Username to authenticate the current connection when Redis ACLs are used.
// See: https://redis.io/commands/auth.
Username string
// Password to authenticate the current connection.
// See: https://redis.io/commands/auth.
Password string
// Dial timeout for establishing new connections.
// Default is 5 seconds.
DialTimeout time.Duration
// Timeout for socket reads.
// If timeout is reached, read commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is 3 seconds.
ReadTimeout time.Duration
// Timeout for socket writes.
// If timeout is reached, write commands will fail with a timeout error
// instead of blocking.
//
// Use value -1 for no timeout and 0 for default.
// Default is ReadTimeout.
WriteTimeout time.Duration
// TLS Config used to connect to a server.
// TLS will be negotiated only if this field is set.
TLSConfig *tls.Config
}
func (opt RedisClusterClientOpt) MakeRedisClient() interface{} {
return redis.NewClusterClient(&redis.ClusterOptions{
Addrs: opt.Addrs,
MaxRedirects: opt.MaxRedirects,
Username: opt.Username,
Password: opt.Password,
DialTimeout: opt.DialTimeout,
ReadTimeout: opt.ReadTimeout,
WriteTimeout: opt.WriteTimeout,
TLSConfig: opt.TLSConfig,
})
}
// ParseRedisURI parses redis uri string and returns RedisConnOpt if uri is valid.
// It returns a non-nil error if uri cannot be parsed.
//
// Three URI schemes are supported, which are redis:, redis-socket:, and redis-sentinel:.
// Three URI schemes are supported, which are redis:, rediss:, redis-socket:, and redis-sentinel:.
// Supported formats are:
// redis://[:password@]host[:port][/dbnumber]
// rediss://[:password@]host[:port][/dbnumber]
// redis-socket://[:password@]path[?db=dbnumber]
// redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
@ -111,7 +452,7 @@ func ParseRedisURI(uri string) (RedisConnOpt, error) {
return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
}
switch u.Scheme {
case "redis":
case "redis", "rediss":
return parseRedisURI(u)
case "redis-socket":
return parseRedisSocketURI(u)
@ -125,6 +466,8 @@ func ParseRedisURI(uri string) (RedisConnOpt, error) {
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
var db int
var err error
var redisConnOpt RedisClientOpt
if len(u.Path) > 0 {
xs := strings.Split(strings.Trim(u.Path, "/"), "/")
db, err = strconv.Atoi(xs[0])
@ -136,7 +479,20 @@ func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
if v, ok := u.User.Password(); ok {
password = v
}
if u.Scheme == "rediss" {
h, _, err := net.SplitHostPort(u.Host)
if err != nil {
h = u.Host
}
redisConnOpt.TLSConfig = &tls.Config{ServerName: h}
}
redisConnOpt.Addr = u.Host
redisConnOpt.Password = password
redisConnOpt.DB = db
return redisConnOpt, nil
}
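A quick sketch of the URI parsing in use; note that a `rediss` scheme yields a `TLSConfig` whose `ServerName` is derived from the host (URI and password are illustrative):

```go
opt, err := asynq.ParseRedisURI("rediss://:mypassword@example.com:6379/2")
if err != nil {
	log.Fatal(err)
}
client := asynq.NewClient(opt) // TLS is negotiated because of the rediss scheme
```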
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
@ -167,53 +523,29 @@ func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
if v, ok := u.User.Password(); ok {
password = v
}
return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, SentinelPassword: password}, nil
}
// ResultWriter is a client interface to write result data for a task.
// It writes the data to the redis instance the server is connected to.
type ResultWriter struct {
id string // task ID this writer is responsible for
qname string // queue name the task belongs to
broker base.Broker
ctx context.Context // context associated with the task
}
// Write writes the given data as a result of the task the ResultWriter is associated with.
func (w *ResultWriter) Write(data []byte) (n int, err error) {
select {
case <-w.ctx.Done():
return 0, fmt.Errorf("failed to result task result: %v", w.ctx.Err())
default:
}
return w.broker.WriteResult(w.qname, w.id, data)
}
// TaskID returns the ID of the task the ResultWriter is associated with.
func (w *ResultWriter) TaskID() string {
return w.id
}
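A handler-side sketch of the ResultWriter: the `process` helper is hypothetical, but the `Write` call matches the API above:

```go
func handle(ctx context.Context, t *asynq.Task) error {
	res, err := process(t) // hypothetical function doing the actual work
	if err != nil {
		return err
	}
	// Persist the result; it stays readable while the completed task is retained.
	if _, err := t.ResultWriter().Write(res); err != nil {
		return fmt.Errorf("could not write task result: %v", err)
	}
	return nil
}
```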

asynq_test.go

@ -5,14 +5,17 @@
package asynq
import (
"crypto/tls"
"flag"
"sort"
"strings"
"testing"
"github.com/go-redis/redis/v7"
"github.com/redis/go-redis/v9"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/hibiken/asynq/internal/log"
h "github.com/hibiken/asynq/internal/testutil"
)
//============================================================================
@ -24,6 +27,9 @@ var (
redisAddr string
redisDB int
useRedisCluster bool
redisClusterAddrs string // comma-separated list of host:port
testLogLevel = FatalLevel
)
@ -32,27 +38,56 @@ var testLogger *log.Logger
func init() {
flag.StringVar(&redisAddr, "redis_addr", "localhost:6379", "redis address to use in testing")
flag.IntVar(&redisDB, "redis_db", 14, "redis db number to use in testing")
flag.BoolVar(&useRedisCluster, "redis_cluster", false, "use redis cluster as a broker in testing")
flag.StringVar(&redisClusterAddrs, "redis_cluster_addrs", "localhost:7000,localhost:7001,localhost:7002", "comma separated list of redis server addresses")
flag.Var(&testLogLevel, "loglevel", "log level to use in testing")
testLogger = log.NewLogger(nil)
testLogger.SetLevel(toInternalLogLevel(testLogLevel))
}
func setup(tb testing.TB) (r redis.UniversalClient) {
tb.Helper()
if useRedisCluster {
addrs := strings.Split(redisClusterAddrs, ",")
if len(addrs) == 0 {
tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
r = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
})
} else {
r = redis.NewClient(&redis.Options{
Addr: redisAddr,
DB: redisDB,
})
}
// Start each test with a clean slate.
h.FlushDB(tb, r)
return r
}
func getRedisConnOpt(tb testing.TB) RedisConnOpt {
tb.Helper()
if useRedisCluster {
addrs := strings.Split(redisClusterAddrs, ",")
if len(addrs) == 0 {
tb.Fatal("No redis cluster addresses provided. Please set addresses using --redis_cluster_addrs flag.")
}
return RedisClusterClientOpt{
Addrs: addrs,
}
}
return RedisClientOpt{
Addr: redisAddr,
DB: redisDB,
}
}
var sortTaskOpt = cmp.Transformer("SortMsg", func(in []*Task) []*Task {
out := append([]*Task(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Type() < out[j].Type()
})
return out
})
@ -66,6 +101,10 @@ func TestParseRedisURI(t *testing.T) {
"redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"},
},
{
"rediss://localhost:6379",
RedisClientOpt{Addr: "localhost:6379", TLSConfig: &tls.Config{ServerName: "localhost"}},
},
{
"redis://localhost:6379/3",
RedisClientOpt{Addr: "localhost:6379", DB: 3},
@ -104,9 +143,9 @@ func TestParseRedisURI(t *testing.T) {
{
"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
RedisFailoverClientOpt{
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
Password: "mypassword",
MasterName: "mymaster",
SentinelAddrs: []string{"localhost:5000", "localhost:5001", "localhost:5002"},
SentinelPassword: "mypassword",
},
},
}
@ -118,7 +157,7 @@ func TestParseRedisURI(t *testing.T) {
continue
}
if diff := cmp.Diff(tc.want, got, cmpopts.IgnoreUnexported(tls.Config{})); diff != "" {
t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
}
}

benchmark_test.go

@ -6,22 +6,31 @@ package asynq
import (
"context"
"encoding/json"
"fmt"
"sync"
"testing"
"time"
h "github.com/hibiken/asynq/internal/testutil"
)
// Creates a new task of type "task<n>" with payload {"data": n}.
func makeTask(n int) *Task {
b, err := json.Marshal(map[string]int{"data": n})
if err != nil {
panic(err)
}
return NewTask(fmt.Sprintf("task%d", n), b)
}
// Simple E2E Benchmark testing with no scheduled tasks and retries.
func BenchmarkEndToEndSimple(b *testing.B) {
const count = 100000
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := getRedisConnOpt(b)
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
@ -32,11 +41,11 @@ func BenchmarkEndToEndSimple(b *testing.B) {
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(makeTask(i)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
client.Close()
var wg sync.WaitGroup
wg.Add(count)
@ -46,7 +55,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
}
b.StartTimer() // end setup
_ = srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
@ -61,10 +70,7 @@ func BenchmarkEndToEnd(b *testing.B) {
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := getRedisConnOpt(b)
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
@ -75,28 +81,32 @@ func BenchmarkEndToEnd(b *testing.B) {
})
// Create a bunch of tasks
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(makeTask(i)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
client.Close()
var wg sync.WaitGroup
wg.Add(count * 2)
handler := func(ctx context.Context, t *Task) error {
var p map[string]int
if err := json.Unmarshal(t.Payload(), &p); err != nil {
b.Logf("internal error: %v", err)
}
n, ok := p["data"]
if !ok {
n = 1
b.Logf("internal error: could not get data from payload")
}
retried, ok := GetRetryCount(ctx)
if !ok {
b.Logf("internal error: %v", err)
b.Logf("internal error: could not get retry count from context")
}
// Fail 1% of tasks for the first attempt.
if retried == 0 && n%100 == 0 {
@ -107,7 +117,7 @@ func BenchmarkEndToEnd(b *testing.B) {
}
b.StartTimer() // end setup
_ = srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
@ -127,10 +137,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := getRedisConnOpt(b)
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
@ -143,23 +150,21 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
})
// Create a bunch of tasks
for i := 0; i < highCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("high")); err != nil {
if _, err := client.Enqueue(makeTask(i), Queue("high")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < defaultCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(makeTask(i)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
for i := 0; i < lowCount; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t, Queue("low")); err != nil {
if _, err := client.Enqueue(makeTask(i), Queue("low")); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
client.Close()
var wg sync.WaitGroup
wg.Add(highCount + defaultCount + lowCount)
@ -169,7 +174,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
}
b.StartTimer() // end setup
_ = srv.Start(HandlerFunc(handler))
wg.Wait()
b.StopTimer() // begin teardown
@ -185,10 +190,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
for n := 0; n < b.N; n++ {
b.StopTimer() // begin setup
setup(b)
redis := getRedisConnOpt(b)
client := NewClient(redis)
srv := NewServer(redis, Config{
Concurrency: 10,
@ -199,15 +201,13 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
})
// Enqueue 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("task%d", i), map[string]interface{}{"data": i})
if err := client.Enqueue(t); err != nil {
if _, err := client.Enqueue(makeTask(i)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
// Schedule 10,000 tasks.
for i := 0; i < count; i++ {
t := NewTask(fmt.Sprintf("scheduled%d", i), map[string]interface{}{"data": i})
if err := client.EnqueueAt(time.Now().Add(time.Second), t); err != nil {
if _, err := client.Enqueue(makeTask(i), ProcessIn(1*time.Second)); err != nil {
b.Fatalf("could not enqueue a task: %v", err)
}
}
@ -215,15 +215,15 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
handler := func(ctx context.Context, t *Task) error {
return nil
}
_ = srv.Start(HandlerFunc(handler))
b.StartTimer() // end setup
b.Log("Starting enqueueing")
enqueued := 0
for enqueued < 100000 {
t := NewTask(fmt.Sprintf("enqueued%d", enqueued), map[string]interface{}{"data": enqueued})
if err := client.Enqueue(t); err != nil {
t := NewTask(fmt.Sprintf("enqueued%d", enqueued), h.JSON(map[string]interface{}{"data": enqueued}))
if _, err := client.Enqueue(t); err != nil {
b.Logf("could not enqueue task %d: %v", enqueued, err)
continue
}
@ -233,6 +233,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
b.StopTimer() // begin teardown
srv.Stop()
client.Close()
b.StartTimer() // end teardown
}
}

client.go

@ -5,16 +5,16 @@
package asynq
import (
"errors"
"context"
"fmt"
"sort"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/rdb"
"github.com/rs/xid"
"github.com/redis/go-redis/v9"
)
// A Client is responsible for scheduling tasks.
@ -24,30 +24,68 @@ import (
//
// Clients are safe for concurrent use by multiple goroutines.
type Client struct {
broker base.Broker
// When a Client has been created with an existing Redis connection, we do
// not want to close it.
sharedConnection bool
}
// NewClient returns a new Client instance given a redis connection option.
func NewClient(r RedisConnOpt) *Client {
redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
}
client := NewClientFromRedisClient(redisClient)
client.sharedConnection = false
return client
}
// NewClientFromRedisClient returns a new instance of Client given a redis.UniversalClient
// Warning: The underlying redis connection pool will not be closed by Asynq, you are responsible for closing it.
func NewClientFromRedisClient(c redis.UniversalClient) *Client {
return &Client{broker: rdb.NewRDB(c), sharedConnection: true}
}
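A sketch of the shared-connection constructor; the caller keeps ownership of the go-redis client, and the address is illustrative:

```go
rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
client := asynq.NewClientFromRedisClient(rdb)
// ... enqueue tasks ...
// Close the redis client yourself; Client.Close returns an error for shared connections.
defer rdb.Close()
```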
type OptionType int
const (
MaxRetryOpt OptionType = iota
QueueOpt
TimeoutOpt
DeadlineOpt
UniqueOpt
ProcessAtOpt
ProcessInOpt
TaskIDOpt
RetentionOpt
GroupOpt
)
// Option specifies the task processing behavior.
type Option interface {
// String returns a string representation of the option.
String() string
// Type describes the type of the option.
Type() OptionType
// Value returns a value used to create this option.
Value() interface{}
}
// Internal option representations.
type (
retryOption int
queueOption string
taskIDOption string
timeoutOption time.Duration
deadlineOption time.Time
uniqueOption time.Duration
processAtOption time.Time
processInOption time.Duration
retentionOption time.Duration
groupOption string
)
// MaxRetry returns an option to specify the max number of times
@ -61,200 +99,349 @@ func MaxRetry(n int) Option {
return retryOption(n)
}
func (n retryOption) String() string { return fmt.Sprintf("MaxRetry(%d)", int(n)) }
func (n retryOption) Type() OptionType { return MaxRetryOpt }
func (n retryOption) Value() interface{} { return int(n) }
// Queue returns an option to specify the queue to enqueue the task into.
func Queue(name string) Option {
return queueOption(name)
}
func (name queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(name)) }
func (name queueOption) Type() OptionType { return QueueOpt }
func (name queueOption) Value() interface{} { return string(name) }
// TaskID returns an option to specify the task ID.
func TaskID(id string) Option {
return taskIDOption(id)
}
func (id taskIDOption) String() string { return fmt.Sprintf("TaskID(%q)", string(id)) }
func (id taskIDOption) Type() OptionType { return TaskIDOpt }
func (id taskIDOption) Value() interface{} { return string(id) }
// Timeout returns an option to specify how long a task may run.
// If the timeout elapses before the Handler returns, then the task
// will be retried.
//
// Zero duration means no limit.
//
// If there's a conflicting Deadline option, whichever comes earliest
// will be used.
func Timeout(d time.Duration) Option {
return timeoutOption(d)
}
func (d timeoutOption) String() string { return fmt.Sprintf("Timeout(%v)", time.Duration(d)) }
func (d timeoutOption) Type() OptionType { return TimeoutOpt }
func (d timeoutOption) Value() interface{} { return time.Duration(d) }
// Deadline returns an option to specify the deadline for the given task.
// If it reaches the deadline before the Handler returns, then the task
// will be retried.
//
// If there's a conflicting Timeout option, whichever comes earliest
// will be used.
func Deadline(t time.Time) Option {
return deadlineOption(t)
}
func (t deadlineOption) String() string {
return fmt.Sprintf("Deadline(%v)", time.Time(t).Format(time.UnixDate))
}
func (t deadlineOption) Type() OptionType { return DeadlineOpt }
func (t deadlineOption) Value() interface{} { return time.Time(t) }
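A sketch of the two bounds combined; whichever comes earliest wins, so here the absolute deadline fires before the relative timeout (durations are illustrative):

```go
info, err := client.Enqueue(task,
	asynq.Timeout(2*time.Minute),                   // relative bound
	asynq.Deadline(time.Now().Add(30*time.Second)), // absolute bound; effective here
)
```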
// Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired,
// another task with the same uniqueness may be enqueued.
// ErrDuplicateTask error is returned when enqueueing a duplicate task.
// TTL duration must be greater than or equal to 1 second.
//
// Uniqueness of a task is based on the following properties:
// - Task Type
// - Task Payload
// - Queue Name
func Unique(ttl time.Duration) Option {
return uniqueOption(ttl)
}
func (ttl uniqueOption) String() string { return fmt.Sprintf("Unique(%v)", time.Duration(ttl)) }
func (ttl uniqueOption) Type() OptionType { return UniqueOpt }
func (ttl uniqueOption) Value() interface{} { return time.Duration(ttl) }
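A sketch of deduplication with Unique, matching the ErrDuplicateTask contract documented below:

```go
_, err := client.Enqueue(task, asynq.Unique(24*time.Hour))
switch {
case errors.Is(err, asynq.ErrDuplicateTask):
	log.Println("identical task already enqueued within the TTL; skipping")
case err != nil:
	log.Fatalf("could not enqueue task: %v", err)
}
```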
// ProcessAt returns an option to specify when to process the given task.
//
// If there's a conflicting ProcessIn option, the last option passed to Enqueue overrides the others.
func ProcessAt(t time.Time) Option {
return processAtOption(t)
}
func (t processAtOption) String() string {
return fmt.Sprintf("ProcessAt(%v)", time.Time(t).Format(time.UnixDate))
}
func (t processAtOption) Type() OptionType { return ProcessAtOpt }
func (t processAtOption) Value() interface{} { return time.Time(t) }
// ProcessIn returns an option to specify when to process the given task relative to the current time.
//
// If there's a conflicting ProcessAt option, the last option passed to Enqueue overrides the others.
func ProcessIn(d time.Duration) Option {
return processInOption(d)
}
func (d processInOption) String() string { return fmt.Sprintf("ProcessIn(%v)", time.Duration(d)) }
func (d processInOption) Type() OptionType { return ProcessInOpt }
func (d processInOption) Value() interface{} { return time.Duration(d) }
// Retention returns an option to specify the duration of retention period for the task.
// If this option is provided, the task will be stored as a completed task after successful processing.
// A completed task will be deleted after the specified duration elapses.
func Retention(d time.Duration) Option {
return retentionOption(d)
}
func (ttl retentionOption) String() string { return fmt.Sprintf("Retention(%v)", time.Duration(ttl)) }
func (ttl retentionOption) Type() OptionType { return RetentionOpt }
func (ttl retentionOption) Value() interface{} { return time.Duration(ttl) }
// Group returns an option to specify the group used for the task.
// Tasks in a given queue with the same group will be aggregated into one task before passed to Handler.
func Group(name string) Option {
return groupOption(name)
}
func (name groupOption) String() string { return fmt.Sprintf("Group(%q)", string(name)) }
func (name groupOption) Type() OptionType { return GroupOpt }
func (name groupOption) Value() interface{} { return string(name) }
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
var ErrDuplicateTask = errors.New("task already exists")
// ErrTaskIDConflict indicates that the given task could not be enqueued since its task ID already exists.
//
// ErrTaskIDConflict error only applies to tasks enqueued with a TaskID option.
var ErrTaskIDConflict = errors.New("task ID conflicts with another task")
type option struct {
retry int
queue string
taskID string
timeout time.Duration
deadline time.Time
uniqueTTL time.Duration
processAt time.Time
retention time.Duration
group string
}
// composeOptions merges user provided options into the default options
// and returns the composed option.
// It also validates the user provided options and returns an error if any of
// the user provided options fail the validations.
func composeOptions(opts ...Option) (option, error) {
res := option{
retry: defaultMaxRetry,
queue: base.DefaultQueueName,
taskID: uuid.NewString(),
timeout: 0, // do not set to defaultTimeout here
deadline: time.Time{},
processAt: time.Now(),
}
for _, opt := range opts {
switch opt := opt.(type) {
case retryOption:
res.retry = int(opt)
case queueOption:
qname := string(opt)
if err := base.ValidateQueueName(qname); err != nil {
return option{}, err
}
res.queue = qname
case taskIDOption:
id := string(opt)
if isBlank(id) {
return option{}, errors.New("task ID cannot be empty")
}
res.taskID = id
case timeoutOption:
res.timeout = time.Duration(opt)
case deadlineOption:
res.deadline = time.Time(opt)
case uniqueOption:
ttl := time.Duration(opt)
if ttl < 1*time.Second {
return option{}, errors.New("Unique TTL cannot be less than 1s")
}
res.uniqueTTL = ttl
case processAtOption:
res.processAt = time.Time(opt)
case processInOption:
res.processAt = time.Now().Add(time.Duration(opt))
case retentionOption:
res.retention = time.Duration(opt)
case groupOption:
key := string(opt)
if isBlank(key) {
return option{}, errors.New("group key cannot be empty")
}
res.group = key
default:
// ignore unexpected option
}
}
return res, nil
}
// isBlank returns true if the given s is empty or consists of all whitespaces.
func isBlank(s string) bool {
return strings.TrimSpace(s) == ""
}
// Default max retry count used if nothing is specified.
const defaultMaxRetry = 25
// Default timeout used if both timeout and deadline are not specified.
const defaultTimeout = 30 * time.Minute
// Value zero indicates no timeout and no deadline.
var (
noTimeout time.Duration = 0
noDeadline time.Time = time.Unix(0, 0)
)
// Close closes the connection with redis.
func (c *Client) Close() error {
if c.sharedConnection {
return fmt.Errorf("redis connection is shared so the Client can't be closed through asynq")
}
return c.broker.Close()
}
// Enqueue enqueues the given task to a queue.
//
// Enqueue returns TaskInfo and nil error if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// Any options provided to NewTask can be overridden by options passed to Enqueue.
// By default, max retry is set to 25 and timeout is set to 30 minutes.
//
// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
//
// Enqueue uses context.Background internally; to specify the context, use EnqueueContext.
func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
return c.EnqueueContext(context.Background(), task, opts...)
}
// EnqueueContext enqueues the given task to a queue.
//
// EnqueueContext returns TaskInfo and nil error if the task is enqueued successfully, otherwise returns a non-nil error.
//
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// Any options provided to NewTask can be overridden by options passed to Enqueue.
// By default, max retry is set to 25 and timeout is set to 30 minutes.
//
// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
//
// The first argument context applies to the enqueue operation. To specify task timeout and deadline, use Timeout and Deadline option instead.
func (c *Client) EnqueueContext(ctx context.Context, task *Task, opts ...Option) (*TaskInfo, error) {
if task == nil {
return nil, fmt.Errorf("task cannot be nil")
}
if strings.TrimSpace(task.Type()) == "" {
return nil, fmt.Errorf("task typename cannot be empty")
}
// merge task options with the options provided at enqueue time.
opts = append(task.opts, opts...)
opt, err := composeOptions(opts...)
if err != nil {
return nil, err
}
deadline := noDeadline
if !opt.deadline.IsZero() {
deadline = opt.deadline
}
timeout := noTimeout
if opt.timeout != 0 {
timeout = opt.timeout
}
if deadline.Equal(noDeadline) && timeout == noTimeout {
// If neither deadline nor timeout are set, use default timeout.
timeout = defaultTimeout
}
var uniqueKey string
if opt.uniqueTTL > 0 {
uniqueKey = base.UniqueKey(opt.queue, task.Type(), task.Payload())
}
msg := &base.TaskMessage{
ID: opt.taskID,
Type: task.Type(),
Payload: task.Payload(),
Queue: opt.queue,
Retry: opt.retry,
Deadline: deadline.Unix(),
Timeout: int64(timeout.Seconds()),
UniqueKey: uniqueKey,
GroupKey: opt.group,
Retention: int64(opt.retention.Seconds()),
}
now := time.Now()
var state base.TaskState
if opt.processAt.After(now) {
err = c.schedule(ctx, msg, opt.processAt, opt.uniqueTTL)
state = base.TaskStateScheduled
} else if opt.group != "" {
// Use zero value for processAt since we don't know when the task will be aggregated and processed.
opt.processAt = time.Time{}
err = c.addToGroup(ctx, msg, opt.group, opt.uniqueTTL)
state = base.TaskStateAggregating
} else {
opt.processAt = now
err = c.enqueue(ctx, msg, opt.uniqueTTL)
state = base.TaskStatePending
}
switch {
case errors.Is(err, errors.ErrDuplicateTask):
return nil, fmt.Errorf("%w", ErrDuplicateTask)
case errors.Is(err, errors.ErrTaskIdConflict):
return nil, fmt.Errorf("%w", ErrTaskIDConflict)
case err != nil:
return nil, err
}
return newTaskInfo(msg, state, opt.processAt, nil), nil
}
// Ping performs a ping against the redis connection.
func (c *Client) Ping() error {
return c.broker.Ping()
}
func (c *Client) enqueue(ctx context.Context, msg *base.TaskMessage, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
return c.broker.EnqueueUnique(ctx, msg, uniqueTTL)
}
return c.broker.Enqueue(ctx, msg)
}
func (c *Client) schedule(ctx context.Context, msg *base.TaskMessage, t time.Time, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
ttl := time.Until(t.Add(uniqueTTL))
return c.broker.ScheduleUnique(ctx, msg, t, ttl)
}
return c.broker.Schedule(ctx, msg, t)
}
func (c *Client) addToGroup(ctx context.Context, msg *base.TaskMessage, group string, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
return c.broker.AddToGroupUnique(ctx, msg, group, uniqueTTL)
}
return c.broker.AddToGroup(ctx, msg, group)
}
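Taken together, the reworked Client API above can be exercised end to end. A minimal sketch against a local Redis (the address, task type, and option values are illustrative, not prescriptive):

    package main

    import (
        "errors"
        "log"
        "time"

        "github.com/hibiken/asynq"
    )

    func main() {
        client := asynq.NewClient(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
        defer client.Close()

        task := asynq.NewTask("email:welcome", []byte(`{"user_id": 42}`))

        // Options passed to Enqueue override options attached via NewTask;
        // among conflicting options, the last one wins.
        info, err := client.Enqueue(task,
            asynq.Queue("critical"),        // route to the "critical" queue
            asynq.MaxRetry(10),             // allow up to 10 retries
            asynq.ProcessIn(5*time.Minute), // starts out scheduled, not pending
            asynq.Unique(time.Hour),        // suppress duplicates for one hour
        )
        switch {
        case errors.Is(err, asynq.ErrDuplicateTask):
            log.Println("task already enqueued; skipping")
        case err != nil:
            log.Fatalf("could not enqueue task: %v", err)
        default:
            log.Printf("enqueued task id=%s queue=%s state=%v", info.ID, info.Queue, info.State)
        }
    }

Because Unique is set, a second identical enqueue within the hour surfaces ErrDuplicateTask instead of producing a second pending task.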

File diff suppressed because it is too large

@ -6,58 +6,16 @@ package asynq
import (
"context"
"time"
"github.com/hibiken/asynq/internal/base"
asynqcontext "github.com/hibiken/asynq/internal/context"
)
// A taskMetadata holds task scoped data to put in context.
type taskMetadata struct {
id string
maxRetry int
retryCount int
}
// ctxKey type is unexported to prevent collisions with context keys defined in
// other packages.
type ctxKey int
// metadataCtxKey is the context key for the task metadata.
// Its value of zero is arbitrary.
const metadataCtxKey ctxKey = 0
// createContext returns a context and cancel function for a given task message.
func createContext(msg *base.TaskMessage) (ctx context.Context, cancel context.CancelFunc) {
metadata := taskMetadata{
id: msg.ID.String(),
maxRetry: msg.Retry,
retryCount: msg.Retried,
}
ctx = context.WithValue(context.Background(), metadataCtxKey, metadata)
timeout, err := time.ParseDuration(msg.Timeout)
if err == nil && timeout != 0 {
ctx, cancel = context.WithTimeout(ctx, timeout)
}
deadline, err := time.Parse(time.RFC3339, msg.Deadline)
if err == nil && !deadline.IsZero() {
ctx, cancel = context.WithDeadline(ctx, deadline)
}
if cancel == nil {
ctx, cancel = context.WithCancel(ctx)
}
return ctx, cancel
}
// GetTaskID extracts a task ID from a context, if any.
//
// ID of a task is guaranteed to be unique.
// ID of a task doesn't change if the task is being retried.
func GetTaskID(ctx context.Context) (id string, ok bool) {
return asynqcontext.GetTaskID(ctx)
}
// GetRetryCount extracts retry count from a context, if any.
@ -65,21 +23,20 @@ func GetTaskID(ctx context.Context) (id string, ok bool) {
// Return value n indicates the number of times associated task has been
// retried so far.
func GetRetryCount(ctx context.Context) (n int, ok bool) {
return asynqcontext.GetRetryCount(ctx)
}
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
return asynqcontext.GetMaxRetry(ctx)
}
// GetQueueName extracts queue name from a context, if any.
//
// Return value queue indicates which queue the task was pulled from.
func GetQueueName(ctx context.Context) (queue string, ok bool) {
return asynqcontext.GetQueueName(ctx)
}
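A minimal handler showing how the accessors above are used together inside ProcessTask (the log messages and wiring are illustrative):

    package main

    import (
        "context"
        "log"

        "github.com/hibiken/asynq"
    )

    // handler reads the task-scoped metadata that asynq places on the context.
    func handler(ctx context.Context, task *asynq.Task) error {
        if id, ok := asynq.GetTaskID(ctx); ok {
            log.Printf("processing task %s", id)
        }
        if n, ok := asynq.GetRetryCount(ctx); ok {
            log.Printf("retried %d times so far", n)
        }
        if max, ok := asynq.GetMaxRetry(ctx); ok {
            log.Printf("at most %d retries allowed", max)
        }
        if qname, ok := asynq.GetQueueName(ctx); ok {
            log.Printf("pulled from queue %q", qname)
        }
        return nil
    }

    func main() {
        _ = asynq.HandlerFunc(handler) // pass to (*asynq.Server).Run in real use
    }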


@ -1,157 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
)
func TestCreateContextWithTimeRestrictions(t *testing.T) {
var (
noTimeout = time.Duration(0)
noDeadline = time.Time{}
)
tests := []struct {
desc string
timeout time.Duration
deadline time.Time
wantDeadline time.Time
}{
{"only with timeout", 10 * time.Second, noDeadline, time.Now().Add(10 * time.Second)},
{"only with deadline", noTimeout, time.Now().Add(time.Hour), time.Now().Add(time.Hour)},
{"with timeout and deadline (timeout < deadline)", 10 * time.Second, time.Now().Add(time.Hour), time.Now().Add(10 * time.Second)},
{"with timeout and deadline (timeout > deadline)", 10 * time.Minute, time.Now().Add(30 * time.Second), time.Now().Add(30 * time.Second)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: xid.New(),
Timeout: tc.timeout.String(),
Deadline: tc.deadline.Format(time.RFC3339),
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("%s: <-ctx.Done() == %v, want nothing (it should block)", tc.desc, x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("%s: ctx.Deadline() returned false, want deadline to be set", tc.desc)
}
if !cmp.Equal(tc.wantDeadline, got, cmpopts.EquateApproxTime(time.Second)) {
t.Errorf("%s: ctx.Deadline() returned %v, want %v", tc.desc, got, tc.wantDeadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithoutTimeRestrictions(t *testing.T) {
msg := &base.TaskMessage{
Type: "something",
ID: xid.New(),
Timeout: time.Duration(0).String(), // zero value to indicate no timeout
Deadline: time.Time{}.Format(time.RFC3339), // zero value to indicate no deadline
}
ctx, cancel := createContext(msg)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
_, ok := ctx.Deadline()
if ok {
t.Error("ctx.Deadline() returned true, want deadline to not be set")
}
cancel()
select {
case <-ctx.Done():
default:
t.Error("ctx.Done() blocked, want it to be non-blocking")
}
}
func TestGetTaskMetadataFromContext(t *testing.T) {
tests := []struct {
desc string
msg *base.TaskMessage
}{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: xid.New(), Retry: 25, Retried: 0}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: xid.New(), Retry: 10, Retried: 5}},
}
for _, tc := range tests {
ctx, _ := createContext(tc.msg)
id, ok := GetTaskID(ctx)
if !ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == false", tc.desc)
}
if ok && id != tc.msg.ID.String() {
t.Errorf("%s: GetTaskID(ctx) returned id == %q, want %q", tc.desc, id, tc.msg.ID.String())
}
retried, ok := GetRetryCount(ctx)
if !ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == false", tc.desc)
}
if ok && retried != tc.msg.Retried {
t.Errorf("%s: GetRetryCount(ctx) returned n == %d want %d", tc.desc, retried, tc.msg.Retried)
}
maxRetry, ok := GetMaxRetry(ctx)
if !ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == false", tc.desc)
}
if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
}
}
}
func TestGetTaskMetadataFromContextError(t *testing.T) {
tests := []struct {
desc string
ctx context.Context
}{
{"with background context", context.Background()},
}
for _, tc := range tests {
if _, ok := GetTaskID(tc.ctx); ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == true", tc.desc)
}
if _, ok := GetRetryCount(tc.ctx); ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == true", tc.desc)
}
if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
}
}
}

doc.go

@ -3,40 +3,46 @@
// that can be found in the LICENSE file.
/*
Package asynq provides a framework for a Redis-based distributed task queue.
Asynq uses Redis as a message broker. To connect to redis,
specify the connection using one of RedisConnOpt types.
redisConnOpt = asynq.RedisClientOpt{
Addr: "127.0.0.1:6379",
Password: "xxxxx",
DB: 2,
}
The Client is used to enqueue a task.
client := asynq.NewClient(redisConnOpt)
// Task is created with two parameters: its type and payload.
// Payload data is simply an array of bytes. It can be encoded in JSON, Protocol Buffer, Gob, etc.
b, err := json.Marshal(ExamplePayload{UserID: 42})
if err != nil {
log.Fatal(err)
}
task := asynq.NewTask("example", b)
// Enqueue the task to be processed immediately.
info, err := client.Enqueue(task)
// Schedule the task to be processed after one minute.
info, err = client.Enqueue(task, asynq.ProcessIn(1*time.Minute))
The Server is used to run the task processing workers with a given
handler.
srv := asynq.NewServer(redisConnOpt, asynq.Config{
Concurrency: 10,
})
if err := srv.Run(handler); err != nil {
log.Fatal(err)
}
Handler is an interface type with a method which
takes a task and returns an error. Handler should return nil if
@ -50,10 +56,13 @@ Example of a type that implements the Handler interface.
func (h *TaskHandler) ProcessTask(ctx context.Context, task *asynq.Task) error {
switch task.Type {
case "send_email":
id, err := task.Payload.GetInt("user_id")
// send email
//...
case "example":
var data ExamplePayload
if err := json.Unmarshal(task.Payload(), &data); err != nil {
return err
}
// perform task with the data
default:
return fmt.Errorf("unexpected task type %q", task.Type)
}

[Binary assets changed: two images added (279 KiB and 347 KiB); docs/assets/cluster.png added (60 KiB); docs/assets/dash.gif added (809 KiB); one image replaced (983 KiB before, 329 KiB after).]

@ -5,10 +5,12 @@
package asynq_test
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"time"
"github.com/hibiken/asynq"
"golang.org/x/sys/unix"
@ -29,7 +31,7 @@ func ExampleServer_Run() {
}
}
func ExampleServer_Shutdown() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
@ -46,10 +48,10 @@ func ExampleServer_Stop() {
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs // wait for termination signal
srv.Shutdown()
}
func ExampleServer_Stop() {
srv := asynq.NewServer(
asynq.RedisClientOpt{Addr: ":6379"},
asynq.Config{Concurrency: 20},
@ -69,13 +71,32 @@ func ExampleServer_Quiet() {
for {
s := <-sigs
if s == unix.SIGTSTP {
srv.Stop() // stop processing new tasks
continue
}
break // received SIGTERM or SIGINT signal
}
srv.Shutdown()
}
func ExampleScheduler() {
scheduler := asynq.NewScheduler(
asynq.RedisClientOpt{Addr: ":6379"},
&asynq.SchedulerOpts{Location: time.Local},
)
if _, err := scheduler.Register("* * * * *", asynq.NewTask("task1", nil)); err != nil {
log.Fatal(err)
}
if _, err := scheduler.Register("@every 30s", asynq.NewTask("task2", nil)); err != nil {
log.Fatal(err)
}
// Run blocks and waits for os signal to terminate the program.
if err := scheduler.Run(); err != nil {
log.Fatal(err)
}
}
func ExampleParseRedisURI() {
@ -93,3 +114,20 @@ func ExampleParseRedisURI() {
// localhost:6379
// 10
}
func ExampleResultWriter() {
// ResultWriter is only accessible in Handler.
h := func(ctx context.Context, task *asynq.Task) error {
// .. do task processing work
res := []byte("task result data")
n, err := task.ResultWriter().Write(res) // implements io.Writer
if err != nil {
return fmt.Errorf("failed to write task result: %v", err)
}
log.Printf(" %d bytes written", n)
return nil
}
_ = h
}

forwarder.go

@ -0,0 +1,77 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// A forwarder is responsible for moving scheduled and retry tasks to pending state
// so that the tasks get processed by the workers.
type forwarder struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "forwarder" goroutine.
done chan struct{}
// list of queue names to check and enqueue.
queues []string
// poll interval on average
avgInterval time.Duration
}
type forwarderParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
}
func newForwarder(params forwarderParams) *forwarder {
return &forwarder{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
avgInterval: params.interval,
}
}
func (f *forwarder) shutdown() {
f.logger.Debug("Forwarder shutting down...")
// Signal the forwarder goroutine to stop polling.
f.done <- struct{}{}
}
// start starts the "forwarder" goroutine.
func (f *forwarder) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(f.avgInterval)
for {
select {
case <-f.done:
f.logger.Debug("Forwarder done")
return
case <-timer.C:
f.exec()
timer.Reset(f.avgInterval)
}
}
}()
}
func (f *forwarder) exec() {
if err := f.broker.ForwardIfReady(f.queues...); err != nil {
f.logger.Errorf("Failed to forward scheduled tasks: %v", err)
}
}

forwarder_test.go

@ -0,0 +1,137 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func TestForwarder(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newForwarder(forwarderParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "critical"},
interval: pollInterval,
})
t1 := h.NewTaskMessageWithQueue("gen_thumbnail", nil, "default")
t2 := h.NewTaskMessageWithQueue("send_email", nil, "critical")
t3 := h.NewTaskMessageWithQueue("reindex", nil, "default")
t4 := h.NewTaskMessageWithQueue("sync", nil, "critical")
now := time.Now()
tests := []struct {
initScheduled map[string][]base.Z // scheduled queue initial state
initRetry map[string][]base.Z // retry queue initial state
initPending map[string][]*base.TaskMessage // default queue initial state
wait time.Duration // wait duration before checking for final state
wantScheduled map[string][]*base.TaskMessage // scheduled queue final state
wantRetry map[string][]*base.TaskMessage // retry queue final state
wantPending map[string][]*base.TaskMessage // default queue final state
}{
{
initScheduled: map[string][]base.Z{
"default": {{Message: t1, Score: now.Add(time.Hour).Unix()}},
"critical": {{Message: t2, Score: now.Add(-2 * time.Second).Unix()}},
},
initRetry: map[string][]base.Z{
"default": {{Message: t3, Score: time.Now().Add(-500 * time.Millisecond).Unix()}},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {t1},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t3},
"critical": {t2, t4},
},
},
{
initScheduled: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Unix()},
{Message: t3, Score: now.Add(-500 * time.Millisecond).Unix()},
},
"critical": {
{Message: t2, Score: now.Add(-2 * time.Second).Unix()},
},
},
initRetry: map[string][]base.Z{
"default": {},
"critical": {},
},
initPending: map[string][]*base.TaskMessage{
"default": {},
"critical": {t4},
},
wait: pollInterval * 2,
wantScheduled: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantPending: map[string][]*base.TaskMessage{
"default": {t1, t3},
"critical": {t2, t4},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedAllScheduledQueues(t, r, tc.initScheduled) // initialize scheduled queue
h.SeedAllRetryQueues(t, r, tc.initRetry) // initialize retry queue
h.SeedAllPendingQueues(t, r, tc.initPending) // initialize default queue
var wg sync.WaitGroup
s.start(&wg)
time.Sleep(tc.wait)
s.shutdown()
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledMessages(t, r, qname)
if diff := cmp.Diff(want, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.ScheduledKey(qname), diff)
}
}
for qname, want := range tc.wantRetry {
gotRetry := h.GetRetryMessages(t, r, qname)
if diff := cmp.Diff(want, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.RetryKey(qname), diff)
}
}
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running forwarder: (-want, +got)\n%s", base.PendingKey(qname), diff)
}
}
}
}

go.mod

@ -1,14 +1,20 @@
module github.com/hibiken/asynq
go 1.22
require (
github.com/google/go-cmp v0.6.0
github.com/google/uuid v1.6.0
github.com/redis/go-redis/v9 v9.7.0
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cast v1.7.0
go.uber.org/goleak v1.3.0
golang.org/x/sys v0.27.0
golang.org/x/time v0.8.0
google.golang.org/protobuf v1.35.2
)
require (
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
)

go.sum

@ -1,74 +1,42 @@
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s=
golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

healthcheck.go

@ -0,0 +1,80 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// healthchecker is responsible for pinging the broker periodically
// and calling the user-provided HealthCheckFunc with the ping result.
type healthchecker struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long running "healthchecker" goroutine.
done chan struct{}
// interval between healthchecks.
interval time.Duration
// function to call periodically.
healthcheckFunc func(error)
}
type healthcheckerParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
healthcheckFunc func(error)
}
func newHealthChecker(params healthcheckerParams) *healthchecker {
return &healthchecker{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
interval: params.interval,
healthcheckFunc: params.healthcheckFunc,
}
}
func (hc *healthchecker) shutdown() {
if hc.healthcheckFunc == nil {
return
}
hc.logger.Debug("Healthchecker shutting down...")
// Signal the healthchecker goroutine to stop.
hc.done <- struct{}{}
}
func (hc *healthchecker) start(wg *sync.WaitGroup) {
if hc.healthcheckFunc == nil {
return
}
wg.Add(1)
go func() {
defer wg.Done()
timer := time.NewTimer(hc.interval)
for {
select {
case <-hc.done:
hc.logger.Debug("Healthchecker done")
timer.Stop()
return
case <-timer.C:
err := hc.broker.Ping()
hc.healthcheckFunc(err)
timer.Reset(hc.interval)
}
}
}()
}
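On the public side, this component is driven by two fields on asynq.Config. A minimal server wiring (the 15-second interval mirrors what appears to be the package default when left unset; treat the exact value as an assumption):

    package main

    import (
        "log"
        "time"

        "github.com/hibiken/asynq"
    )

    func main() {
        srv := asynq.NewServer(
            asynq.RedisClientOpt{Addr: "127.0.0.1:6379"},
            asynq.Config{
                Concurrency: 10,
                // Called with the result of every periodic broker ping.
                HealthCheckFunc: func(err error) {
                    if err != nil {
                        log.Printf("asynq healthcheck failed: %v", err)
                    }
                },
                HealthCheckInterval: 15 * time.Second, // assumed default when unset
            },
        )
        _ = srv // call srv.Run(yourHandler) in real use
    }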

healthcheck_test.go

@ -0,0 +1,103 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
)
func TestHealthChecker(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
var (
// mu guards called and e variables.
mu sync.Mutex
called int
e error
)
checkFn := func(err error) {
mu.Lock()
defer mu.Unlock()
called++
e = err
}
hc := newHealthChecker(healthcheckerParams{
logger: testLogger,
broker: rdbClient,
interval: 1 * time.Second,
healthcheckFunc: checkFn,
})
hc.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second)
mu.Lock()
if called == 0 {
t.Errorf("Healthchecker did not call the provided HealthCheckFunc")
}
if e != nil {
t.Errorf("HealthCheckFunc was called with non-nil error: %v", e)
}
mu.Unlock()
hc.shutdown()
}
func TestHealthCheckerWhenRedisDown(t *testing.T) {
// Make sure that healthchecker goroutine doesn't panic
// if it cannot connect to redis.
defer func() {
if r := recover(); r != nil {
t.Errorf("panic occurred: %v", r)
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
var (
// mu guards called and e variables.
mu sync.Mutex
called int
e error
)
checkFn := func(err error) {
mu.Lock()
defer mu.Unlock()
called++
e = err
}
hc := newHealthChecker(healthcheckerParams{
logger: testLogger,
broker: testBroker,
interval: 1 * time.Second,
healthcheckFunc: checkFn,
})
testBroker.Sleep()
hc.start(&sync.WaitGroup{})
time.Sleep(2 * time.Second)
mu.Lock()
if called == 0 {
t.Errorf("Healthchecker did not call the provided HealthCheckFunc")
}
if e == nil {
t.Errorf("HealthCheckFunc was called with nil; want non-nil error")
}
mu.Unlock()
hc.shutdown()
}


@ -9,9 +9,10 @@ import (
"sync"
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/rs/xid"
"github.com/hibiken/asynq/internal/timeutil"
)
// heartbeater is responsible for writing process info to redis periodically to
@ -19,6 +20,7 @@ import (
type heartbeater struct {
logger *log.Logger
broker base.Broker
clock timeutil.Clock
// channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{}
@ -38,13 +40,13 @@ type heartbeater struct {
// heartbeater goroutine. In other words, confine these variables
// to this goroutine only.
started time.Time
workers map[string]*workerInfo
// state is shared with other goroutine but is concurrency safe.
state *serverState
// channels to receive updates on active workers.
starting <-chan *workerInfo
finished <-chan *base.TaskMessage
}
@ -55,8 +57,8 @@ type heartbeaterParams struct {
concurrency int
queues map[string]int
strictPriority bool
state *serverState
starting <-chan *workerInfo
finished <-chan *base.TaskMessage
}
@ -69,34 +71,40 @@ func newHeartbeater(params heartbeaterParams) *heartbeater {
return &heartbeater{
logger: params.logger,
broker: params.broker,
clock: timeutil.NewRealClock(),
done: make(chan struct{}),
interval: params.interval,
host: host,
pid: os.Getpid(),
serverID: uuid.New().String(),
concurrency: params.concurrency,
queues: params.queues,
strictPriority: params.strictPriority,
state: params.state,
workers: make(map[string]*workerInfo),
starting: params.starting,
finished: params.finished,
}
}
func (h *heartbeater) shutdown() {
h.logger.Debug("Heartbeater shutting down...")
// Signal the heartbeater goroutine to stop.
h.done <- struct{}{}
}
// A workerInfo holds an active worker information.
type workerInfo struct {
// the task message the worker is processing.
msg *base.TaskMessage
// the time the worker has started processing the message.
started time.Time
// deadline the worker has to finish processing the task by.
deadline time.Time
// lease the worker holds for the task.
lease *base.Lease
}
func (h *heartbeater) start(wg *sync.WaitGroup) {
@ -104,7 +112,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
go func() {
defer wg.Done()
h.started = h.clock.Now()
h.beat()
@ -112,7 +120,9 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
for {
select {
case <-h.done:
if err := h.broker.ClearServerState(h.host, h.pid, h.serverID); err != nil {
h.logger.Errorf("Failed to clear server state: %v", err)
}
h.logger.Debug("Heartbeater done")
timer.Stop()
return
@ -121,17 +131,22 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
h.beat()
timer.Reset(h.interval)
case w := <-h.starting:
h.workers[w.msg.ID] = w
case msg := <-h.finished:
delete(h.workers, msg.ID)
}
}
}()
}
// beat extends lease for workers and writes server/worker info to redis.
func (h *heartbeater) beat() {
h.state.mu.Lock()
srvStatus := h.state.value.String()
h.state.mu.Unlock()
info := base.ServerInfo{
Host: h.host,
PID: h.pid,
@ -139,27 +154,49 @@ func (h *heartbeater) beat() {
Concurrency: h.concurrency,
Queues: h.queues,
StrictPriority: h.strictPriority,
Status: srvStatus,
Started: h.started,
ActiveWorkerCount: len(h.workers),
}
var ws []*base.WorkerInfo
idsByQueue := make(map[string][]string)
for id, w := range h.workers {
ws = append(ws, &base.WorkerInfo{
Host: h.host,
PID: h.pid,
ServerID: h.serverID,
ID: id,
Type: w.msg.Type,
Queue: w.msg.Queue,
Payload: w.msg.Payload,
Started: w.started,
Deadline: w.deadline,
})
// Check lease before adding to the set to make sure not to extend the lease if the lease is already expired.
if w.lease.IsValid() {
idsByQueue[w.msg.Queue] = append(idsByQueue[w.msg.Queue], id)
} else {
w.lease.NotifyExpiration() // notify processor if the lease is expired
}
}
// Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed.
if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil {
h.logger.Errorf("could not write server state data: %v", err)
h.logger.Errorf("Failed to write server state data: %v", err)
}
for qname, ids := range idsByQueue {
expirationTime, err := h.broker.ExtendLease(qname, ids...)
if err != nil {
h.logger.Errorf("Failed to extend lease for tasks %v: %v", ids, err)
continue
}
for _, id := range ids {
if l := h.workers[id].lease; !l.Reset(expirationTime) {
h.logger.Warnf("Lease reset failed for %s; lease deadline: %v", id, l.Deadline())
}
}
}
}
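The check-and-extend pattern in beat() can be illustrated in isolation. The Lease type below is a simplified, hypothetical stand-in for the internal base.Lease; only the IsValid/Reset/NotifyExpiration interplay used above is reproduced:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // Lease is a simplified stand-in for asynq's internal base.Lease: a
    // mutable expiration time that the heartbeater extends and the worker
    // watches.
    type Lease struct {
        mu       sync.Mutex
        deadline time.Time
        expired  chan struct{}
    }

    func NewLease(d time.Time) *Lease {
        return &Lease{deadline: d, expired: make(chan struct{}, 1)}
    }

    func (l *Lease) Deadline() time.Time {
        l.mu.Lock()
        defer l.mu.Unlock()
        return l.deadline
    }

    // IsValid reports whether the lease deadline is still in the future.
    func (l *Lease) IsValid() bool { return time.Now().Before(l.Deadline()) }

    // Reset extends the lease, failing if it has already expired.
    func (l *Lease) Reset(t time.Time) bool {
        if !l.IsValid() {
            return false
        }
        l.mu.Lock()
        l.deadline = t
        l.mu.Unlock()
        return true
    }

    // NotifyExpiration signals (without blocking) that the lease lapsed.
    func (l *Lease) NotifyExpiration() {
        select {
        case l.expired <- struct{}{}:
        default:
        }
    }

    func main() {
        lease := NewLease(time.Now().Add(30 * time.Second))

        // Heartbeater side, mirroring the branch in beat(): extend only
        // leases that are still valid, notify the worker otherwise.
        if lease.IsValid() {
            if !lease.Reset(time.Now().Add(30 * time.Second)) {
                fmt.Println("lease reset failed")
            }
        } else {
            lease.NotifyExpiration()
        }
        fmt.Println("lease deadline:", lease.Deadline())
    }

Extending only leases that are still valid keeps a late heartbeat tick from silently resurrecting a task whose lease the rest of the system may already have treated as expired.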


@ -5,30 +5,154 @@
package asynq
import (
"context"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
h "github.com/hibiken/asynq/internal/testutil"
"github.com/hibiken/asynq/internal/timeutil"
)
// Test goes through a few phases.
//
// Phase1: Simulate Server startup; Simulate starting tasks listed in startedWorkers
// Phase2: Simulate finishing tasks listed in finishedTasks
// Phase3: Simulate Server shutdown;
func TestHeartbeater(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
now := time.Now()
const elapsedTime = 10 * time.Second // simulated time elapsed between phase1 and phase2
clock := timeutil.NewSimulatedClock(time.Time{}) // time will be set in each test
t1 := h.NewTaskMessageWithQueue("task1", nil, "default")
t2 := h.NewTaskMessageWithQueue("task2", nil, "default")
t3 := h.NewTaskMessageWithQueue("task3", nil, "default")
t4 := h.NewTaskMessageWithQueue("task4", nil, "custom")
t5 := h.NewTaskMessageWithQueue("task5", nil, "custom")
t6 := h.NewTaskMessageWithQueue("task6", nil, "default")
// Note: intentionally set to time less than now.Add(rdb.LeaseDuration) to test lease extension is working.
lease1 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease2 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease3 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease4 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease5 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease6 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
tests := []struct {
desc string
// Interval between heartbeats.
interval time.Duration
// Server info.
host string
pid int
queues map[string]int
concurrency int
active map[string][]*base.TaskMessage // initial active set state
lease map[string][]base.Z // initial lease set state
wantLease1 map[string][]base.Z // expected lease set state after starting all startedWorkers
wantLease2 map[string][]base.Z // expected lease set state after finishing all finishedTasks
startedWorkers []*workerInfo // workerInfo to send via the started channel
finishedTasks []*base.TaskMessage // tasks to send via the finished channel
startTime time.Time // simulated start time
elapsedTime time.Duration // simulated time elapsed between starting and finishing processing tasks
}{
{time.Second, "localhost", 45678, map[string]int{"default": 1}, 10},
{
desc: "With single queue",
interval: 2 * time.Second,
host: "localhost",
pid: 45678,
queues: map[string]int{"default": 1},
concurrency: 10,
active: map[string][]*base.TaskMessage{
"default": {t1, t2, t3},
},
lease: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(10 * time.Second).Unix()},
{Message: t2, Score: now.Add(10 * time.Second).Unix()},
{Message: t3, Score: now.Add(10 * time.Second).Unix()},
},
},
startedWorkers: []*workerInfo{
{msg: t1, started: now, deadline: now.Add(2 * time.Minute), lease: lease1},
{msg: t2, started: now, deadline: now.Add(2 * time.Minute), lease: lease2},
{msg: t3, started: now, deadline: now.Add(2 * time.Minute), lease: lease3},
},
finishedTasks: []*base.TaskMessage{t1, t2},
wantLease1: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t2, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t3, Score: now.Add(rdb.LeaseDuration).Unix()},
},
},
wantLease2: map[string][]base.Z{
"default": {
{Message: t3, Score: now.Add(elapsedTime).Add(rdb.LeaseDuration).Unix()},
},
},
startTime: now,
elapsedTime: elapsedTime,
},
{
desc: "With multiple queue",
interval: 2 * time.Second,
host: "localhost",
pid: 45678,
queues: map[string]int{"default": 1, "custom": 2},
concurrency: 10,
active: map[string][]*base.TaskMessage{
"default": {t6},
"custom": {t4, t5},
},
lease: map[string][]base.Z{
"default": {
{Message: t6, Score: now.Add(10 * time.Second).Unix()},
},
"custom": {
{Message: t4, Score: now.Add(10 * time.Second).Unix()},
{Message: t5, Score: now.Add(10 * time.Second).Unix()},
},
},
startedWorkers: []*workerInfo{
{msg: t6, started: now, deadline: now.Add(2 * time.Minute), lease: lease6},
{msg: t4, started: now, deadline: now.Add(2 * time.Minute), lease: lease4},
{msg: t5, started: now, deadline: now.Add(2 * time.Minute), lease: lease5},
},
finishedTasks: []*base.TaskMessage{t6, t5},
wantLease1: map[string][]base.Z{
"default": {
{Message: t6, Score: now.Add(rdb.LeaseDuration).Unix()},
},
"custom": {
{Message: t4, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t5, Score: now.Add(rdb.LeaseDuration).Unix()},
},
},
wantLease2: map[string][]base.Z{
"default": {},
"custom": {
{Message: t4, Score: now.Add(elapsedTime).Add(rdb.LeaseDuration).Unix()},
},
},
startTime: now,
elapsedTime: elapsedTime,
},
}
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
@ -36,8 +160,15 @@ func TestHeartbeater(t *testing.T) {
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedAllActiveQueues(t, r, tc.active)
h.SeedAllLease(t, r, tc.lease)
clock.SetTime(tc.startTime)
rdbClient.SetClock(clock)
srvState := &serverState{}
startingCh := make(chan *workerInfo)
finishedCh := make(chan *base.TaskMessage)
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: rdbClient,
@ -45,77 +176,139 @@ func TestHeartbeater(t *testing.T) {
concurrency: tc.concurrency,
queues: tc.queues,
strictPriority: false,
state: srvState,
starting: startingCh,
finished: finishedCh,
})
hb.clock = clock
// Change host and pid fields for testing purpose.
hb.host = tc.host
hb.pid = tc.pid
//===================
// Start Phase1
//===================
srvState.mu.Lock()
srvState.value = srvStateActive // simulating Server.Start
srvState.mu.Unlock()
var wg sync.WaitGroup
hb.start(&wg)
// Simulate processor starting to work on tasks.
for _, w := range tc.startedWorkers {
startingCh <- w
}
// Wait for heartbeater to write to redis
time.Sleep(tc.interval * 2)
ss, err := rdbClient.ListServers()
if err != nil {
t.Errorf("could not read server info from redis: %v", err)
hb.terminate()
t.Errorf("%s: could not read server info from redis: %v", tc.desc, err)
hb.shutdown()
continue
}
if len(ss) != 1 {
t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss))
hb.terminate()
t.Errorf("%s: (*RDB).ListServers returned %d server info, want 1", tc.desc, len(ss))
hb.shutdown()
continue
}
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
wantInfo := &base.ServerInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: now,
Status: "active",
ActiveWorkerCount: len(tc.startedWorkers),
}
if diff := cmp.Diff(wantInfo, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("%s: redis stored server status %+v, want %+v; (-want, +got)\n%s", tc.desc, ss[0], wantInfo, diff)
hb.shutdown()
continue
}
for qname, wantLease := range tc.wantLease1 {
gotLease := h.GetLeaseEntries(t, r, qname)
if diff := cmp.Diff(wantLease, gotLease, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q: (-want,+got):\n%s", tc.desc, base.LeaseKey(qname), diff)
}
}
for _, w := range tc.startedWorkers {
if want := now.Add(rdb.LeaseDuration); w.lease.Deadline() != want {
t.Errorf("%s: lease deadline for %v is set to %v, want %v", tc.desc, w.msg, w.lease.Deadline(), want)
}
}
//===================
// Start Phase2
//===================
clock.AdvanceTime(tc.elapsedTime)
// Simulate processor finished processing tasks.
for _, msg := range tc.finishedTasks {
if err := rdbClient.Done(context.Background(), msg); err != nil {
t.Fatalf("RDB.Done failed: %v", err)
}
finishedCh <- msg
}
// Wait for heartbeater to write to redis
time.Sleep(tc.interval * 2)
want.Status = "stopped"
for qname, wantLease := range tc.wantLease2 {
gotLease := h.GetLeaseEntries(t, r, qname)
if diff := cmp.Diff(wantLease, gotLease, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q: (-want,+got):\n%s", tc.desc, base.LeaseKey(qname), diff)
}
}
//===================
// Start Phase3
//===================
// Server state change; simulating Server.Shutdown
srvState.mu.Lock()
srvState.value = srvStateClosed
srvState.mu.Unlock()
// Wait for heartbeater to write to redis
time.Sleep(tc.interval * 2)
wantInfo = &base.ServerInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: now,
Status: "closed",
ActiveWorkerCount: len(tc.startedWorkers) - len(tc.finishedTasks),
}
ss, err = rdbClient.ListServers()
if err != nil {
t.Errorf("could not read process status from redis: %v", err)
hb.terminate()
t.Errorf("%s: could not read server status from redis: %v", tc.desc, err)
hb.shutdown()
continue
}
if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss))
hb.terminate()
t.Errorf("%s: (*RDB).ListServers returned %d server info, want 1", tc.desc, len(ss))
hb.shutdown()
continue
}
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff)
hb.terminate()
if diff := cmp.Diff(wantInfo, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("%s: redis stored process status %+v, want %+v; (-want, +got)\n%s", tc.desc, ss[0], wantInfo, diff)
hb.shutdown()
continue
}
hb.shutdown()
}
}
@ -128,7 +321,9 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
state := &serverState{value: srvStateActive}
hb := newHeartbeater(heartbeaterParams{
logger: testLogger,
broker: testBroker,
@ -136,8 +331,8 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
concurrency: 10,
queues: map[string]int{"default": 1},
strictPriority: false,
state: state,
starting: make(chan *workerInfo),
finished: make(chan *base.TaskMessage),
})
@ -148,5 +343,5 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
// wait for heartbeater to try writing data to redis
time.Sleep(2 * time.Second)
hb.shutdown()
}

inspector.go
File diff suppressed because it is too large

inspector_test.go
File diff suppressed because it is too large


@ -1,280 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package asynqtest defines test helpers for asynq and its internal packages.
package asynqtest
import (
"encoding/json"
"sort"
"testing"
"github.com/go-redis/redis/v7"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/hibiken/asynq/internal/base"
"github.com/rs/xid"
)
// ZSetEntry is an entry in redis sorted set.
type ZSetEntry struct {
Msg *base.TaskMessage
Score float64
}
// SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slice of task messages.
var SortMsgOpt = cmp.Transformer("SortTaskMessages", func(in []*base.TaskMessage) []*base.TaskMessage {
out := append([]*base.TaskMessage(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID.String() < out[j].ID.String()
})
return out
})
// SortZSetEntryOpt is a cmp.Option to sort ZSetEntry for comparing slice of zset entries.
var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []ZSetEntry) []ZSetEntry {
out := append([]ZSetEntry(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Msg.ID.String() < out[j].Msg.ID.String()
})
return out
})
// SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slice of process info.
var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host
}
return out[i].PID < out[j].PID
})
return out
})
// SortWorkerInfoOpt is a cmp.Option to sort base.WorkerInfo for comparing slice of worker info.
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID < out[j].ID
})
return out
})
// SortStringSliceOpt is a cmp.Option to sort string slice.
var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string {
out := append([]string(nil), in...)
sort.Strings(out)
return out
})
// IgnoreIDOpt is a cmp.Option to ignore the ID field in task messages when comparing.
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload map[string]interface{}) *base.TaskMessage {
return &base.TaskMessage{
ID: xid.New(),
Type: taskType,
Queue: base.DefaultQueueName,
Retry: 25,
Payload: payload,
}
}
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload map[string]interface{}, qname string) *base.TaskMessage {
return &base.TaskMessage{
ID: xid.New(),
Type: taskType,
Queue: qname,
Retry: 25,
Payload: payload,
}
}
// MustMarshal marshals given task message and returns a json string.
// Calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
tb.Helper()
data, err := json.Marshal(msg)
if err != nil {
tb.Fatal(err)
}
return string(data)
}
// MustUnmarshal unmarshals given string into task message struct.
// Calling test will fail if unmarshaling errors out.
func MustUnmarshal(tb testing.TB, data string) *base.TaskMessage {
tb.Helper()
var msg base.TaskMessage
err := json.Unmarshal([]byte(data), &msg)
if err != nil {
tb.Fatal(err)
}
return &msg
}
// MustMarshalSlice marshals a slice of task messages and return a slice of
// json strings. Calling test will fail if marshaling errors out.
func MustMarshalSlice(tb testing.TB, msgs []*base.TaskMessage) []string {
tb.Helper()
var data []string
for _, m := range msgs {
data = append(data, MustMarshal(tb, m))
}
return data
}
// MustUnmarshalSlice unmarshals a slice of strings into a slice of task message structs.
// Calling test will fail if marshaling errors out.
func MustUnmarshalSlice(tb testing.TB, data []string) []*base.TaskMessage {
tb.Helper()
var msgs []*base.TaskMessage
for _, s := range data {
msgs = append(msgs, MustUnmarshal(tb, s))
}
return msgs
}
// FlushDB deletes all the keys of the currently selected DB.
func FlushDB(tb testing.TB, r *redis.Client) {
tb.Helper()
if err := r.FlushDB().Err(); err != nil {
tb.Fatal(err)
}
}
// SeedEnqueuedQueue initializes the specified queue with the given messages.
//
// If queue name option is not passed, it defaults to the default queue.
func SeedEnqueuedQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage, queueOpt ...string) {
tb.Helper()
queue := base.DefaultQueue
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
r.SAdd(base.AllQueues, queue)
seedRedisList(tb, r, queue, msgs)
}
// SeedInProgressQueue initializes the in-progress queue with the given messages.
func SeedInProgressQueue(tb testing.TB, r *redis.Client, msgs []*base.TaskMessage) {
tb.Helper()
seedRedisList(tb, r, base.InProgressQueue, msgs)
}
// SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
tb.Helper()
seedRedisZSet(tb, r, base.ScheduledQueue, entries)
}
// SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
tb.Helper()
seedRedisZSet(tb, r, base.RetryQueue, entries)
}
// SeedDeadQueue initializes the dead queue with the given messages.
func SeedDeadQueue(tb testing.TB, r *redis.Client, entries []ZSetEntry) {
tb.Helper()
seedRedisZSet(tb, r, base.DeadQueue, entries)
}
func seedRedisList(tb testing.TB, c *redis.Client, key string, msgs []*base.TaskMessage) {
data := MustMarshalSlice(tb, msgs)
for _, s := range data {
if err := c.LPush(key, s).Err(); err != nil {
tb.Fatal(err)
}
}
}
func seedRedisZSet(tb testing.TB, c *redis.Client, key string, items []ZSetEntry) {
for _, item := range items {
z := &redis.Z{Member: MustMarshal(tb, item.Msg), Score: float64(item.Score)}
if err := c.ZAdd(key, z).Err(); err != nil {
tb.Fatal(err)
}
}
}
// GetEnqueuedMessages returns all task messages in the specified queue.
//
// If queue name option is not passed, it defaults to the default queue.
func GetEnqueuedMessages(tb testing.TB, r *redis.Client, queueOpt ...string) []*base.TaskMessage {
tb.Helper()
queue := base.DefaultQueue
if len(queueOpt) > 0 {
queue = base.QueueKey(queueOpt[0])
}
return getListMessages(tb, r, queue)
}
// GetInProgressMessages returns all task messages in the in-progress queue.
func GetInProgressMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage {
tb.Helper()
return getListMessages(tb, r, base.InProgressQueue)
}
// GetScheduledMessages returns all task messages in the scheduled queue.
func GetScheduledMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.ScheduledQueue)
}
// GetRetryMessages returns all task messages in the retry queue.
func GetRetryMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.RetryQueue)
}
// GetDeadMessages returns all task messages in the dead queue.
func GetDeadMessages(tb testing.TB, r *redis.Client) []*base.TaskMessage {
tb.Helper()
return getZSetMessages(tb, r, base.DeadQueue)
}
// GetScheduledEntries returns all task messages with their scores in the scheduled queue.
func GetScheduledEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
tb.Helper()
return getZSetEntries(tb, r, base.ScheduledQueue)
}
// GetRetryEntries returns all task messages and its score in the retry queue.
func GetRetryEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
tb.Helper()
return getZSetEntries(tb, r, base.RetryQueue)
}
// GetDeadEntries returns all task messages and their scores in the dead queue.
func GetDeadEntries(tb testing.TB, r *redis.Client) []ZSetEntry {
tb.Helper()
return getZSetEntries(tb, r, base.DeadQueue)
}
func getListMessages(tb testing.TB, r *redis.Client, list string) []*base.TaskMessage {
data := r.LRange(list, 0, -1).Val()
return MustUnmarshalSlice(tb, data)
}
func getZSetMessages(tb testing.TB, r *redis.Client, zset string) []*base.TaskMessage {
data := r.ZRange(zset, 0, -1).Val()
return MustUnmarshalSlice(tb, data)
}
func getZSetEntries(tb testing.TB, r *redis.Client, zset string) []ZSetEntry {
data := r.ZRangeWithScores(zset, 0, -1).Val()
var entries []ZSetEntry
for _, z := range data {
entries = append(entries, ZSetEntry{
Msg: MustUnmarshal(tb, z.Member.(string)),
Score: z.Score,
})
}
return entries
}
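// A sketch of how these helpers combine in practice: seed a queue, run the
// code under test, then assert with the matching getter. This is illustrative
// only (not part of the diff) and assumes a disposable local Redis DB plus the
// go-redis v7, xid, and base packages already used in this file; the task
// fields below are invented.
func TestSeedAndInspect(t *testing.T) {
    r := redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 14})
    FlushDB(t, r) // start from a clean DB

    msgs := []*base.TaskMessage{
        {ID: xid.New(), Type: "email:send", Queue: "default", Retry: 25},
    }
    SeedEnqueuedQueue(t, r, msgs) // no queue option: seeds the default queue

    if got := GetEnqueuedMessages(t, r); len(got) != 1 {
        t.Errorf("GetEnqueuedMessages returned %d messages, want 1", len(got))
    }
}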


@@ -7,61 +7,228 @@ package base
import (
"context"
"encoding/json"
"crypto/md5"
"encoding/hex"
"fmt"
"strings"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/rs/xid"
"github.com/hibiken/asynq/internal/errors"
pb "github.com/hibiken/asynq/internal/proto"
"github.com/hibiken/asynq/internal/timeutil"
"github.com/redis/go-redis/v9"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
)
// Version of asynq library and CLI.
const Version = "0.25.1"
// DefaultQueueName is the queue name used if none is specified by the user.
const DefaultQueueName = "default"
// Redis keys
// DefaultQueue is the redis key for the default queue.
var DefaultQueue = PendingKey(DefaultQueueName)
// Global Redis keys.
const (
AllServers = "asynq:servers" // ZSET
serversPrefix = "asynq:servers:" // STRING - asynq:ps:<host>:<pid>:<serverid>
AllWorkers = "asynq:workers" // ZSET
workersPrefix = "asynq:workers:" // HASH - asynq:workers:<host:<pid>:<serverid>
processedPrefix = "asynq:processed:" // STRING - asynq:processed:<yyyy-mm-dd>
failurePrefix = "asynq:failure:" // STRING - asynq:failure:<yyyy-mm-dd>
QueuePrefix = "asynq:queues:" // LIST - asynq:queues:<qname>
AllQueues = "asynq:queues" // SET
DefaultQueue = QueuePrefix + DefaultQueueName // LIST
ScheduledQueue = "asynq:scheduled" // ZSET
RetryQueue = "asynq:retry" // ZSET
DeadQueue = "asynq:dead" // ZSET
InProgressQueue = "asynq:in_progress" // LIST
PausedQueues = "asynq:paused" // SET
CancelChannel = "asynq:cancel" // PubSub channel
AllServers = "asynq:servers" // ZSET
AllWorkers = "asynq:workers" // ZSET
AllSchedulers = "asynq:schedulers" // ZSET
AllQueues = "asynq:queues" // SET
CancelChannel = "asynq:cancel" // PubSub channel
)
// QueueKey returns a redis key for the given queue name.
func QueueKey(qname string) string {
return QueuePrefix + strings.ToLower(qname)
}
// TaskState denotes the state of a task.
type TaskState int
const (
TaskStateActive TaskState = iota + 1
TaskStatePending
TaskStateScheduled
TaskStateRetry
TaskStateArchived
TaskStateCompleted
TaskStateAggregating // describes a state where the task is waiting in a group to be aggregated
)
func (s TaskState) String() string {
switch s {
case TaskStateActive:
return "active"
case TaskStatePending:
return "pending"
case TaskStateScheduled:
return "scheduled"
case TaskStateRetry:
return "retry"
case TaskStateArchived:
return "archived"
case TaskStateCompleted:
return "completed"
case TaskStateAggregating:
return "aggregating"
}
panic(fmt.Sprintf("internal error: unknown task state %d", s))
}
// ProcessedKey returns a redis key for processed count for the given day.
func ProcessedKey(t time.Time) string {
return processedPrefix + t.UTC().Format("2006-01-02")
}
func TaskStateFromString(s string) (TaskState, error) {
switch s {
case "active":
return TaskStateActive, nil
case "pending":
return TaskStatePending, nil
case "scheduled":
return TaskStateScheduled, nil
case "retry":
return TaskStateRetry, nil
case "archived":
return TaskStateArchived, nil
case "completed":
return TaskStateCompleted, nil
case "aggregating":
return TaskStateAggregating, nil
}
return 0, errors.E(errors.FailedPrecondition, fmt.Sprintf("%q is not a supported task state", s))
}
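// For illustration, String and TaskStateFromString are inverses over the
// supported states; a round-trip sketch (not part of the diff; fmt is already
// imported, and Println uses the Stringer implementation above):
func exampleTaskStateRoundTrip() {
    st, err := TaskStateFromString(TaskStateScheduled.String())
    fmt.Println(st, err) // prints: scheduled <nil>
}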
// FailureKey returns a redis key for failure count for the given day.
func FailureKey(t time.Time) string {
return failurePrefix + t.UTC().Format("2006-01-02")
}
// ValidateQueueName validates a given qname to be used as a queue name.
// Returns nil if valid, otherwise returns non-nil error.
func ValidateQueueName(qname string) error {
if len(strings.TrimSpace(qname)) == 0 {
return fmt.Errorf("queue name must contain one or more characters")
}
return nil
}
// QueueKeyPrefix returns a prefix for all keys in the given queue.
func QueueKeyPrefix(qname string) string {
return "asynq:{" + qname + "}:"
}
// TaskKeyPrefix returns a prefix for task key.
func TaskKeyPrefix(qname string) string {
return QueueKeyPrefix(qname) + "t:"
}
// TaskKey returns a redis key for the given task message.
func TaskKey(qname, id string) string {
return TaskKeyPrefix(qname) + id
}
// PendingKey returns a redis key for the given queue name.
func PendingKey(qname string) string {
return QueueKeyPrefix(qname) + "pending"
}
// ActiveKey returns a redis key for the active tasks.
func ActiveKey(qname string) string {
return QueueKeyPrefix(qname) + "active"
}
// ScheduledKey returns a redis key for the scheduled tasks.
func ScheduledKey(qname string) string {
return QueueKeyPrefix(qname) + "scheduled"
}
// RetryKey returns a redis key for the retry tasks.
func RetryKey(qname string) string {
return QueueKeyPrefix(qname) + "retry"
}
// ArchivedKey returns a redis key for the archived tasks.
func ArchivedKey(qname string) string {
return QueueKeyPrefix(qname) + "archived"
}
// LeaseKey returns a redis key for the lease.
func LeaseKey(qname string) string {
return QueueKeyPrefix(qname) + "lease"
}
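// CompletedKey returns a redis key for the completed tasks.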
func CompletedKey(qname string) string {
return QueueKeyPrefix(qname) + "completed"
}
// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
return QueueKeyPrefix(qname) + "paused"
}
// ProcessedTotalKey returns a redis key for total processed count for the given queue.
func ProcessedTotalKey(qname string) string {
return QueueKeyPrefix(qname) + "processed"
}
// FailedTotalKey returns a redis key for total failure count for the given queue.
func FailedTotalKey(qname string) string {
return QueueKeyPrefix(qname) + "failed"
}
// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
return QueueKeyPrefix(qname) + "processed:" + t.UTC().Format("2006-01-02")
}
// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
return QueueKeyPrefix(qname) + "failed:" + t.UTC().Format("2006-01-02")
}
// ServerInfoKey returns a redis key for process info.
func ServerInfoKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d:%s", serversPrefix, hostname, pid, sid)
}
func ServerInfoKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("asynq:servers:{%s:%d:%s}", hostname, pid, serverID)
}
// WorkersKey returns a redis key for the workers given hostname, pid, and server ID.
func WorkersKey(hostname string, pid int, sid string) string {
return fmt.Sprintf("%s%s:%d:%s", workersPrefix, hostname, pid, sid)
}
func WorkersKey(hostname string, pid int, serverID string) string {
return fmt.Sprintf("asynq:workers:{%s:%d:%s}", hostname, pid, serverID)
}
// SchedulerEntriesKey returns a redis key for the scheduler entries given scheduler ID.
func SchedulerEntriesKey(schedulerID string) string {
return "asynq:schedulers:{" + schedulerID + "}"
}
// SchedulerHistoryKey returns a redis key for the scheduler's history for the given entry.
func SchedulerHistoryKey(entryID string) string {
return "asynq:scheduler_history:" + entryID
}
// UniqueKey returns a redis key with the given type, payload, and queue name.
func UniqueKey(qname, tasktype string, payload []byte) string {
if payload == nil {
return QueueKeyPrefix(qname) + "unique:" + tasktype + ":"
}
checksum := md5.Sum(payload)
return QueueKeyPrefix(qname) + "unique:" + tasktype + ":" + hex.EncodeToString(checksum[:])
}
// GroupKeyPrefix returns a prefix for group key.
func GroupKeyPrefix(qname string) string {
return QueueKeyPrefix(qname) + "g:"
}
// GroupKey returns a redis key used to group tasks that belong to the same group.
func GroupKey(qname, gkey string) string {
return GroupKeyPrefix(qname) + gkey
}
// AggregationSetKey returns a redis key used for an aggregation set.
func AggregationSetKey(qname, gname, setID string) string {
return GroupKey(qname, gname) + ":" + setID
}
// AllGroups returns a redis key used to store all group keys used in a given queue.
func AllGroups(qname string) string {
return QueueKeyPrefix(qname) + "groups"
}
// AllAggregationSets returns a redis key used to store all aggregation sets (set of tasks staged to be aggregated)
// in a given queue.
func AllAggregationSets(qname string) string {
return QueueKeyPrefix(qname) + "aggregation_sets"
}
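// Note the "{qname}" segment shared by every per-queue key above: in Redis
// Cluster the braces form a hash tag, so all keys for one queue hash to the
// same slot and can be manipulated together from a single Lua script or
// transaction. A sketch of the resulting layout (illustrative only; this
// package is internal, so the calls below only compile inside the module):
func printKeyLayout() {
    fmt.Println(PendingKey("critical"))                   // asynq:{critical}:pending
    fmt.Println(TaskKey("critical", "task123"))           // asynq:{critical}:t:task123
    fmt.Println(GroupKey("critical", "notify"))           // asynq:{critical}:g:notify
    fmt.Println(UniqueKey("critical", "email:send", nil)) // asynq:{critical}:unique:email:send:
}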
// TaskMessage is the internal representation of a task with additional metadata fields.
@@ -71,10 +238,10 @@ type TaskMessage struct {
Type string
// Payload holds data needed to process the task.
Payload map[string]interface{}
Payload []byte
// ID is a unique identifier for each task.
ID xid.ID
ID string
// Queue is a name this message should be enqueued to.
Queue string
@@ -88,102 +255,106 @@ type TaskMessage struct {
// ErrorMsg holds the error message from the last failure.
ErrorMsg string
// Timeout specifies how long a task may run.
// The string value should be compatible with time.ParseDuration.
//
// Zero means no limit.
Timeout string
// Deadline specifies the deadline for the task.
// Task won't be processed if it exceeded its deadline.
// The string should be in RFC3339 format.
//
// time.Time's zero value means no deadline.
Deadline string
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
//
// Use zero to indicate no last failure.
LastFailedAt int64
// Timeout specifies timeout in seconds.
// If task processing doesn't complete within the timeout, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
//
// Use zero to indicate no timeout.
Timeout int64
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// If task processing doesn't complete before the deadline, the task will be retried
// if retry count is remaining. Otherwise it will be moved to the archive.
//
// Use zero to indicate no deadline.
Deadline int64
// UniqueKey holds the redis key used for uniqueness lock for this task.
//
// Empty string indicates that no uniqueness lock was used.
UniqueKey string
// GroupKey holds the group key used for task aggregation.
//
// Empty string indicates no aggregation is used for this task.
GroupKey string
// Retention specifies the number of seconds the task should be retained after completion.
Retention int64
// CompletedAt is the time the task was processed successfully in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
//
// Use zero to indicate no value.
CompletedAt int64
}
// EncodeMessage marshals the given task message in JSON and returns an encoded string.
func EncodeMessage(msg *TaskMessage) (string, error) {
b, err := json.Marshal(msg)
if err != nil {
return "", err
}
return string(b), nil
}
// EncodeMessage marshals the given task message and returns the encoded bytes.
func EncodeMessage(msg *TaskMessage) ([]byte, error) {
if msg == nil {
return nil, fmt.Errorf("cannot encode nil message")
}
return proto.Marshal(&pb.TaskMessage{
Type: msg.Type,
Payload: msg.Payload,
Id: msg.ID,
Queue: msg.Queue,
Retry: int32(msg.Retry),
Retried: int32(msg.Retried),
ErrorMsg: msg.ErrorMsg,
LastFailedAt: msg.LastFailedAt,
Timeout: msg.Timeout,
Deadline: msg.Deadline,
UniqueKey: msg.UniqueKey,
GroupKey: msg.GroupKey,
Retention: msg.Retention,
CompletedAt: msg.CompletedAt,
})
}
// DecodeMessage unmarshals the given encoded string and returns a decoded task message.
func DecodeMessage(s string) (*TaskMessage, error) {
d := json.NewDecoder(strings.NewReader(s))
d.UseNumber()
var msg TaskMessage
if err := d.Decode(&msg); err != nil {
return nil, err
}
return &msg, nil
}
// DecodeMessage unmarshals the given bytes and returns a decoded task message.
func DecodeMessage(data []byte) (*TaskMessage, error) {
var pbmsg pb.TaskMessage
if err := proto.Unmarshal(data, &pbmsg); err != nil {
return nil, err
}
return &TaskMessage{
Type: pbmsg.GetType(),
Payload: pbmsg.GetPayload(),
ID: pbmsg.GetId(),
Queue: pbmsg.GetQueue(),
Retry: int(pbmsg.GetRetry()),
Retried: int(pbmsg.GetRetried()),
ErrorMsg: pbmsg.GetErrorMsg(),
LastFailedAt: pbmsg.GetLastFailedAt(),
Timeout: pbmsg.GetTimeout(),
Deadline: pbmsg.GetDeadline(),
UniqueKey: pbmsg.GetUniqueKey(),
GroupKey: pbmsg.GetGroupKey(),
Retention: pbmsg.GetRetention(),
CompletedAt: pbmsg.GetCompletedAt(),
}, nil
}
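// EncodeMessage and DecodeMessage are inverses; a round-trip sketch with
// invented field values (illustrative, not part of the diff):
func exampleMessageRoundTrip() error {
    msg := &TaskMessage{
        ID:      "26b8c5a2-example-id", // any unique string; value invented
        Type:    "email:send",
        Payload: []byte(`{"to":"user@example.com"}`),
        Queue:   "default",
        Retry:   25,
        Timeout: 1800, // seconds
    }
    encoded, err := EncodeMessage(msg) // protobuf bytes, not JSON
    if err != nil {
        return err
    }
    decoded, err := DecodeMessage(encoded)
    if err != nil {
        return err
    }
    _ = decoded // decoded now equals msg field-for-field
    return nil
}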
// ServerStatus represents status of a server.
// ServerStatus methods are concurrency safe.
type ServerStatus struct {
mu sync.Mutex
val ServerStatusValue
}
// TaskInfo describes a task message and its metadata.
type TaskInfo struct {
Message *TaskMessage
State TaskState
NextProcessAt time.Time
Result []byte
}
// NewServerStatus returns a new status instance given an initial value.
func NewServerStatus(v ServerStatusValue) *ServerStatus {
return &ServerStatus{val: v}
}
type ServerStatusValue int
const (
// StatusIdle indicates the server is in idle state.
StatusIdle ServerStatusValue = iota
// StatusRunning indicates the server is up and processing tasks.
StatusRunning
// StatusQuiet indicates the server is up but not processing new tasks.
StatusQuiet
// StatusStopped indicates the server has been stopped.
StatusStopped
)
var statuses = []string{
"idle",
"running",
"quiet",
"stopped",
}
func (s *ServerStatus) String() string {
s.mu.Lock()
defer s.mu.Unlock()
if StatusIdle <= s.val && s.val <= StatusStopped {
return statuses[s.val]
}
return "unknown status"
}
// Get returns the status value.
func (s *ServerStatus) Get() ServerStatusValue {
s.mu.Lock()
v := s.val
s.mu.Unlock()
return v
}
// Set sets the status value.
func (s *ServerStatus) Set(v ServerStatusValue) {
s.mu.Lock()
s.val = v
s.mu.Unlock()
}
// Z represents a sorted set member.
type Z struct {
Message *TaskMessage
Score int64
}
// ServerInfo holds information about a running server.
@@ -199,20 +370,214 @@ type ServerInfo struct {
ActiveWorkerCount int
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
Host string
PID int
ID string
Type string
Queue string
Payload map[string]interface{}
Started time.Time
}
// EncodeServerInfo marshals the given ServerInfo and returns the encoded bytes.
func EncodeServerInfo(info *ServerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil server info")
}
queues := make(map[string]int32, len(info.Queues))
for q, p := range info.Queues {
queues[q] = int32(p)
}
started := timestamppb.New(info.Started)
return proto.Marshal(&pb.ServerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
Concurrency: int32(info.Concurrency),
Queues: queues,
StrictPriority: info.StrictPriority,
Status: info.Status,
StartTime: started,
ActiveWorkerCount: int32(info.ActiveWorkerCount),
})
}
// Cancelations is a collection that holds cancel functions for all in-progress tasks.
// DecodeServerInfo decodes the given bytes into ServerInfo.
func DecodeServerInfo(b []byte) (*ServerInfo, error) {
var pbmsg pb.ServerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
queues := make(map[string]int, len(pbmsg.GetQueues()))
for q, p := range pbmsg.GetQueues() {
queues[q] = int(p)
}
startTime := pbmsg.GetStartTime()
return &ServerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
Concurrency: int(pbmsg.GetConcurrency()),
Queues: queues,
StrictPriority: pbmsg.GetStrictPriority(),
Status: pbmsg.GetStatus(),
Started: startTime.AsTime(),
ActiveWorkerCount: int(pbmsg.GetActiveWorkerCount()),
}, nil
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
Host string
PID int
ServerID string
ID string
Type string
Payload []byte
Queue string
Started time.Time
Deadline time.Time
}
// EncodeWorkerInfo marshals the given WorkerInfo and returns the encoded bytes.
func EncodeWorkerInfo(info *WorkerInfo) ([]byte, error) {
if info == nil {
return nil, fmt.Errorf("cannot encode nil worker info")
}
startTime := timestamppb.New(info.Started)
deadline := timestamppb.New(info.Deadline)
return proto.Marshal(&pb.WorkerInfo{
Host: info.Host,
Pid: int32(info.PID),
ServerId: info.ServerID,
TaskId: info.ID,
TaskType: info.Type,
TaskPayload: info.Payload,
Queue: info.Queue,
StartTime: startTime,
Deadline: deadline,
})
}
// DecodeWorkerInfo decodes the given bytes into WorkerInfo.
func DecodeWorkerInfo(b []byte) (*WorkerInfo, error) {
var pbmsg pb.WorkerInfo
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
startTime := pbmsg.GetStartTime()
deadline := pbmsg.GetDeadline()
return &WorkerInfo{
Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()),
ServerID: pbmsg.GetServerId(),
ID: pbmsg.GetTaskId(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Queue: pbmsg.GetQueue(),
Started: startTime.AsTime(),
Deadline: deadline.AsTime(),
}, nil
}
// SchedulerEntry holds information about a periodic task registered with a scheduler.
type SchedulerEntry struct {
// Identifier of this entry.
ID string
// Spec describes the schedule of this entry.
Spec string
// Type is the task type of the periodic task.
Type string
// Payload is the payload of the periodic task.
Payload []byte
// Opts holds the options for the periodic task.
Opts []string
// Next shows the next time the task will be enqueued.
Next time.Time
// Prev shows the last time the task was enqueued.
// Zero time if task was never enqueued.
Prev time.Time
}
// EncodeSchedulerEntry marshals the given entry and returns the encoded bytes.
func EncodeSchedulerEntry(entry *SchedulerEntry) ([]byte, error) {
if entry == nil {
return nil, fmt.Errorf("cannot encode nil scheduler entry")
}
next := timestamppb.New(entry.Next)
prev := timestamppb.New(entry.Prev)
return proto.Marshal(&pb.SchedulerEntry{
Id: entry.ID,
Spec: entry.Spec,
TaskType: entry.Type,
TaskPayload: entry.Payload,
EnqueueOptions: entry.Opts,
NextEnqueueTime: next,
PrevEnqueueTime: prev,
})
}
// DecodeSchedulerEntry unmarshals the given bytes and returns a decoded SchedulerEntry.
func DecodeSchedulerEntry(b []byte) (*SchedulerEntry, error) {
var pbmsg pb.SchedulerEntry
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
next := pbmsg.GetNextEnqueueTime()
prev := pbmsg.GetPrevEnqueueTime()
return &SchedulerEntry{
ID: pbmsg.GetId(),
Spec: pbmsg.GetSpec(),
Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(),
Opts: pbmsg.GetEnqueueOptions(),
Next: next.AsTime(),
Prev: prev.AsTime(),
}, nil
}
// SchedulerEnqueueEvent holds information about an enqueue event by a scheduler.
type SchedulerEnqueueEvent struct {
// ID of the task that was enqueued.
TaskID string
// Time the task was enqueued.
EnqueuedAt time.Time
}
// EncodeSchedulerEnqueueEvent marshals the given event
// and returns the encoded bytes.
func EncodeSchedulerEnqueueEvent(event *SchedulerEnqueueEvent) ([]byte, error) {
if event == nil {
return nil, fmt.Errorf("cannot encode nil enqueue event")
}
enqueuedAt := timestamppb.New(event.EnqueuedAt)
return proto.Marshal(&pb.SchedulerEnqueueEvent{
TaskId: event.TaskID,
EnqueueTime: enqueuedAt,
})
}
// DecodeSchedulerEnqueueEvent unmarshals the given bytes
// and returns a decoded SchedulerEnqueueEvent.
func DecodeSchedulerEnqueueEvent(b []byte) (*SchedulerEnqueueEvent, error) {
var pbmsg pb.SchedulerEnqueueEvent
if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err
}
enqueuedAt := pbmsg.GetEnqueueTime()
return &SchedulerEnqueueEvent{
TaskID: pbmsg.GetTaskId(),
EnqueuedAt: enqueuedAt.AsTime(),
}, nil
}
// Cancelations is a collection that holds cancel functions for all active tasks.
//
// Cancelations are safe for concurrent use by multipel goroutines.
// Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct {
mu sync.Mutex
cancelFuncs map[string]context.CancelFunc
@@ -247,34 +612,114 @@ func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
return fn, ok
}
// GetAll returns all cancel funcs.
func (c *Cancelations) GetAll() []context.CancelFunc {
c.mu.Lock()
defer c.mu.Unlock()
var res []context.CancelFunc
for _, fn := range c.cancelFuncs {
res = append(res, fn)
}
return res
}
// Lease is a time-bounded lease for a worker to process a task.
// It provides a communication channel between the lessor and the lessee about lease expiration.
type Lease struct {
once sync.Once
ch chan struct{}
Clock timeutil.Clock
mu sync.Mutex
expireAt time.Time // guarded by mu
}
func NewLease(expirationTime time.Time) *Lease {
return &Lease{
ch: make(chan struct{}),
expireAt: expirationTime,
Clock: timeutil.NewRealClock(),
}
}
// Reset changes the lease to expire at the given time.
// It returns true if the lease is still valid and the reset was successful, and false if the lease has already expired.
func (l *Lease) Reset(expirationTime time.Time) bool {
if !l.IsValid() {
return false
}
l.mu.Lock()
defer l.mu.Unlock()
l.expireAt = expirationTime
return true
}
// NotifyExpiration sends a notification to the lessee about the expired lease.
// It returns true if the notification was sent, and false if the lease is still valid and no notification was sent.
func (l *Lease) NotifyExpiration() bool {
if l.IsValid() {
return false
}
l.once.Do(l.closeCh)
return true
}
func (l *Lease) closeCh() {
close(l.ch)
}
// Done returns a channel that the lessee can receive from to be notified when the lessor marks the lease as expired.
func (l *Lease) Done() <-chan struct{} {
return l.ch
}
// Deadline returns the expiration time of the lease.
func (l *Lease) Deadline() time.Time {
l.mu.Lock()
defer l.mu.Unlock()
return l.expireAt
}
// IsValid returns true if the lease's expiration time is in the future or equal to the current time,
// and false otherwise.
func (l *Lease) IsValid() bool {
now := l.Clock.Now()
l.mu.Lock()
defer l.mu.Unlock()
return l.expireAt.After(now) || l.expireAt.Equal(now)
}
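// A sketch of the intended lessor/lessee handshake (illustrative, not part of
// the diff): the worker watches Done() while processing, and the heartbeat
// extends the lease with Reset, calling NotifyExpiration once it lapses.
func exampleLeaseUsage() {
    l := NewLease(time.Now().Add(30 * time.Second))

    // Lessee side: abort processing once the lease is gone.
    go func() {
        <-l.Done()
        // stop work: the lease has expired
    }()

    // Lessor side, one heartbeat tick (simplified):
    if !l.Reset(time.Now().Add(30 * time.Second)) {
        l.NotifyExpiration() // lease already lapsed; closes Done() exactly once
    }
}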
// Broker is a message broker that supports operations to manage task queues.
//
// See rdb.RDB as a reference implementation.
type Broker interface {
Enqueue(msg *TaskMessage) error
EnqueueUnique(msg *TaskMessage, ttl time.Duration) error
Dequeue(qnames ...string) (*TaskMessage, error)
Done(msg *TaskMessage) error
Requeue(msg *TaskMessage) error
Schedule(msg *TaskMessage, processAt time.Time) error
ScheduleUnique(msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(msg *TaskMessage, processAt time.Time, errMsg string) error
Kill(msg *TaskMessage, errMsg string) error
CheckAndEnqueue() error
Ping() error
Close() error
Enqueue(ctx context.Context, msg *TaskMessage) error
EnqueueUnique(ctx context.Context, msg *TaskMessage, ttl time.Duration) error
Dequeue(qnames ...string) (*TaskMessage, time.Time, error)
Done(ctx context.Context, msg *TaskMessage) error
MarkAsComplete(ctx context.Context, msg *TaskMessage) error
Requeue(ctx context.Context, msg *TaskMessage) error
Schedule(ctx context.Context, msg *TaskMessage, processAt time.Time) error
ScheduleUnique(ctx context.Context, msg *TaskMessage, processAt time.Time, ttl time.Duration) error
Retry(ctx context.Context, msg *TaskMessage, processAt time.Time, errMsg string, isFailure bool) error
Archive(ctx context.Context, msg *TaskMessage, errMsg string) error
ForwardIfReady(qnames ...string) error
// Group aggregation related methods
AddToGroup(ctx context.Context, msg *TaskMessage, gname string) error
AddToGroupUnique(ctx context.Context, msg *TaskMessage, groupKey string, ttl time.Duration) error
ListGroups(qname string) ([]string, error)
AggregationCheck(qname, gname string, t time.Time, gracePeriod, maxDelay time.Duration, maxSize int) (aggregationSetID string, err error)
ReadAggregationSet(qname, gname, aggregationSetID string) ([]*TaskMessage, time.Time, error)
DeleteAggregationSet(ctx context.Context, qname, gname, aggregationSetID string) error
ReclaimStaleAggregationSets(qname string) error
// Task retention related method
DeleteExpiredCompletedTasks(qname string, batchSize int) error
// Lease related methods
ListLeaseExpired(cutoff time.Time, qnames ...string) ([]*TaskMessage, error)
ExtendLease(qname string, ids ...string) (time.Time, error)
// State snapshot related methods
WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
ClearServerState(host string, pid int, serverID string) error
// Cancelation related methods
CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
PublishCancelation(id string) error
Close() error
WriteResult(qname, id string, data []byte) (n int, err error)
}
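// A hedged sketch of how a consumer might drive this interface; the real
// processor lives elsewhere in the package, and the function name and the
// flat one-minute retry delay here are invented for illustration.
func processOne(ctx context.Context, b Broker, handle func(*TaskMessage) error, qnames ...string) error {
    msg, leaseExpirationTime, err := b.Dequeue(qnames...)
    if err != nil {
        return err // e.g. errors.ErrNoProcessableTask when all queues are empty
    }
    _ = leaseExpirationTime // a real processor tracks this and calls ExtendLease periodically

    if err := handle(msg); err != nil {
        // isFailure=true counts this attempt against the task's retry budget.
        return b.Retry(ctx, msg, time.Now().Add(time.Minute), err.Error(), true)
    }
    return b.Done(ctx, msg)
}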


@@ -6,61 +6,240 @@ package base
import (
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/rs/xid"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/timeutil"
)
func TestTaskKey(t *testing.T) {
id := uuid.NewString()
tests := []struct {
qname string
id string
want string
}{
{"default", id, fmt.Sprintf("asynq:{default}:t:%s", id)},
}
for _, tc := range tests {
got := TaskKey(tc.qname, tc.id)
if got != tc.want {
t.Errorf("TaskKey(%q, %s) = %q, want %q", tc.qname, tc.id, got, tc.want)
}
}
}
func TestQueueKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"custom", "asynq:queues:custom"},
{"default", "asynq:{default}:pending"},
{"custom", "asynq:{custom}:pending"},
}
for _, tc := range tests {
got := QueueKey(tc.qname)
got := PendingKey(tc.qname)
if got != tc.want {
t.Errorf("QueueKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) {
func TestActiveKey(t *testing.T) {
tests := []struct {
input time.Time
qname string
want string
}{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:processed:2019-11-14"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:processed:2020-12-01"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:processed:2020-01-06"},
{"default", "asynq:{default}:active"},
{"custom", "asynq:{custom}:active"},
}
for _, tc := range tests {
got := ProcessedKey(tc.input)
got := ActiveKey(tc.qname)
if got != tc.want {
t.Errorf("ActiveKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestLeaseKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:lease"},
{"custom", "asynq:{custom}:lease"},
}
for _, tc := range tests {
got := LeaseKey(tc.qname)
if got != tc.want {
t.Errorf("LeaseKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestScheduledKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:scheduled"},
{"custom", "asynq:{custom}:scheduled"},
}
for _, tc := range tests {
got := ScheduledKey(tc.qname)
if got != tc.want {
t.Errorf("ScheduledKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestRetryKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:retry"},
{"custom", "asynq:{custom}:retry"},
}
for _, tc := range tests {
got := RetryKey(tc.qname)
if got != tc.want {
t.Errorf("RetryKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestArchivedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:archived"},
{"custom", "asynq:{custom}:archived"},
}
for _, tc := range tests {
got := ArchivedKey(tc.qname)
if got != tc.want {
t.Errorf("ArchivedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestCompletedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:completed"},
{"custom", "asynq:{custom}:completed"},
}
for _, tc := range tests {
got := CompletedKey(tc.qname)
if got != tc.want {
t.Errorf("CompletedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestPausedKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:paused"},
{"custom", "asynq:{custom}:paused"},
}
for _, tc := range tests {
got := PausedKey(tc.qname)
if got != tc.want {
t.Errorf("PausedKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedTotalKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:processed"},
{"custom", "asynq:{custom}:processed"},
}
for _, tc := range tests {
got := ProcessedTotalKey(tc.qname)
if got != tc.want {
t.Errorf("ProcessedTotalKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestFailedTotalKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:failed"},
{"custom", "asynq:{custom}:failed"},
}
for _, tc := range tests {
got := FailedTotalKey(tc.qname)
if got != tc.want {
t.Errorf("FailedTotalKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) {
tests := []struct {
qname string
input time.Time
want string
}{
{"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:processed:2019-11-14"},
{"critical", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{critical}:processed:2020-12-01"},
{"default", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{default}:processed:2020-01-06"},
}
for _, tc := range tests {
got := ProcessedKey(tc.qname, tc.input)
if got != tc.want {
t.Errorf("ProcessedKey(%v) = %q, want %q", tc.input, got, tc.want)
}
}
}
func TestFailureKey(t *testing.T) {
func TestFailedKey(t *testing.T) {
tests := []struct {
qname string
input time.Time
want string
}{
{time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:failure:2019-11-14"},
{time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:failure:2020-12-01"},
{time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:failure:2020-01-06"},
{"default", time.Date(2019, 11, 14, 10, 30, 1, 1, time.UTC), "asynq:{default}:failed:2019-11-14"},
{"custom", time.Date(2020, 12, 1, 1, 0, 1, 1, time.UTC), "asynq:{custom}:failed:2020-12-01"},
{"low", time.Date(2020, 1, 6, 15, 02, 1, 1, time.UTC), "asynq:{low}:failed:2020-01-06"},
}
for _, tc := range tests {
got := FailureKey(tc.input)
got := FailedKey(tc.qname, tc.input)
if got != tc.want {
t.Errorf("FailureKey(%v) = %q, want %q", tc.input, got, tc.want)
}
@@ -74,8 +253,8 @@ func TestServerInfoKey(t *testing.T) {
sid string
want string
}{
{"localhost", 9876, "server123", "asynq:servers:localhost:9876:server123"},
{"127.0.0.1", 1234, "server987", "asynq:servers:127.0.0.1:1234:server987"},
{"localhost", 9876, "server123", "asynq:servers:{localhost:9876:server123}"},
{"127.0.0.1", 1234, "server987", "asynq:servers:{127.0.0.1:1234:server987}"},
}
for _, tc := range tests {
@@ -94,8 +273,8 @@ func TestWorkersKey(t *testing.T) {
sid string
want string
}{
{"localhost", 9876, "server1", "asynq:workers:localhost:9876:server1"},
{"127.0.0.1", 1234, "server2", "asynq:workers:127.0.0.1:1234:server2"},
{"localhost", 9876, "server1", "asynq:workers:{localhost:9876:server1}"},
{"127.0.0.1", 1234, "server2", "asynq:workers:{127.0.0.1:1234:server2}"},
}
for _, tc := range tests {
@@ -107,30 +286,246 @@ }
}
}
func TestSchedulerEntriesKey(t *testing.T) {
tests := []struct {
schedulerID string
want string
}{
{"localhost:9876:scheduler123", "asynq:schedulers:{localhost:9876:scheduler123}"},
{"127.0.0.1:1234:scheduler987", "asynq:schedulers:{127.0.0.1:1234:scheduler987}"},
}
for _, tc := range tests {
got := SchedulerEntriesKey(tc.schedulerID)
if got != tc.want {
t.Errorf("SchedulerEntriesKey(%q) = %q, want %q", tc.schedulerID, got, tc.want)
}
}
}
func TestSchedulerHistoryKey(t *testing.T) {
tests := []struct {
entryID string
want string
}{
{"entry876", "asynq:scheduler_history:entry876"},
{"entry345", "asynq:scheduler_history:entry345"},
}
for _, tc := range tests {
got := SchedulerHistoryKey(tc.entryID)
if got != tc.want {
t.Errorf("SchedulerHistoryKey(%q) = %q, want %q",
tc.entryID, got, tc.want)
}
}
}
func toBytes(m map[string]interface{}) []byte {
b, err := json.Marshal(m)
if err != nil {
panic(err)
}
return b
}
func TestUniqueKey(t *testing.T) {
payload1 := toBytes(map[string]interface{}{"a": 123, "b": "hello", "c": true})
payload2 := toBytes(map[string]interface{}{"b": "hello", "c": true, "a": 123})
payload3 := toBytes(map[string]interface{}{
"address": map[string]string{"line": "123 Main St", "city": "Boston", "state": "MA"},
"names": []string{"bob", "mike", "rob"}})
payload4 := toBytes(map[string]interface{}{
"time": time.Date(2020, time.July, 28, 0, 0, 0, 0, time.UTC),
"duration": time.Hour})
checksum := func(data []byte) string {
sum := md5.Sum(data)
return hex.EncodeToString(sum[:])
}
tests := []struct {
desc string
qname string
tasktype string
payload []byte
want string
}{
{
"with primitive types",
"default",
"email:send",
payload1,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload1)),
},
{
"with unsorted keys",
"default",
"email:send",
payload2,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload2)),
},
{
"with composite types",
"default",
"email:send",
payload3,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload3)),
},
{
"with complex types",
"default",
"email:send",
payload4,
fmt.Sprintf("asynq:{default}:unique:email:send:%s", checksum(payload4)),
},
{
"with nil payload",
"default",
"reindex",
nil,
"asynq:{default}:unique:reindex:",
},
}
for _, tc := range tests {
got := UniqueKey(tc.qname, tc.tasktype, tc.payload)
if got != tc.want {
t.Errorf("%s: UniqueKey(%q, %q, %v) = %q, want %q", tc.desc, tc.qname, tc.tasktype, tc.payload, got, tc.want)
}
}
}
func TestGroupKey(t *testing.T) {
tests := []struct {
qname string
gkey string
want string
}{
{
qname: "default",
gkey: "mygroup",
want: "asynq:{default}:g:mygroup",
},
{
qname: "custom",
gkey: "foo",
want: "asynq:{custom}:g:foo",
},
}
for _, tc := range tests {
got := GroupKey(tc.qname, tc.gkey)
if got != tc.want {
t.Errorf("GroupKey(%q, %q) = %q, want %q", tc.qname, tc.gkey, got, tc.want)
}
}
}
func TestAggregationSetKey(t *testing.T) {
tests := []struct {
qname string
gname string
setID string
want string
}{
{
qname: "default",
gname: "mygroup",
setID: "12345",
want: "asynq:{default}:g:mygroup:12345",
},
{
qname: "custom",
gname: "foo",
setID: "98765",
want: "asynq:{custom}:g:foo:98765",
},
}
for _, tc := range tests {
got := AggregationSetKey(tc.qname, tc.gname, tc.setID)
if got != tc.want {
t.Errorf("AggregationSetKey(%q, %q, %q) = %q, want %q", tc.qname, tc.gname, tc.setID, got, tc.want)
}
}
}
func TestAllGroups(t *testing.T) {
tests := []struct {
qname string
want string
}{
{
qname: "default",
want: "asynq:{default}:groups",
},
{
qname: "custom",
want: "asynq:{custom}:groups",
},
}
for _, tc := range tests {
got := AllGroups(tc.qname)
if got != tc.want {
t.Errorf("AllGroups(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestAllAggregationSets(t *testing.T) {
tests := []struct {
qname string
want string
}{
{
qname: "default",
want: "asynq:{default}:aggregation_sets",
},
{
qname: "custom",
want: "asynq:{custom}:aggregation_sets",
},
}
for _, tc := range tests {
got := AllAggregationSets(tc.qname)
if got != tc.want {
t.Errorf("AllAggregationSets(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestMessageEncoding(t *testing.T) {
id := xid.New()
id := uuid.NewString()
tests := []struct {
in *TaskMessage
out *TaskMessage
}{
{
in: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": 1, "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: "0",
Type: "task1",
Payload: toBytes(map[string]interface{}{"a": 1, "b": "hello!", "c": true}),
ID: id,
Queue: "default",
GroupKey: "mygroup",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
Retention: 3600,
},
out: &TaskMessage{
Type: "task1",
Payload: map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true},
ID: id,
Queue: "default",
Retry: 10,
Retried: 0,
Timeout: "0",
Type: "task1",
Payload: toBytes(map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true}),
ID: id,
Queue: "default",
GroupKey: "mygroup",
Retry: 10,
Retried: 0,
Timeout: 1800,
Deadline: 1692311100,
Retention: 3600,
},
},
}
@@ -153,28 +548,143 @@ func TestMessageEncoding(t *testing.T) {
}
}
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
status := NewServerStatus(StatusIdle)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
status.Get()
status.String()
}()
wg.Add(1)
go func() {
defer wg.Done()
status.Set(StatusStopped)
status.String()
}()
wg.Wait()
}
func TestServerInfoEncoding(t *testing.T) {
tests := []struct {
info ServerInfo
}{
{
info: ServerInfo{
Host: "127.0.0.1",
PID: 9876,
ServerID: "abc123",
Concurrency: 10,
Queues: map[string]int{"default": 1, "critical": 2},
StrictPriority: false,
Status: "active",
Started: time.Now().Add(-3 * time.Hour),
ActiveWorkerCount: 8,
},
},
}
for _, tc := range tests {
encoded, err := EncodeServerInfo(&tc.info)
if err != nil {
t.Errorf("EncodeServerInfo(info) returned error: %v", err)
continue
}
decoded, err := DecodeServerInfo(encoded)
if err != nil {
t.Errorf("DecodeServerInfo(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
t.Errorf("Decoded ServerInfo == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.info, diff)
}
}
}
func TestWorkerInfoEncoding(t *testing.T) {
tests := []struct {
info WorkerInfo
}{
{
info: WorkerInfo{
Host: "127.0.0.1",
PID: 9876,
ServerID: "abc123",
ID: uuid.NewString(),
Type: "taskA",
Payload: toBytes(map[string]interface{}{"foo": "bar"}),
Queue: "default",
Started: time.Now().Add(-3 * time.Hour),
Deadline: time.Now().Add(30 * time.Second),
},
},
}
for _, tc := range tests {
encoded, err := EncodeWorkerInfo(&tc.info)
if err != nil {
t.Errorf("EncodeWorkerInfo(info) returned error: %v", err)
continue
}
decoded, err := DecodeWorkerInfo(encoded)
if err != nil {
t.Errorf("DecodeWorkerInfo(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.info, decoded); diff != "" {
t.Errorf("Decoded WorkerInfo == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.info, diff)
}
}
}
func TestSchedulerEntryEncoding(t *testing.T) {
tests := []struct {
entry SchedulerEntry
}{
{
entry: SchedulerEntry{
ID: uuid.NewString(),
Spec: "* * * * *",
Type: "task_A",
Payload: toBytes(map[string]interface{}{"foo": "bar"}),
Opts: []string{"Queue('email')"},
Next: time.Now().Add(30 * time.Second).UTC(),
Prev: time.Now().Add(-2 * time.Minute).UTC(),
},
},
}
for _, tc := range tests {
encoded, err := EncodeSchedulerEntry(&tc.entry)
if err != nil {
t.Errorf("EncodeSchedulerEntry(entry) returned error: %v", err)
continue
}
decoded, err := DecodeSchedulerEntry(encoded)
if err != nil {
t.Errorf("DecodeSchedulerEntry(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.entry, decoded); diff != "" {
t.Errorf("Decoded SchedulerEntry == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.entry, diff)
}
}
}
func TestSchedulerEnqueueEventEncoding(t *testing.T) {
tests := []struct {
event SchedulerEnqueueEvent
}{
{
event: SchedulerEnqueueEvent{
TaskID: uuid.NewString(),
EnqueuedAt: time.Now().Add(-30 * time.Second).UTC(),
},
},
}
for _, tc := range tests {
encoded, err := EncodeSchedulerEnqueueEvent(&tc.event)
if err != nil {
t.Errorf("EncodeSchedulerEnqueueEvent(event) returned error: %v", err)
continue
}
decoded, err := DecodeSchedulerEnqueueEvent(encoded)
if err != nil {
t.Errorf("DecodeSchedulerEnqueueEvent(encoded) returned error: %v", err)
continue
}
if diff := cmp.Diff(&tc.event, decoded); diff != "" {
t.Errorf("Decoded SchedulerEnqueueEvent == %+v, want %+v;(-want,+got)\n%s",
decoded, tc.event, diff)
}
}
}
// Test for cancelations being accessed by multiple goroutines.
@@ -220,9 +730,76 @@ func TestCancelationsConcurrentAccess(t *testing.T) {
if ok {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
}
}
funcs := c.GetAll()
if len(funcs) != 2 {
t.Errorf("(*Cancelations).GetAll() returns %d functions, want 2", len(funcs))
}
}
func TestLeaseReset(t *testing.T) {
now := time.Now()
clock := timeutil.NewSimulatedClock(now)
l := NewLease(now.Add(30 * time.Second))
l.Clock = clock
// Check initial state
if !l.IsValid() {
t.Errorf("lease should be valid when expiration is set to a future time")
}
if want := now.Add(30 * time.Second); l.Deadline() != want {
t.Errorf("Lease.Deadline() = %v, want %v", l.Deadline(), want)
}
// Test Reset
if !l.Reset(now.Add(45 * time.Second)) {
t.Fatalf("Lease.Reset returned false when extending")
}
if want := now.Add(45 * time.Second); l.Deadline() != want {
t.Errorf("After Reset: Lease.Deadline() = %v, want %v", l.Deadline(), want)
}
clock.AdvanceTime(1 * time.Minute) // simulate lease expiration
if l.IsValid() {
t.Errorf("lease should be invalid after expiration")
}
// Reset should return false if lease is expired.
if l.Reset(time.Now().Add(20 * time.Second)) {
t.Errorf("Lease.Reset should return false after expiration")
}
}
func TestLeaseNotifyExpiration(t *testing.T) {
now := time.Now()
clock := timeutil.NewSimulatedClock(now)
l := NewLease(now.Add(30 * time.Second))
l.Clock = clock
select {
case <-l.Done():
t.Fatalf("Lease.Done() did not block")
default:
}
if l.NotifyExpiration() {
t.Fatalf("Lease.NotifyExpiration() should return false when lease is still valid")
}
clock.AdvanceTime(1 * time.Minute) // simulate lease expiration
if l.IsValid() {
t.Errorf("Lease should be invalid after expiration")
}
if !l.NotifyExpiration() {
t.Errorf("Lease.NotifyExpiration() return return true after expiration")
}
if !l.NotifyExpiration() {
t.Errorf("It should be leagal to call Lease.NotifyExpiration multiple times")
}
select {
case <-l.Done():
// expected
default:
t.Errorf("Lease.Done() blocked after call to Lease.NotifyExpiration()")
}
}


@@ -0,0 +1,87 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package context
import (
"context"
"time"
"github.com/hibiken/asynq/internal/base"
)
// A taskMetadata holds task scoped data to put in context.
type taskMetadata struct {
id string
maxRetry int
retryCount int
qname string
}
// ctxKey type is unexported to prevent collisions with context keys defined in
// other packages.
type ctxKey int
// metadataCtxKey is the context key for the task metadata.
// Its value of zero is arbitrary.
const metadataCtxKey ctxKey = 0
// New returns a context and cancel function for a given task message.
func New(base context.Context, msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
metadata := taskMetadata{
id: msg.ID,
maxRetry: msg.Retry,
retryCount: msg.Retried,
qname: msg.Queue,
}
ctx := context.WithValue(base, metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline)
}
// GetTaskID extracts a task ID from a context, if any.
//
// ID of a task is guaranteed to be unique.
// ID of a task doesn't change if the task is being retried.
func GetTaskID(ctx context.Context) (id string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.id, true
}
// GetRetryCount extracts retry count from a context, if any.
//
// Return value n indicates the number of times associated task has been
// retried so far.
func GetRetryCount(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.retryCount, true
}
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return 0, false
}
return metadata.maxRetry, true
}
// GetQueueName extracts queue name from a context, if any.
//
// Return value qname indicates which queue the task was pulled from.
func GetQueueName(ctx context.Context) (qname string, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)
if !ok {
return "", false
}
return metadata.qname, true
}
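// Downstream, a handler can read that metadata back out of its context; a
// sketch (illustrative only; the log import, function name, and threshold are
// invented and not part of this file):
func logRetryBudget(ctx context.Context) {
    id, _ := GetTaskID(ctx)
    retried, _ := GetRetryCount(ctx)
    maxRetry, _ := GetMaxRetry(ctx)
    qname, _ := GetQueueName(ctx)
    if retried >= maxRetry-1 {
        log.Printf("task %s on queue %q is on its final retry", id, qname)
    }
}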


@@ -0,0 +1,207 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package context
import (
"context"
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func TestCreateContextWithFutureDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.NewString(),
Payload: nil,
}
ctx, cancel := New(context.Background(), msg, tc.deadline)
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
}
}
func TestCreateContextWithBaseContext(t *testing.T) {
type ctxKey string
type ctxValue string
var key ctxKey = "key"
var value ctxValue = "value"
tests := []struct {
baseCtx context.Context
validate func(ctx context.Context, t *testing.T) error
}{
{
baseCtx: context.WithValue(context.Background(), key, value),
validate: func(ctx context.Context, t *testing.T) error {
got, ok := ctx.Value(key).(ctxValue)
if !ok {
return fmt.Errorf("ctx.Value().(ctxValue) returned false, expected to be true")
}
if want := value; got != want {
return fmt.Errorf("ctx.Value().(ctxValue) returned unknown value (%v), expected to be %s", got, value)
}
return nil
},
},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.NewString(),
Payload: nil,
}
ctx, cancel := New(tc.baseCtx, msg, time.Now().Add(30*time.Minute))
defer cancel()
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
if err := tc.validate(ctx, t); err != nil {
t.Errorf("%v", err)
}
}
}
func TestCreateContextWithPastDeadline(t *testing.T) {
tests := []struct {
deadline time.Time
}{
{time.Now().Add(-2 * time.Hour)},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.NewString(),
Payload: nil,
}
ctx, cancel := New(context.Background(), msg, tc.deadline)
defer cancel()
select {
case <-ctx.Done():
default:
t.Errorf("ctx.Done() blocked, want it to be non-blocking")
}
got, ok := ctx.Deadline()
if !ok {
t.Errorf("ctx.Deadline() returned false, want deadline to be set")
}
if !cmp.Equal(tc.deadline, got) {
t.Errorf("ctx.Deadline() returned %v, want %v", got, tc.deadline)
}
}
}
func TestGetTaskMetadataFromContext(t *testing.T) {
tests := []struct {
desc string
msg *base.TaskMessage
}{
{"with zero retried message", &base.TaskMessage{Type: "something", ID: uuid.NewString(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "default"}},
{"with non-zero retried message", &base.TaskMessage{Type: "something", ID: uuid.NewString(), Retry: 10, Retried: 5, Timeout: 1800, Queue: "default"}},
{"with custom queue name", &base.TaskMessage{Type: "something", ID: uuid.NewString(), Retry: 25, Retried: 0, Timeout: 1800, Queue: "custom"}},
}
for _, tc := range tests {
ctx, cancel := New(context.Background(), tc.msg, time.Now().Add(30*time.Minute))
defer cancel()
id, ok := GetTaskID(ctx)
if !ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == false", tc.desc)
}
if ok && id != tc.msg.ID {
t.Errorf("%s: GetTaskID(ctx) returned id == %q, want %q", tc.desc, id, tc.msg.ID)
}
retried, ok := GetRetryCount(ctx)
if !ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == false", tc.desc)
}
if ok && retried != tc.msg.Retried {
t.Errorf("%s: GetRetryCount(ctx) returned n == %d want %d", tc.desc, retried, tc.msg.Retried)
}
maxRetry, ok := GetMaxRetry(ctx)
if !ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == false", tc.desc)
}
if ok && maxRetry != tc.msg.Retry {
t.Errorf("%s: GetMaxRetry(ctx) returned n == %d want %d", tc.desc, maxRetry, tc.msg.Retry)
}
qname, ok := GetQueueName(ctx)
if !ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == false", tc.desc)
}
if ok && qname != tc.msg.Queue {
t.Errorf("%s: GetQueueName(ctx) returned qname == %q, want %q", tc.desc, qname, tc.msg.Queue)
}
}
}
func TestGetTaskMetadataFromContextError(t *testing.T) {
tests := []struct {
desc string
ctx context.Context
}{
{"with background context", context.Background()},
}
for _, tc := range tests {
if _, ok := GetTaskID(tc.ctx); ok {
t.Errorf("%s: GetTaskID(ctx) returned ok == true", tc.desc)
}
if _, ok := GetRetryCount(tc.ctx); ok {
t.Errorf("%s: GetRetryCount(ctx) returned ok == true", tc.desc)
}
if _, ok := GetMaxRetry(tc.ctx); ok {
t.Errorf("%s: GetMaxRetry(ctx) returned ok == true", tc.desc)
}
if _, ok := GetQueueName(tc.ctx); ok {
t.Errorf("%s: GetQueueName(ctx) returned ok == true", tc.desc)
}
}
}

internal/errors/errors.go

@@ -0,0 +1,303 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package errors defines the error type and functions used by
// asynq and its internal packages.
package errors
// Note: This package is inspired by a blog post about error handling in project Upspin
// https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html.
import (
"errors"
"fmt"
"log"
"runtime"
"strings"
)
// Error is the type that implements the error interface.
// It contains a number of fields, each of different type.
// An Error value may leave some values unset.
type Error struct {
Code Code
Op Op
Err error
}
func (e *Error) DebugString() string {
var b strings.Builder
if e.Op != "" {
b.WriteString(string(e.Op))
}
if e.Code != Unspecified {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Code.String())
}
if e.Err != nil {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Err.Error())
}
return b.String()
}
func (e *Error) Error() string {
var b strings.Builder
if e.Code != Unspecified {
b.WriteString(e.Code.String())
}
if e.Err != nil {
if b.Len() > 0 {
b.WriteString(": ")
}
b.WriteString(e.Err.Error())
}
return b.String()
}
func (e *Error) Unwrap() error {
return e.Err
}
// Code defines the canonical error code.
type Code uint8
// List of canonical error codes.
const (
Unspecified Code = iota
NotFound
FailedPrecondition
Internal
AlreadyExists
Unknown
// Note: If you add a new value here, make sure to update String method.
)
func (c Code) String() string {
switch c {
case Unspecified:
return "ERROR_CODE_UNSPECIFIED"
case NotFound:
return "NOT_FOUND"
case FailedPrecondition:
return "FAILED_PRECONDITION"
case Internal:
return "INTERNAL_ERROR"
case AlreadyExists:
return "ALREADY_EXISTS"
case Unknown:
return "UNKNOWN"
}
panic(fmt.Sprintf("unknown error code %d", c))
}
// Op describes an operation, usually as the package and method,
// such as "rdb.Enqueue".
type Op string
// E builds an error value from its arguments.
// There must be at least one argument or E panics.
// The type of each argument determines its meaning.
// If more than one argument of a given type is presented,
// only the last one is recorded.
//
// The types are:
// errors.Op
// The operation being performed, usually the method
// being invoked (Get, Put, etc.).
// errors.Code
// The canonical error code, such as NOT_FOUND.
// string
// Treated as an error message and assigned to the
// Err field after a call to errors.New.
// error
// The underlying error that triggered this one.
//
// If the error is printed, only those items that have been
// set to non-zero values will appear in the result.
func E(args ...interface{}) error {
if len(args) == 0 {
panic("call to errors.E with no arguments")
}
e := &Error{}
for _, arg := range args {
switch arg := arg.(type) {
case Op:
e.Op = arg
case Code:
e.Code = arg
case error:
e.Err = arg
case string:
e.Err = errors.New(arg)
default:
_, file, line, _ := runtime.Caller(1)
log.Printf("errors.E: bad call from %s:%d: %v", file, line, args)
return fmt.Errorf("unknown type %T, value %v in error call", arg, arg)
}
}
return e
}
// CanonicalCode returns the canonical code of the given error if one is present.
// Otherwise it returns Unspecified.
func CanonicalCode(err error) Code {
if err == nil {
return Unspecified
}
e, ok := err.(*Error)
if !ok {
return Unspecified
}
if e.Code == Unspecified {
return CanonicalCode(e.Err)
}
return e.Code
}
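// Putting E, the typed errors defined just below, and the inspection helpers
// together; a usage sketch from a caller's point of view (values invented):
func exampleErrorInspection() {
    err := E(Op("rdb.ArchiveTask"), NotFound,
        &TaskNotFoundError{Queue: "default", ID: "abc123"})

    if IsTaskNotFound(err) {
        // surface a "task not found" response to the caller
    }
    if CanonicalCode(err) == NotFound {
        // or branch on the canonical code without caring about the concrete type
    }
}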
/******************************************
Domain Specific Error Types & Values
*******************************************/
var (
// ErrNoProcessableTask indicates that there are no tasks ready to be processed.
ErrNoProcessableTask = errors.New("no tasks are ready for processing")
// ErrDuplicateTask indicates that another task with the same unique key holds the uniqueness lock.
ErrDuplicateTask = errors.New("task already exists")
// ErrTaskIdConflict indicates that another task with the same task ID already exists.
ErrTaskIdConflict = errors.New("task id conflicts with another task")
)
// TaskNotFoundError indicates that a task with the given ID does not exist
// in the given queue.
type TaskNotFoundError struct {
Queue string // queue name
ID string // task id
}
func (e *TaskNotFoundError) Error() string {
return fmt.Sprintf("cannot find task with id=%s in queue %q", e.ID, e.Queue)
}
// IsTaskNotFound reports whether any error in err's chain is of type TaskNotFoundError.
func IsTaskNotFound(err error) bool {
var target *TaskNotFoundError
return As(err, &target)
}
// QueueNotFoundError indicates that a queue with the given name does not exist.
type QueueNotFoundError struct {
Queue string // queue name
}
func (e *QueueNotFoundError) Error() string {
return fmt.Sprintf("queue %q does not exist", e.Queue)
}
// IsQueueNotFound reports whether any error in err's chain is of type QueueNotFoundError.
func IsQueueNotFound(err error) bool {
var target *QueueNotFoundError
return As(err, &target)
}
// QueueNotEmptyError indicates that the given queue is not empty.
type QueueNotEmptyError struct {
Queue string // queue name
}
func (e *QueueNotEmptyError) Error() string {
return fmt.Sprintf("queue %q is not empty", e.Queue)
}
// IsQueueNotEmpty reports whether any error in err's chain is of type QueueNotEmptyError.
func IsQueueNotEmpty(err error) bool {
var target *QueueNotEmptyError
return As(err, &target)
}
// TaskAlreadyArchivedError indicates that the task in question is already archived.
type TaskAlreadyArchivedError struct {
Queue string // queue name
ID string // task id
}
func (e *TaskAlreadyArchivedError) Error() string {
return fmt.Sprintf("task is already archived: id=%s, queue=%s", e.ID, e.Queue)
}
// IsTaskAlreadyArchived reports whether any error in err's chain is of type TaskAlreadyArchivedError.
func IsTaskAlreadyArchived(err error) bool {
var target *TaskAlreadyArchivedError
return As(err, &target)
}
// RedisCommandError indicates that the given redis command returned an error.
type RedisCommandError struct {
Command string // redis command (e.g. LRANGE, ZADD, etc)
Err error // underlying error
}
func (e *RedisCommandError) Error() string {
return fmt.Sprintf("redis command error: %s failed: %v", strings.ToUpper(e.Command), e.Err)
}
func (e *RedisCommandError) Unwrap() error { return e.Err }
// IsRedisCommandError reports whether any error in err's chain is of type RedisCommandError.
func IsRedisCommandError(err error) bool {
var target *RedisCommandError
return As(err, &target)
}
// PanicError defines an error raised when a panic occurs.
type PanicError struct {
ErrMsg string
}
func (e *PanicError) Error() string {
return fmt.Sprintf("panic error cause by: %s", e.ErrMsg)
}
// IsPanicError reports whether any error in err's chain is of type PanicError.
func IsPanicError(err error) bool {
var target *PanicError
return As(err, &target)
}
/*************************************************
Standard Library errors package functions
*************************************************/
// New returns an error that formats as the given text.
// Each call to New returns a distinct error value even if the text is identical.
//
// This function is the errors.New function from the standard library (https://golang.org/pkg/errors/#New).
// It is exported from this package for import convenience.
func New(text string) error { return errors.New(text) }
// Is reports whether any error in err's chain matches target.
//
// This function is the errors.Is function from the standard library (https://golang.org/pkg/errors/#Is).
// It is exported from this package for import convenience.
func Is(err, target error) bool { return errors.Is(err, target) }
// As finds the first error in err's chain that matches target, and if so, sets target to that error value and returns true.
// Otherwise, it returns false.
//
// This function is the errors.As function from the standard library (https://golang.org/pkg/errors/#As).
// It is exported from this package for import convenience.
func As(err error, target interface{}) bool { return errors.As(err, target) }
// Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error.
// Otherwise, Unwrap returns nil.
//
// This function is the errors.Unwrap function from the standard library (https://golang.org/pkg/errors/#Unwrap).
// It is exported from this package for import convenience.
func Unwrap(err error) error { return errors.Unwrap(err) }
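Taken together, the typed errors, the predicate helpers, and the re-exported standard-library functions compose as sketched below. This is a minimal illustration written as an external test file; it assumes the package's import path is github.com/hibiken/asynq/internal/errors, so (being internal) it only compiles from within the asynq module.

package errors_test

import (
	"fmt"

	"github.com/hibiken/asynq/internal/errors"
)

func Example_taskNotFound() {
	// Wrap a typed error with an operation and a canonical code.
	err := errors.E(errors.Op("rdb.ArchiveTask"), errors.NotFound,
		&errors.TaskNotFoundError{Queue: "default", ID: "123"})

	// The predicate helper walks err's chain for us.
	fmt.Println(errors.IsTaskNotFound(err))

	// errors.As extracts the typed error when field access is needed.
	var notFound *errors.TaskNotFoundError
	if errors.As(err, &notFound) {
		fmt.Println(notFound.Queue, notFound.ID)
	}
	// Output:
	// true
	// default 123
}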


@ -0,0 +1,182 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package errors
import "testing"
func TestErrorDebugString(t *testing.T) {
// DebugString should include Op since it's meant to be used by
// maintainers/contributors of the asynq package.
tests := []struct {
desc string
err error
want string
}{
{
desc: "With Op, Code, and string",
err: E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
want: "rdb.DeleteTask: NOT_FOUND: cannot find task with id=123",
},
{
desc: "With Op, Code and error",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: `rdb.DeleteTask: NOT_FOUND: cannot find task with id=123 in queue "default"`,
},
}
for _, tc := range tests {
if got := tc.err.(*Error).DebugString(); got != tc.want {
t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
}
}
}
func TestErrorString(t *testing.T) {
// String method should omit Op since op is an internal detail
// and we don't want to provide it to users of the package.
tests := []struct {
desc string
err error
want string
}{
{
desc: "With Op, Code, and string",
err: E(Op("rdb.DeleteTask"), NotFound, "cannot find task with id=123"),
want: "NOT_FOUND: cannot find task with id=123",
},
{
desc: "With Op, Code and error",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: `NOT_FOUND: cannot find task with id=123 in queue "default"`,
},
}
for _, tc := range tests {
if got := tc.err.Error(); got != tc.want {
t.Errorf("%s: got=%q, want=%q", tc.desc, got, tc.want)
}
}
}
func TestErrorIs(t *testing.T) {
var ErrCustom = New("custom sentinel error")
tests := []struct {
desc string
err error
target error
want bool
}{
{
desc: "should unwrap one level",
err: E(Op("rdb.DeleteTask"), ErrCustom),
target: ErrCustom,
want: true,
},
}
for _, tc := range tests {
if got := Is(tc.err, tc.target); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestErrorAs(t *testing.T) {
tests := []struct {
desc string
err error
target interface{}
want bool
}{
{
desc: "should unwrap one level",
err: E(Op("rdb.DeleteTask"), NotFound, &QueueNotFoundError{Queue: "email"}),
target: &QueueNotFoundError{},
want: true,
},
}
for _, tc := range tests {
if got := As(tc.err, &tc.target); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestErrorPredicates(t *testing.T) {
tests := []struct {
desc string
fn func(err error) bool
err error
want bool
}{
{
desc: "IsTaskNotFound should detect presence of TaskNotFoundError in err's chain",
fn: IsTaskNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "9876"}),
want: true,
},
{
desc: "IsTaskNotFound should detect absence of TaskNotFoundError in err's chain",
fn: IsTaskNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
want: false,
},
{
desc: "IsQueueNotFound should detect presence of QueueNotFoundError in err's chain",
fn: IsQueueNotFound,
err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
want: true,
},
{
desc: "IsPanicError should detect presence of PanicError in err's chain",
fn: IsPanicError,
err: E(Op("unknown"), Unknown, &PanicError{ErrMsg: "Something went wrong"}),
want: true,
},
}
for _, tc := range tests {
if got := tc.fn(tc.err); got != tc.want {
t.Errorf("%s: got=%t, want=%t", tc.desc, got, tc.want)
}
}
}
func TestCanonicalCode(t *testing.T) {
tests := []struct {
desc string
err error
want Code
}{
{
desc: "without nesting",
err: E(Op("rdb.DeleteTask"), NotFound, &TaskNotFoundError{Queue: "default", ID: "123"}),
want: NotFound,
},
{
desc: "with nesting",
err: E(FailedPrecondition, E(NotFound)),
want: FailedPrecondition,
},
{
desc: "returns Unspecified if err is not *Error",
err: New("some other error"),
want: Unspecified,
},
{
desc: "returns Unspecified if err is nil",
err: nil,
want: Unspecified,
},
}
for _, tc := range tests {
if got := CanonicalCode(tc.err); got != tc.want {
t.Errorf("%s: got=%s, want=%s", tc.desc, got, tc.want)
}
}
}

internal/proto/asynq.pb.go (new file, 846 lines)

@ -0,0 +1,846 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.2
// protoc v3.19.6
// source: asynq.proto
package proto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// TaskMessage is the internal representation of a task with additional
// metadata fields.
// Next ID: 15
type TaskMessage struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Type indicates the kind of the task to be performed.
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
// Payload holds data needed to process the task.
Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"`
// Unique identifier for the task.
Id string `protobuf:"bytes,3,opt,name=id,proto3" json:"id,omitempty"`
// Name of the queue to which this task belongs.
Queue string `protobuf:"bytes,4,opt,name=queue,proto3" json:"queue,omitempty"`
// Max number of retries for this task.
Retry int32 `protobuf:"varint,5,opt,name=retry,proto3" json:"retry,omitempty"`
// Number of times this task has been retried so far.
Retried int32 `protobuf:"varint,6,opt,name=retried,proto3" json:"retried,omitempty"`
// Error message from the last failure.
ErrorMsg string `protobuf:"bytes,7,opt,name=error_msg,json=errorMsg,proto3" json:"error_msg,omitempty"`
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no last failure.
LastFailedAt int64 `protobuf:"varint,11,opt,name=last_failed_at,json=lastFailedAt,proto3" json:"last_failed_at,omitempty"`
// Timeout specifies timeout in seconds.
// Use zero to indicate no timeout.
Timeout int64 `protobuf:"varint,8,opt,name=timeout,proto3" json:"timeout,omitempty"`
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no deadline.
Deadline int64 `protobuf:"varint,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
// UniqueKey holds the redis key used for uniqueness lock for this task.
// Empty string indicates that no uniqueness lock was used.
UniqueKey string `protobuf:"bytes,10,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
// GroupKey is the name of the group used for task aggregation.
// This field is optional, and an empty value means no aggregation for the task.
GroupKey string `protobuf:"bytes,14,opt,name=group_key,json=groupKey,proto3" json:"group_key,omitempty"`
// Retention period specified in seconds.
// The task will be stored in redis as a completed task until the TTL
// expires.
Retention int64 `protobuf:"varint,12,opt,name=retention,proto3" json:"retention,omitempty"`
// Time when the task completed successfully, in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// This field is populated if result_ttl > 0 upon completion.
CompletedAt int64 `protobuf:"varint,13,opt,name=completed_at,json=completedAt,proto3" json:"completed_at,omitempty"`
}
func (x *TaskMessage) Reset() {
*x = TaskMessage{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TaskMessage) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TaskMessage) ProtoMessage() {}
func (x *TaskMessage) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TaskMessage.ProtoReflect.Descriptor instead.
func (*TaskMessage) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{0}
}
func (x *TaskMessage) GetType() string {
if x != nil {
return x.Type
}
return ""
}
func (x *TaskMessage) GetPayload() []byte {
if x != nil {
return x.Payload
}
return nil
}
func (x *TaskMessage) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *TaskMessage) GetQueue() string {
if x != nil {
return x.Queue
}
return ""
}
func (x *TaskMessage) GetRetry() int32 {
if x != nil {
return x.Retry
}
return 0
}
func (x *TaskMessage) GetRetried() int32 {
if x != nil {
return x.Retried
}
return 0
}
func (x *TaskMessage) GetErrorMsg() string {
if x != nil {
return x.ErrorMsg
}
return ""
}
func (x *TaskMessage) GetLastFailedAt() int64 {
if x != nil {
return x.LastFailedAt
}
return 0
}
func (x *TaskMessage) GetTimeout() int64 {
if x != nil {
return x.Timeout
}
return 0
}
func (x *TaskMessage) GetDeadline() int64 {
if x != nil {
return x.Deadline
}
return 0
}
func (x *TaskMessage) GetUniqueKey() string {
if x != nil {
return x.UniqueKey
}
return ""
}
func (x *TaskMessage) GetGroupKey() string {
if x != nil {
return x.GroupKey
}
return ""
}
func (x *TaskMessage) GetRetention() int64 {
if x != nil {
return x.Retention
}
return 0
}
func (x *TaskMessage) GetCompletedAt() int64 {
if x != nil {
return x.CompletedAt
}
return 0
}
// ServerInfo holds information about a running server.
type ServerInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Host machine the server is running on.
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
// PID of the server process.
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
// Unique identifier for this server.
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
// Maximum number of concurrent workers this server will use.
Concurrency int32 `protobuf:"varint,4,opt,name=concurrency,proto3" json:"concurrency,omitempty"`
// List of queue names with their priorities.
// The server will consume tasks from the queues and prioritize
// queues with higher priority numbers.
Queues map[string]int32 `protobuf:"bytes,5,rep,name=queues,proto3" json:"queues,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
// If set, the server will always consume tasks from a queue with higher
// priority.
StrictPriority bool `protobuf:"varint,6,opt,name=strict_priority,json=strictPriority,proto3" json:"strict_priority,omitempty"`
// Status indicates the status of the server.
Status string `protobuf:"bytes,7,opt,name=status,proto3" json:"status,omitempty"`
// Time this server was started.
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
// Number of workers currently processing tasks.
ActiveWorkerCount int32 `protobuf:"varint,9,opt,name=active_worker_count,json=activeWorkerCount,proto3" json:"active_worker_count,omitempty"`
}
func (x *ServerInfo) Reset() {
*x = ServerInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ServerInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ServerInfo) ProtoMessage() {}
func (x *ServerInfo) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ServerInfo.ProtoReflect.Descriptor instead.
func (*ServerInfo) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{1}
}
func (x *ServerInfo) GetHost() string {
if x != nil {
return x.Host
}
return ""
}
func (x *ServerInfo) GetPid() int32 {
if x != nil {
return x.Pid
}
return 0
}
func (x *ServerInfo) GetServerId() string {
if x != nil {
return x.ServerId
}
return ""
}
func (x *ServerInfo) GetConcurrency() int32 {
if x != nil {
return x.Concurrency
}
return 0
}
func (x *ServerInfo) GetQueues() map[string]int32 {
if x != nil {
return x.Queues
}
return nil
}
func (x *ServerInfo) GetStrictPriority() bool {
if x != nil {
return x.StrictPriority
}
return false
}
func (x *ServerInfo) GetStatus() string {
if x != nil {
return x.Status
}
return ""
}
func (x *ServerInfo) GetStartTime() *timestamppb.Timestamp {
if x != nil {
return x.StartTime
}
return nil
}
func (x *ServerInfo) GetActiveWorkerCount() int32 {
if x != nil {
return x.ActiveWorkerCount
}
return 0
}
// WorkerInfo holds information about a running worker.
type WorkerInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Host machine this worker is running on.
Host string `protobuf:"bytes,1,opt,name=host,proto3" json:"host,omitempty"`
// PID of the process in which this worker is running.
Pid int32 `protobuf:"varint,2,opt,name=pid,proto3" json:"pid,omitempty"`
// ID of the server in which this worker is running.
ServerId string `protobuf:"bytes,3,opt,name=server_id,json=serverId,proto3" json:"server_id,omitempty"`
// ID of the task this worker is processing.
TaskId string `protobuf:"bytes,4,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
// Type of the task this worker is processing.
TaskType string `protobuf:"bytes,5,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
// Payload of the task this worker is processing.
TaskPayload []byte `protobuf:"bytes,6,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
// Name of the queue to which the task the worker is processing belongs.
Queue string `protobuf:"bytes,7,opt,name=queue,proto3" json:"queue,omitempty"`
// Time this worker started processing the task.
StartTime *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
// Deadline by which the worker needs to complete processing
// the task. If worker exceeds the deadline, the task will fail.
Deadline *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=deadline,proto3" json:"deadline,omitempty"`
}
func (x *WorkerInfo) Reset() {
*x = WorkerInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *WorkerInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*WorkerInfo) ProtoMessage() {}
func (x *WorkerInfo) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use WorkerInfo.ProtoReflect.Descriptor instead.
func (*WorkerInfo) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{2}
}
func (x *WorkerInfo) GetHost() string {
if x != nil {
return x.Host
}
return ""
}
func (x *WorkerInfo) GetPid() int32 {
if x != nil {
return x.Pid
}
return 0
}
func (x *WorkerInfo) GetServerId() string {
if x != nil {
return x.ServerId
}
return ""
}
func (x *WorkerInfo) GetTaskId() string {
if x != nil {
return x.TaskId
}
return ""
}
func (x *WorkerInfo) GetTaskType() string {
if x != nil {
return x.TaskType
}
return ""
}
func (x *WorkerInfo) GetTaskPayload() []byte {
if x != nil {
return x.TaskPayload
}
return nil
}
func (x *WorkerInfo) GetQueue() string {
if x != nil {
return x.Queue
}
return ""
}
func (x *WorkerInfo) GetStartTime() *timestamppb.Timestamp {
if x != nil {
return x.StartTime
}
return nil
}
func (x *WorkerInfo) GetDeadline() *timestamppb.Timestamp {
if x != nil {
return x.Deadline
}
return nil
}
// SchedulerEntry holds information about a periodic task registered
// with a scheduler.
type SchedulerEntry struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Identifier of the scheduler entry.
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
// Periodic schedule spec of the entry.
Spec string `protobuf:"bytes,2,opt,name=spec,proto3" json:"spec,omitempty"`
// Task type of the periodic task.
TaskType string `protobuf:"bytes,3,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
// Task payload of the periodic task.
TaskPayload []byte `protobuf:"bytes,4,opt,name=task_payload,json=taskPayload,proto3" json:"task_payload,omitempty"`
// Options used to enqueue the periodic task.
EnqueueOptions []string `protobuf:"bytes,5,rep,name=enqueue_options,json=enqueueOptions,proto3" json:"enqueue_options,omitempty"`
// Next time the task will be enqueued.
NextEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=next_enqueue_time,json=nextEnqueueTime,proto3" json:"next_enqueue_time,omitempty"`
// Last time the task was enqueued.
// Zero time if task was never enqueued.
PrevEnqueueTime *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=prev_enqueue_time,json=prevEnqueueTime,proto3" json:"prev_enqueue_time,omitempty"`
}
func (x *SchedulerEntry) Reset() {
*x = SchedulerEntry{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SchedulerEntry) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SchedulerEntry) ProtoMessage() {}
func (x *SchedulerEntry) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SchedulerEntry.ProtoReflect.Descriptor instead.
func (*SchedulerEntry) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{3}
}
func (x *SchedulerEntry) GetId() string {
if x != nil {
return x.Id
}
return ""
}
func (x *SchedulerEntry) GetSpec() string {
if x != nil {
return x.Spec
}
return ""
}
func (x *SchedulerEntry) GetTaskType() string {
if x != nil {
return x.TaskType
}
return ""
}
func (x *SchedulerEntry) GetTaskPayload() []byte {
if x != nil {
return x.TaskPayload
}
return nil
}
func (x *SchedulerEntry) GetEnqueueOptions() []string {
if x != nil {
return x.EnqueueOptions
}
return nil
}
func (x *SchedulerEntry) GetNextEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.NextEnqueueTime
}
return nil
}
func (x *SchedulerEntry) GetPrevEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.PrevEnqueueTime
}
return nil
}
// SchedulerEnqueueEvent holds information about an enqueue event
// by a scheduler.
type SchedulerEnqueueEvent struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// ID of the task that was enqueued.
TaskId string `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
// Time the task was enqueued.
EnqueueTime *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=enqueue_time,json=enqueueTime,proto3" json:"enqueue_time,omitempty"`
}
func (x *SchedulerEnqueueEvent) Reset() {
*x = SchedulerEnqueueEvent{}
if protoimpl.UnsafeEnabled {
mi := &file_asynq_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SchedulerEnqueueEvent) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SchedulerEnqueueEvent) ProtoMessage() {}
func (x *SchedulerEnqueueEvent) ProtoReflect() protoreflect.Message {
mi := &file_asynq_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SchedulerEnqueueEvent.ProtoReflect.Descriptor instead.
func (*SchedulerEnqueueEvent) Descriptor() ([]byte, []int) {
return file_asynq_proto_rawDescGZIP(), []int{4}
}
func (x *SchedulerEnqueueEvent) GetTaskId() string {
if x != nil {
return x.TaskId
}
return ""
}
func (x *SchedulerEnqueueEvent) GetEnqueueTime() *timestamppb.Timestamp {
if x != nil {
return x.EnqueueTime
}
return nil
}
var File_asynq_proto protoreflect.FileDescriptor
var file_asynq_proto_rawDesc = []byte{
0x0a, 0x0b, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61,
0x73, 0x79, 0x6e, 0x71, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x87, 0x03, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x4d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
0x02, 0x69, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x04, 0x20, 0x01,
0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x74,
0x72, 0x79, 0x18, 0x05, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x72, 0x65, 0x74, 0x72, 0x79, 0x12,
0x18, 0x0a, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x05,
0x52, 0x07, 0x72, 0x65, 0x74, 0x72, 0x69, 0x65, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x65, 0x72, 0x72,
0x6f, 0x72, 0x5f, 0x6d, 0x73, 0x67, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x65, 0x72,
0x72, 0x6f, 0x72, 0x4d, 0x73, 0x67, 0x12, 0x24, 0x0a, 0x0e, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x66,
0x61, 0x69, 0x6c, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0b, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c,
0x6c, 0x61, 0x73, 0x74, 0x46, 0x61, 0x69, 0x6c, 0x65, 0x64, 0x41, 0x74, 0x12, 0x18, 0x0a, 0x07,
0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x08, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x74,
0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
0x6e, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x5f, 0x6b, 0x65, 0x79,
0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x4b, 0x65,
0x79, 0x12, 0x1b, 0x0a, 0x09, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x0e,
0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x4b, 0x65, 0x79, 0x12, 0x1c,
0x0a, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28,
0x03, 0x52, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x21, 0x0a, 0x0c,
0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0d, 0x20, 0x01,
0x28, 0x03, 0x52, 0x0b, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x41, 0x74, 0x22,
0x8f, 0x03, 0x0a, 0x0a, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12,
0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, 0x6f,
0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52,
0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x5f, 0x69,
0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49,
0x64, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x63, 0x79,
0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65,
0x6e, 0x63, 0x79, 0x12, 0x35, 0x0a, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x18, 0x05, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x53, 0x65, 0x72, 0x76,
0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73, 0x45, 0x6e, 0x74,
0x72, 0x79, 0x52, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x0f, 0x73, 0x74,
0x72, 0x69, 0x63, 0x74, 0x5f, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18, 0x06, 0x20,
0x01, 0x28, 0x08, 0x52, 0x0e, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x50, 0x72, 0x69, 0x6f, 0x72,
0x69, 0x74, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x07, 0x20,
0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x39, 0x0a, 0x0a, 0x73,
0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61,
0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x13, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65,
0x5f, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x09, 0x20,
0x01, 0x28, 0x05, 0x52, 0x11, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x57, 0x6f, 0x72, 0x6b, 0x65,
0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x1a, 0x39, 0x0a, 0x0b, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73,
0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38,
0x01, 0x22, 0xb1, 0x02, 0x0a, 0x0a, 0x57, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f,
0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28,
0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72,
0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65,
0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x04,
0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x1b, 0x0a, 0x09,
0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52,
0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x61, 0x73,
0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14, 0x0a, 0x05,
0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65,
0x75, 0x65, 0x12, 0x39, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65,
0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x36, 0x0a,
0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x64, 0x65, 0x61,
0x64, 0x6c, 0x69, 0x6e, 0x65, 0x22, 0xad, 0x02, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x6c, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x70, 0x65, 0x63,
0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x1b, 0x0a, 0x09,
0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x61, 0x73,
0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x27, 0x0a, 0x0f,
0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18,
0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0e, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x4f, 0x70,
0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x46, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x5f, 0x65, 0x6e,
0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x6e, 0x65,
0x78, 0x74, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x46, 0x0a,
0x11, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69,
0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x70, 0x72, 0x65, 0x76, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75,
0x65, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x6f, 0x0a, 0x15, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c,
0x65, 0x72, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x17,
0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x3d, 0x0a, 0x0c, 0x65, 0x6e, 0x71, 0x75, 0x65,
0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e,
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0b, 0x65, 0x6e, 0x71, 0x75, 0x65,
0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x69, 0x62, 0x69, 0x6b, 0x65, 0x6e, 0x2f, 0x61, 0x73, 0x79,
0x6e, 0x71, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_asynq_proto_rawDescOnce sync.Once
file_asynq_proto_rawDescData = file_asynq_proto_rawDesc
)
func file_asynq_proto_rawDescGZIP() []byte {
file_asynq_proto_rawDescOnce.Do(func() {
file_asynq_proto_rawDescData = protoimpl.X.CompressGZIP(file_asynq_proto_rawDescData)
})
return file_asynq_proto_rawDescData
}
var file_asynq_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_asynq_proto_goTypes = []any{
(*TaskMessage)(nil), // 0: asynq.TaskMessage
(*ServerInfo)(nil), // 1: asynq.ServerInfo
(*WorkerInfo)(nil), // 2: asynq.WorkerInfo
(*SchedulerEntry)(nil), // 3: asynq.SchedulerEntry
(*SchedulerEnqueueEvent)(nil), // 4: asynq.SchedulerEnqueueEvent
nil, // 5: asynq.ServerInfo.QueuesEntry
(*timestamppb.Timestamp)(nil), // 6: google.protobuf.Timestamp
}
var file_asynq_proto_depIdxs = []int32{
5, // 0: asynq.ServerInfo.queues:type_name -> asynq.ServerInfo.QueuesEntry
6, // 1: asynq.ServerInfo.start_time:type_name -> google.protobuf.Timestamp
6, // 2: asynq.WorkerInfo.start_time:type_name -> google.protobuf.Timestamp
6, // 3: asynq.WorkerInfo.deadline:type_name -> google.protobuf.Timestamp
6, // 4: asynq.SchedulerEntry.next_enqueue_time:type_name -> google.protobuf.Timestamp
6, // 5: asynq.SchedulerEntry.prev_enqueue_time:type_name -> google.protobuf.Timestamp
6, // 6: asynq.SchedulerEnqueueEvent.enqueue_time:type_name -> google.protobuf.Timestamp
7, // [7:7] is the sub-list for method output_type
7, // [7:7] is the sub-list for method input_type
7, // [7:7] is the sub-list for extension type_name
7, // [7:7] is the sub-list for extension extendee
0, // [0:7] is the sub-list for field type_name
}
func init() { file_asynq_proto_init() }
func file_asynq_proto_init() {
if File_asynq_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_asynq_proto_msgTypes[0].Exporter = func(v any, i int) any {
switch v := v.(*TaskMessage); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[1].Exporter = func(v any, i int) any {
switch v := v.(*ServerInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[2].Exporter = func(v any, i int) any {
switch v := v.(*WorkerInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[3].Exporter = func(v any, i int) any {
switch v := v.(*SchedulerEntry); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_asynq_proto_msgTypes[4].Exporter = func(v any, i int) any {
switch v := v.(*SchedulerEnqueueEvent); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_asynq_proto_rawDesc,
NumEnums: 0,
NumMessages: 6,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_asynq_proto_goTypes,
DependencyIndexes: file_asynq_proto_depIdxs,
MessageInfos: file_asynq_proto_msgTypes,
}.Build()
File_asynq_proto = out.File
file_asynq_proto_rawDesc = nil
file_asynq_proto_goTypes = nil
file_asynq_proto_depIdxs = nil
}

internal/proto/asynq.proto (new file, 168 lines)

@ -0,0 +1,168 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
syntax = "proto3";
package asynq;
import "google/protobuf/timestamp.proto";
option go_package = "github.com/hibiken/asynq/internal/proto";
// TaskMessage is the internal representation of a task with additional
// metadata fields.
// Next ID: 15
message TaskMessage {
// Type indicates the kind of the task to be performed.
string type = 1;
// Payload holds data needed to process the task.
bytes payload = 2;
// Unique identifier for the task.
string id = 3;
// Name of the queue to which this task belongs.
string queue = 4;
// Max number of retries for this task.
int32 retry = 5;
// Number of times this task has been retried so far.
int32 retried = 6;
// Error message from the last failure.
string error_msg = 7;
// Time of last failure in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no last failure.
int64 last_failed_at = 11;
// Timeout specifies timeout in seconds.
// Use zero to indicate no timeout.
int64 timeout = 8;
// Deadline specifies the deadline for the task in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// Use zero to indicate no deadline.
int64 deadline = 9;
// UniqueKey holds the redis key used for uniqueness lock for this task.
// Empty string indicates that no uniqueness lock was used.
string unique_key = 10;
// GroupKey is the name of the group used for task aggregation.
// This field is optional, and an empty value means no aggregation for the task.
string group_key = 14;
// Retention period specified in seconds.
// The task will be stored in redis as a completed task until the TTL
// expires.
int64 retention = 12;
// Time when the task completed successfully, in Unix time,
// the number of seconds elapsed since January 1, 1970 UTC.
// This field is populated if result_ttl > 0 upon completion.
int64 completed_at = 13;
};
// ServerInfo holds information about a running server.
message ServerInfo {
// Host machine the server is running on.
string host = 1;
// PID of the server process.
int32 pid = 2;
// Unique identifier for this server.
string server_id = 3;
// Maximum number of concurrent workers this server will use.
int32 concurrency = 4;
// List of queue names with their priorities.
// The server will consume tasks from the queues and prioritize
// queues with higher priority numbers.
map<string, int32> queues = 5;
// If set, the server will always consume tasks from a queue with higher
// priority.
bool strict_priority = 6;
// Status indicates the status of the server.
string status = 7;
// Time this server was started.
google.protobuf.Timestamp start_time = 8;
// Number of workers currently processing tasks.
int32 active_worker_count = 9;
};
// WorkerInfo holds information about a running worker.
message WorkerInfo {
// Host machine this worker is running on.
string host = 1;
// PID of the process in which this worker is running.
int32 pid = 2;
// ID of the server in which this worker is running.
string server_id = 3;
// ID of the task this worker is processing.
string task_id = 4;
// Type of the task this worker is processing.
string task_type = 5;
// Payload of the task this worker is processing.
bytes task_payload = 6;
// Name of the queue to which the task the worker is processing belongs.
string queue = 7;
// Time this worker started processing the task.
google.protobuf.Timestamp start_time = 8;
// Deadline by which the worker needs to complete processing
// the task. If worker exceeds the deadline, the task will fail.
google.protobuf.Timestamp deadline = 9;
};
// SchedulerEntry holds information about a periodic task registered
// with a scheduler.
message SchedulerEntry {
// Identifier of the scheduler entry.
string id = 1;
// Periodic schedule spec of the entry.
string spec = 2;
// Task type of the periodic task.
string task_type = 3;
// Task payload of the periodic task.
bytes task_payload = 4;
// Options used to enqueue the periodic task.
repeated string enqueue_options = 5;
// Next time the task will be enqueued.
google.protobuf.Timestamp next_enqueue_time = 6;
// Last time the task was enqueued.
// Zero time if task was never enqueued.
google.protobuf.Timestamp prev_enqueue_time = 7;
};
// SchedulerEnqueueEvent holds information about an enqueue event
// by a scheduler.
message SchedulerEnqueueEvent {
// ID of the task that was enqueued.
string task_id = 1;
// Time the task was enqueued.
google.protobuf.Timestamp enqueue_time = 2;
};
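The generated Go types above round-trip through the standard protobuf runtime. Below is a minimal sketch of serializing and deserializing a TaskMessage; the field values are illustrative, and the import assumes code living inside the asynq module, since internal/proto is not importable from outside it.

package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	pb "github.com/hibiken/asynq/internal/proto"
)

func main() {
	orig := &pb.TaskMessage{
		Type:    "email:send",             // kind of task to be performed
		Payload: []byte(`{"user_id":42}`), // opaque payload bytes
		Id:      "some-task-id",
		Queue:   "default",
		Retry:   25,
		Timeout: 1800, // seconds; zero means no timeout
	}

	data, err := proto.Marshal(orig)
	if err != nil {
		log.Fatal(err)
	}

	decoded := &pb.TaskMessage{}
	if err := proto.Unmarshal(data, decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded.GetType(), decoded.GetQueue(), decoded.GetRetry())
}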


@ -5,37 +5,273 @@
package rdb
import (
"context"
"fmt"
"testing"
"time"
"github.com/go-redis/redis/v7"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/testutil"
)
func BenchmarkDone(b *testing.B) {
r := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
DB: 8,
})
h.FlushDB(b, r)
// populate in-progress queue with messages
var inProgress []*base.TaskMessage
for i := 0; i < 40; i++ {
inProgress = append(inProgress,
h.NewTaskMessage("send_email", map[string]interface{}{"subject": "hello", "recipient_id": 123}))
}
h.SeedInProgressQueue(b, r, inProgress)
rdb := NewRDB(r)
func BenchmarkEnqueue(b *testing.B) {
r := setup(b)
ctx := context.Background()
msg := testutil.NewTaskMessage("task1", nil)
b.ResetTimer()
for n := 0; n < b.N; n++ {
for i := 0; i < b.N; i++ {
b.StopTimer()
msg := h.NewTaskMessage("reindex", map[string]interface{}{"config": "path/to/config/file"})
r.LPush(base.InProgressQueue, h.MustMarshal(b, msg))
testutil.FlushDB(b, r.client)
b.StartTimer()
rdb.Done(msg)
if err := r.Enqueue(ctx, msg); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
}
func BenchmarkEnqueueUnique(b *testing.B) {
r := setup(b)
ctx := context.Background()
msg := &base.TaskMessage{
Type: "task1",
Payload: nil,
Queue: base.DefaultQueueName,
UniqueKey: base.UniqueKey("default", "task1", nil),
}
uniqueTTL := 5 * time.Minute
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
b.StartTimer()
if err := r.EnqueueUnique(ctx, msg, uniqueTTL); err != nil {
b.Fatalf("EnqueueUnique failed: %v", err)
}
}
}
func BenchmarkSchedule(b *testing.B) {
r := setup(b)
ctx := context.Background()
msg := testutil.NewTaskMessage("task1", nil)
processAt := time.Now().Add(3 * time.Minute)
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
b.StartTimer()
if err := r.Schedule(ctx, msg, processAt); err != nil {
b.Fatalf("Schedule failed: %v", err)
}
}
}
func BenchmarkScheduleUnique(b *testing.B) {
r := setup(b)
ctx := context.Background()
msg := &base.TaskMessage{
Type: "task1",
Payload: nil,
Queue: base.DefaultQueueName,
UniqueKey: base.UniqueKey("default", "task1", nil),
}
processAt := time.Now().Add(3 * time.Minute)
uniqueTTL := 5 * time.Minute
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
b.StartTimer()
if err := r.ScheduleUnique(ctx, msg, processAt, uniqueTTL); err != nil {
b.Fatalf("EnqueueUnique failed: %v", err)
}
}
}
func BenchmarkDequeueSingleQueue(b *testing.B) {
r := setup(b)
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
for i := 0; i < 10; i++ {
m := testutil.NewTaskMessageWithQueue(
fmt.Sprintf("task%d", i), nil, base.DefaultQueueName)
if err := r.Enqueue(ctx, m); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
b.StartTimer()
if _, _, err := r.Dequeue(base.DefaultQueueName); err != nil {
b.Fatalf("Dequeue failed: %v", err)
}
}
}
func BenchmarkDequeueMultipleQueues(b *testing.B) {
qnames := []string{"critical", "default", "low"}
r := setup(b)
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
for i := 0; i < 10; i++ {
for _, qname := range qnames {
m := testutil.NewTaskMessageWithQueue(
fmt.Sprintf("%s_task%d", qname, i), nil, qname)
if err := r.Enqueue(ctx, m); err != nil {
b.Fatalf("Enqueue failed: %v", err)
}
}
}
b.StartTimer()
if _, _, err := r.Dequeue(qnames...); err != nil {
b.Fatalf("Dequeue failed: %v", err)
}
}
}
func BenchmarkDone(b *testing.B) {
r := setup(b)
m1 := testutil.NewTaskMessage("task1", nil)
m2 := testutil.NewTaskMessage("task2", nil)
m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Done(ctx, msgs[0]); err != nil {
b.Fatalf("Done failed: %v", err)
}
}
}
func BenchmarkRetry(b *testing.B) {
r := setup(b)
m1 := testutil.NewTaskMessage("task1", nil)
m2 := testutil.NewTaskMessage("task2", nil)
m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Retry(ctx, msgs[0], time.Now().Add(1*time.Minute), "error", true /*isFailure*/); err != nil {
b.Fatalf("Retry failed: %v", err)
}
}
}
func BenchmarkArchive(b *testing.B) {
r := setup(b)
m1 := testutil.NewTaskMessage("task1", nil)
m2 := testutil.NewTaskMessage("task2", nil)
m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Archive(ctx, msgs[0], "error"); err != nil {
b.Fatalf("Archive failed: %v", err)
}
}
}
func BenchmarkRequeue(b *testing.B) {
r := setup(b)
m1 := testutil.NewTaskMessage("task1", nil)
m2 := testutil.NewTaskMessage("task2", nil)
m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
}
ctx := context.Background()
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.Requeue(ctx, msgs[0]); err != nil {
b.Fatalf("Requeue failed: %v", err)
}
}
}
func BenchmarkCheckAndEnqueue(b *testing.B) {
r := setup(b)
now := time.Now()
var zs []base.Z
for i := -100; i < 100; i++ {
msg := testutil.NewTaskMessage(fmt.Sprintf("task%d", i), nil)
score := now.Add(time.Duration(i) * time.Second).Unix()
zs = append(zs, base.Z{Message: msg, Score: score})
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
b.StopTimer()
testutil.FlushDB(b, r.client)
testutil.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName)
b.StartTimer()
if err := r.ForwardIfReady(base.DefaultQueueName); err != nil {
b.Fatalf("ForwardIfReady failed: %v", err)
}
}
}
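These benchmarks all follow the same pattern: the timer is stopped while the database is flushed and re-seeded, then restarted around the single operation under measurement, so only the Redis round-trip is timed. Assuming a local Redis server is available for the setup helper to connect to, they can be run with `go test -run='^$' -bench=. ./internal/rdb` (the `-run='^$'` skips the unit tests).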

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -6,15 +6,16 @@
package testbroker
import (
"context"
"errors"
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/base"
"github.com/redis/go-redis/v9"
)
var errRedisDown = errors.New("asynqtest: redis is down")
var errRedisDown = errors.New("testutil: redis is down")
// TestBroker is a broker implementation that makes it possible to
// simulate Redis failure in tests.
@ -26,6 +27,9 @@ type TestBroker struct {
real base.Broker
}
// Make sure TestBroker implements Broker interface at compile time.
var _ base.Broker = (*TestBroker)(nil)
func NewTestBroker(b base.Broker) *TestBroker {
return &TestBroker{real: b}
}
@ -42,94 +46,130 @@ func (tb *TestBroker) Wakeup() {
tb.sleeping = false
}
func (tb *TestBroker) Enqueue(msg *base.TaskMessage) error {
func (tb *TestBroker) Enqueue(ctx context.Context, msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Enqueue(msg)
return tb.real.Enqueue(ctx, msg)
}
func (tb *TestBroker) EnqueueUnique(msg *base.TaskMessage, ttl time.Duration) error {
func (tb *TestBroker) EnqueueUnique(ctx context.Context, msg *base.TaskMessage, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.EnqueueUnique(msg, ttl)
return tb.real.EnqueueUnique(ctx, msg, ttl)
}
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, error) {
func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.Dequeue(qnames...)
}
func (tb *TestBroker) Done(ctx context.Context, msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Done(ctx, msg)
}
func (tb *TestBroker) MarkAsComplete(ctx context.Context, msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.MarkAsComplete(ctx, msg)
}
func (tb *TestBroker) Requeue(ctx context.Context, msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(ctx, msg)
}
func (tb *TestBroker) Schedule(ctx context.Context, msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(ctx, msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(ctx context.Context, msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(ctx, msg, processAt, ttl)
}
func (tb *TestBroker) Retry(ctx context.Context, msg *base.TaskMessage, processAt time.Time, errMsg string, isFailure bool) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(ctx, msg, processAt, errMsg, isFailure)
}
func (tb *TestBroker) Archive(ctx context.Context, msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Archive(ctx, msg, errMsg)
}
func (tb *TestBroker) ForwardIfReady(qnames ...string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ForwardIfReady(qnames...)
}
func (tb *TestBroker) DeleteExpiredCompletedTasks(qname string, batchSize int) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.DeleteExpiredCompletedTasks(qname, batchSize)
}
func (tb *TestBroker) ListLeaseExpired(cutoff time.Time, qnames ...string) ([]*base.TaskMessage, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.Dequeue(qnames...)
return tb.real.ListLeaseExpired(cutoff, qnames...)
}
func (tb *TestBroker) Done(msg *base.TaskMessage) error {
func (tb *TestBroker) ExtendLease(qname string, ids ...string) (time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
return time.Time{}, errRedisDown
}
return tb.real.Done(msg)
}
func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Requeue(msg)
}
func (tb *TestBroker) Schedule(msg *base.TaskMessage, processAt time.Time) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Schedule(msg, processAt)
}
func (tb *TestBroker) ScheduleUnique(msg *base.TaskMessage, processAt time.Time, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ScheduleUnique(msg, processAt, ttl)
}
func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Retry(msg, processAt, errMsg)
}
func (tb *TestBroker) Kill(msg *base.TaskMessage, errMsg string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Kill(msg, errMsg)
}
func (tb *TestBroker) CheckAndEnqueue() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.CheckAndEnqueue()
return tb.real.ExtendLease(qname, ids...)
}
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
@ -168,6 +208,24 @@ func (tb *TestBroker) PublishCancelation(id string) error {
return tb.real.PublishCancelation(id)
}
func (tb *TestBroker) WriteResult(qname, id string, data []byte) (int, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return 0, errRedisDown
}
return tb.real.WriteResult(qname, id, data)
}
func (tb *TestBroker) Ping() error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.Ping()
}
func (tb *TestBroker) Close() error {
tb.mu.Lock()
defer tb.mu.Unlock()
@ -176,3 +234,66 @@ func (tb *TestBroker) Close() error {
}
return tb.real.Close()
}
func (tb *TestBroker) AddToGroup(ctx context.Context, msg *base.TaskMessage, gname string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.AddToGroup(ctx, msg, gname)
}
func (tb *TestBroker) AddToGroupUnique(ctx context.Context, msg *base.TaskMessage, gname string, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.AddToGroupUnique(ctx, msg, gname, ttl)
}
func (tb *TestBroker) ListGroups(qname string) ([]string, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.ListGroups(qname)
}
func (tb *TestBroker) AggregationCheck(qname, gname string, t time.Time, gracePeriod, maxDelay time.Duration, maxSize int) (aggregationSetID string, err error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return "", errRedisDown
}
return tb.real.AggregationCheck(qname, gname, t, gracePeriod, maxDelay, maxSize)
}
func (tb *TestBroker) ReadAggregationSet(qname, gname, aggregationSetID string) ([]*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.ReadAggregationSet(qname, gname, aggregationSetID)
}
func (tb *TestBroker) DeleteAggregationSet(ctx context.Context, qname, gname, aggregationSetID string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.DeleteAggregationSet(ctx, qname, gname, aggregationSetID)
}
func (tb *TestBroker) ReclaimStaleAggregationSets(qname string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ReclaimStaleAggregationSets(qname)
}
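A sketch of how a test might drive TestBroker to simulate an outage. It assumes a Sleep method (the counterpart of Wakeup above) that puts the broker into the failing state, and the Redis address and DB number are illustrative.

package testbroker_test

import (
	"testing"

	"github.com/hibiken/asynq/internal/rdb"
	"github.com/hibiken/asynq/internal/testbroker"
	"github.com/redis/go-redis/v9"
)

func TestSimulatedRedisDown(t *testing.T) {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379", DB: 8})
	broker := testbroker.NewTestBroker(rdb.NewRDB(client))

	broker.Sleep() // every broker method now returns the "redis is down" error
	if err := broker.Ping(); err == nil {
		t.Error("Ping succeeded; want simulated failure")
	}

	broker.Wakeup() // calls are delegated to the real broker again
	if err := broker.Ping(); err != nil {
		t.Errorf("Ping failed after Wakeup: %v", err)
	}
}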


@ -0,0 +1,84 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package testutil
import (
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func makeDefaultTaskMessage() *base.TaskMessage {
return &base.TaskMessage{
ID: uuid.NewString(),
Type: "default_task",
Queue: "default",
Retry: 25,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
}
}
type TaskMessageBuilder struct {
msg *base.TaskMessage
}
func NewTaskMessageBuilder() *TaskMessageBuilder {
return &TaskMessageBuilder{}
}
func (b *TaskMessageBuilder) lazyInit() {
if b.msg == nil {
b.msg = makeDefaultTaskMessage()
}
}
func (b *TaskMessageBuilder) Build() *base.TaskMessage {
b.lazyInit()
return b.msg
}
func (b *TaskMessageBuilder) SetType(typename string) *TaskMessageBuilder {
b.lazyInit()
b.msg.Type = typename
return b
}
func (b *TaskMessageBuilder) SetPayload(payload []byte) *TaskMessageBuilder {
b.lazyInit()
b.msg.Payload = payload
return b
}
func (b *TaskMessageBuilder) SetQueue(qname string) *TaskMessageBuilder {
b.lazyInit()
b.msg.Queue = qname
return b
}
func (b *TaskMessageBuilder) SetRetry(n int) *TaskMessageBuilder {
b.lazyInit()
b.msg.Retry = n
return b
}
func (b *TaskMessageBuilder) SetTimeout(timeout time.Duration) *TaskMessageBuilder {
b.lazyInit()
b.msg.Timeout = int64(timeout.Seconds())
return b
}
func (b *TaskMessageBuilder) SetDeadline(deadline time.Time) *TaskMessageBuilder {
b.lazyInit()
b.msg.Deadline = deadline.Unix()
return b
}
func (b *TaskMessageBuilder) SetGroup(gname string) *TaskMessageBuilder {
b.lazyInit()
b.msg.GroupKey = gname
return b
}
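In a test, the builder reads as a fluent chain, and unset fields keep the defaults above; the type and queue names in this short illustration are made up, and the test file that follows exercises the builder in full.

msg := testutil.NewTaskMessageBuilder().
	SetType("image:resize").
	SetQueue("critical").
	SetRetry(5).
	SetTimeout(2 * time.Minute).
	Build()
// msg.Timeout == 120 (durations are stored as whole seconds),
// and msg.ID is a freshly generated UUID.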


@ -0,0 +1,94 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package testutil
import (
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/hibiken/asynq/internal/base"
)
func TestTaskMessageBuilder(t *testing.T) {
tests := []struct {
desc string
ops func(b *TaskMessageBuilder) // operations to perform on the builder
want *base.TaskMessage
}{
{
desc: "zero value and build",
ops: nil,
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 25,
Timeout: 1800, // 30m
Deadline: 0,
},
},
{
desc: "with type, payload, and queue",
ops: func(b *TaskMessageBuilder) {
b.SetType("foo").SetPayload([]byte("hello")).SetQueue("myqueue")
},
want: &base.TaskMessage{
Type: "foo",
Queue: "myqueue",
Payload: []byte("hello"),
Retry: 25,
Timeout: 1800, // 30m
Deadline: 0,
},
},
{
desc: "with retry, timeout, and deadline",
ops: func(b *TaskMessageBuilder) {
b.SetRetry(1).
SetTimeout(20 * time.Second).
SetDeadline(time.Date(2017, 3, 6, 0, 0, 0, 0, time.UTC))
},
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 1,
Timeout: 20,
Deadline: time.Date(2017, 3, 6, 0, 0, 0, 0, time.UTC).Unix(),
},
},
{
desc: "with group",
ops: func(b *TaskMessageBuilder) {
b.SetGroup("mygroup")
},
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 25,
Timeout: 1800,
Deadline: 0,
GroupKey: "mygroup",
},
},
}
cmpOpts := []cmp.Option{cmpopts.IgnoreFields(base.TaskMessage{}, "ID")}
for _, tc := range tests {
var b TaskMessageBuilder
if tc.ops != nil {
tc.ops(&b)
}
got := b.Build()
if diff := cmp.Diff(tc.want, got, cmpOpts...); diff != "" {
t.Errorf("%s: TaskMessageBuilder.Build() = %+v, want %+v;\n(-want,+got)\n%s",
tc.desc, got, tc.want, diff)
}
}
}


@ -0,0 +1,642 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package testutil defines test helpers for asynq and its internal packages.
package testutil
import (
"context"
"encoding/json"
"math"
"sort"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/timeutil"
"github.com/redis/go-redis/v9"
)
// EquateInt64Approx returns a Comparer option that treats int64 values
// as equal if they are within the given margin.
func EquateInt64Approx(margin int64) cmp.Option {
return cmp.Comparer(func(a, b int64) bool {
return math.Abs(float64(a-b)) <= float64(margin)
})
}
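// A minimal sketch of using EquateInt64Approx: comparing two Unix timestamps
// that are allowed to differ by up to two seconds (the margin is illustrative).
func exampleEquateInt64Approx(tb testing.TB, want, got int64) {
	if diff := cmp.Diff(want, got, EquateInt64Approx(2)); diff != "" {
		tb.Errorf("timestamp mismatch: (-want,+got)\n%s", diff)
	}
}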
// SortMsgOpt is a cmp.Option to sort base.TaskMessage for comparing slices of task messages.
var SortMsgOpt = cmp.Transformer("SortTaskMessages", func(in []*base.TaskMessage) []*base.TaskMessage {
out := append([]*base.TaskMessage(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID < out[j].ID
})
return out
})
// SortZSetEntryOpt is a cmp.Option to sort ZSetEntry for comparing slices of zset entries.
var SortZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []base.Z) []base.Z {
out := append([]base.Z(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Message.ID < out[j].Message.ID
})
return out
})
// SortServerInfoOpt is a cmp.Option to sort base.ServerInfo for comparing slices of server info.
var SortServerInfoOpt = cmp.Transformer("SortServerInfo", func(in []*base.ServerInfo) []*base.ServerInfo {
out := append([]*base.ServerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
if out[i].Host != out[j].Host {
return out[i].Host < out[j].Host
}
return out[i].PID < out[j].PID
})
return out
})
// SortWorkerInfoOpt is a cmp.Option to sort base.WorkerInfo for comparing slices of worker info.
var SortWorkerInfoOpt = cmp.Transformer("SortWorkerInfo", func(in []*base.WorkerInfo) []*base.WorkerInfo {
out := append([]*base.WorkerInfo(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].ID < out[j].ID
})
return out
})
// SortSchedulerEntryOpt is a cmp.Option to sort base.SchedulerEntry for comparing slices of entries.
var SortSchedulerEntryOpt = cmp.Transformer("SortSchedulerEntry", func(in []*base.SchedulerEntry) []*base.SchedulerEntry {
out := append([]*base.SchedulerEntry(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
return out[i].Spec < out[j].Spec
})
return out
})
// SortSchedulerEnqueueEventOpt is a cmp.Option to sort base.SchedulerEnqueueEvent for comparing slices of events.
var SortSchedulerEnqueueEventOpt = cmp.Transformer("SortSchedulerEnqueueEvent", func(in []*base.SchedulerEnqueueEvent) []*base.SchedulerEnqueueEvent {
out := append([]*base.SchedulerEnqueueEvent(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].EnqueuedAt.Unix() < out[j].EnqueuedAt.Unix()
})
return out
})
// SortStringSliceOpt is a cmp.Option to sort string slices.
var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []string {
out := append([]string(nil), in...)
sort.Strings(out)
return out
})
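// SortRedisZSetEntryOpt is a cmp.Option to sort redis.Z entries for comparing slices of redis zset entries.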
var SortRedisZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []redis.Z) []redis.Z {
out := append([]redis.Z(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
// TODO: If member is a comparable type (int, string, etc.), compare by the member.
// Use a generic comparable type here once we update to go1.18.
if _, ok := out[i].Member.(string); ok {
// If member is a string, compare the member
return out[i].Member.(string) < out[j].Member.(string)
}
return out[i].Score < out[j].Score
})
return out
})
// IgnoreIDOpt is a cmp.Option to ignore the ID field in task messages when comparing.
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
// NewTaskMessage returns a new instance of TaskMessage given a task type and payload.
func NewTaskMessage(taskType string, payload []byte) *base.TaskMessage {
return NewTaskMessageWithQueue(taskType, payload, base.DefaultQueueName)
}
// NewTaskMessageWithQueue returns a new instance of TaskMessage given a
// task type, payload and queue name.
func NewTaskMessageWithQueue(taskType string, payload []byte, qname string) *base.TaskMessage {
return &base.TaskMessage{
ID: uuid.NewString(),
Type: taskType,
Queue: qname,
Retry: 25,
Payload: payload,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
}
}
// NewLeaseWithClock returns a new lease with the given expiration time and clock.
func NewLeaseWithClock(expirationTime time.Time, clock timeutil.Clock) *base.Lease {
l := base.NewLease(expirationTime)
l.Clock = clock
return l
}
// JSON serializes the given key-value pairs into a stream of JSON bytes.
func JSON(kv map[string]interface{}) []byte {
b, err := json.Marshal(kv)
if err != nil {
panic(err)
}
return b
}
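// A minimal sketch of the JSON helper above; the "user_id" key is a
// hypothetical payload field.
func exampleJSONPayload() []byte {
	return JSON(map[string]interface{}{"user_id": 42})
}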
// TaskMessageAfterRetry returns an updated copy of t after retry.
// It increments retry count and sets the error message and last_failed_at time.
func TaskMessageAfterRetry(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.Retried = t.Retried + 1
t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t
}
// TaskMessageWithError returns an updated copy of t with the given error message.
func TaskMessageWithError(t base.TaskMessage, errMsg string, failedAt time.Time) *base.TaskMessage {
t.ErrorMsg = errMsg
t.LastFailedAt = failedAt.Unix()
return &t
}
// TaskMessageWithCompletedAt returns an updated copy of t after completion.
func TaskMessageWithCompletedAt(t base.TaskMessage, completedAt time.Time) *base.TaskMessage {
t.CompletedAt = completedAt.Unix()
return &t
}
// MustMarshal marshals the given task message and returns a JSON string.
// The calling test will fail if marshaling errors out.
func MustMarshal(tb testing.TB, msg *base.TaskMessage) string {
tb.Helper()
data, err := base.EncodeMessage(msg)
if err != nil {
tb.Fatal(err)
}
return string(data)
}
// MustUnmarshal unmarshals the given string into a task message struct.
// The calling test will fail if unmarshaling errors out.
func MustUnmarshal(tb testing.TB, data string) *base.TaskMessage {
tb.Helper()
msg, err := base.DecodeMessage([]byte(data))
if err != nil {
tb.Fatal(err)
}
return msg
}
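// A minimal sketch showing the two helpers round-tripping a message; the task
// type "email:send" is a hypothetical value.
func exampleMarshalRoundTrip(tb testing.TB) {
	in := NewTaskMessage("email:send", nil)
	out := MustUnmarshal(tb, MustMarshal(tb, in))
	if diff := cmp.Diff(in, out); diff != "" {
		tb.Errorf("round-trip mismatch: (-want,+got)\n%s", diff)
	}
}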
// FlushDB deletes all the keys of the currently selected DB.
func FlushDB(tb testing.TB, r redis.UniversalClient) {
tb.Helper()
switch r := r.(type) {
case *redis.Client:
if err := r.FlushDB(context.Background()).Err(); err != nil {
tb.Fatal(err)
}
case *redis.ClusterClient:
err := r.ForEachMaster(context.Background(), func(ctx context.Context, c *redis.Client) error {
if err := c.FlushAll(ctx).Err(); err != nil {
return err
}
return nil
})
if err != nil {
tb.Fatal(err)
}
}
}
// SeedPendingQueue initializes the specified queue with the given messages.
func SeedPendingQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisList(tb, r, base.PendingKey(qname), msgs, base.TaskStatePending)
}
// SeedActiveQueue initializes the active queue with the given messages.
func SeedActiveQueue(tb testing.TB, r redis.UniversalClient, msgs []*base.TaskMessage, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisList(tb, r, base.ActiveKey(qname), msgs, base.TaskStateActive)
}
// SeedScheduledQueue initializes the scheduled queue with the given messages.
func SeedScheduledQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.ScheduledKey(qname), entries, base.TaskStateScheduled)
}
// SeedRetryQueue initializes the retry queue with the given messages.
func SeedRetryQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.RetryKey(qname), entries, base.TaskStateRetry)
}
// SeedArchivedQueue initializes the archived queue with the given messages.
func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.ArchivedKey(qname), entries, base.TaskStateArchived)
}
// SeedLease initializes the lease set with the given entries.
func SeedLease(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.LeaseKey(qname), entries, base.TaskStateActive)
}
// SeedCompletedQueue initializes the completed set with the given entries.
func SeedCompletedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.CompletedKey(qname), entries, base.TaskStateCompleted)
}
// SeedGroup initializes the group with the given entries.
func SeedGroup(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname, gname string) {
tb.Helper()
ctx := context.Background()
r.SAdd(ctx, base.AllQueues, qname)
r.SAdd(ctx, base.AllGroups(qname), gname)
seedRedisZSet(tb, r, base.GroupKey(qname, gname), entries, base.TaskStateAggregating)
}
func SeedAggregationSet(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname, gname, setID string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.AggregationSetKey(qname, gname, setID), entries, base.TaskStateAggregating)
}
// SeedAllPendingQueues initializes all of the specified queues with the given messages.
//
// pending maps a queue name to a list of messages.
func SeedAllPendingQueues(tb testing.TB, r redis.UniversalClient, pending map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range pending {
SeedPendingQueue(tb, r, msgs, q)
}
}
// SeedAllActiveQueues initializes all of the specified active queues with the given messages.
func SeedAllActiveQueues(tb testing.TB, r redis.UniversalClient, active map[string][]*base.TaskMessage) {
tb.Helper()
for q, msgs := range active {
SeedActiveQueue(tb, r, msgs, q)
}
}
// SeedAllScheduledQueues initializes all of the specified scheduled queues with the given entries.
func SeedAllScheduledQueues(tb testing.TB, r redis.UniversalClient, scheduled map[string][]base.Z) {
tb.Helper()
for q, entries := range scheduled {
SeedScheduledQueue(tb, r, entries, q)
}
}
// SeedAllRetryQueues initializes all of the specified retry queues with the given entries.
func SeedAllRetryQueues(tb testing.TB, r redis.UniversalClient, retry map[string][]base.Z) {
tb.Helper()
for q, entries := range retry {
SeedRetryQueue(tb, r, entries, q)
}
}
// SeedAllArchivedQueues initializes all of the specified archived queues with the given entries.
func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[string][]base.Z) {
tb.Helper()
for q, entries := range archived {
SeedArchivedQueue(tb, r, entries, q)
}
}
// SeedAllLease initializes all of the lease sets with the given entries.
func SeedAllLease(tb testing.TB, r redis.UniversalClient, lease map[string][]base.Z) {
tb.Helper()
for q, entries := range lease {
SeedLease(tb, r, entries, q)
}
}
// SeedAllCompletedQueues initializes all of the completed queues with the given entries.
func SeedAllCompletedQueues(tb testing.TB, r redis.UniversalClient, completed map[string][]base.Z) {
tb.Helper()
for q, entries := range completed {
SeedCompletedQueue(tb, r, entries, q)
}
}
// SeedAllGroups initializes all groups in all queues.
// The map maps queue names to group names, which in turn map to a list of
// task messages and the time each was added to the group.
func SeedAllGroups(tb testing.TB, r redis.UniversalClient, groups map[string]map[string][]base.Z) {
tb.Helper()
for qname, g := range groups {
for gname, entries := range g {
SeedGroup(tb, r, entries, qname, gname)
}
}
}
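// A minimal sketch of the nested map shape SeedAllGroups expects; the queue
// name "default" and group name "notifications" are hypothetical.
func exampleSeedAllGroups(tb testing.TB, r redis.UniversalClient, msg *base.TaskMessage, now time.Time) {
	SeedAllGroups(tb, r, map[string]map[string][]base.Z{
		"default": {
			"notifications": {{Message: msg, Score: now.Unix()}},
		},
	})
}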
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string,
msgs []*base.TaskMessage, state base.TaskState) {
tb.Helper()
for _, msg := range msgs {
encoded := MustMarshal(tb, msg)
if err := c.LPush(context.Background(), key, msg.ID).Err(); err != nil {
tb.Fatal(err)
}
taskKey := base.TaskKey(msg.Queue, msg.ID)
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"unique_key": msg.UniqueKey,
"group": msg.GroupKey,
}
if err := c.HSet(context.Background(), taskKey, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(context.Background(), msg.UniqueKey, msg.ID, 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}
func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string,
items []base.Z, state base.TaskState) {
tb.Helper()
for _, item := range items {
msg := item.Message
encoded := MustMarshal(tb, msg)
z := redis.Z{Member: msg.ID, Score: float64(item.Score)}
if err := c.ZAdd(context.Background(), key, z).Err(); err != nil {
tb.Fatal(err)
}
taskKey := base.TaskKey(msg.Queue, msg.ID)
data := map[string]interface{}{
"msg": encoded,
"state": state.String(),
"unique_key": msg.UniqueKey,
"group": msg.GroupKey,
}
if err := c.HSet(context.Background(), taskKey, data).Err(); err != nil {
tb.Fatal(err)
}
if len(msg.UniqueKey) > 0 {
err := c.SetNX(context.Background(), msg.UniqueKey, msg.ID, 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}
// GetPendingMessages returns all pending messages in the given queue.
// It also asserts the state field of the task.
func GetPendingMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromList(tb, r, qname, base.PendingKey, base.TaskStatePending)
}
// GetActiveMessages returns all active messages in the given queue.
// It also asserts the state field of the task.
func GetActiveMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromList(tb, r, qname, base.ActiveKey, base.TaskStateActive)
}
// GetScheduledMessages returns all scheduled task messages in the given queue.
// It also asserts the state field of the task.
func GetScheduledMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromZSet(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
}
// GetRetryMessages returns all retry messages in the given queue.
// It also asserts the state field of the task.
func GetRetryMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromZSet(tb, r, qname, base.RetryKey, base.TaskStateRetry)
}
// GetArchivedMessages returns all archived messages in the given queue.
// It also asserts the state field of the task.
func GetArchivedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromZSet(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
}
// GetCompletedMessages returns all completed task messages in the given queue.
// It also asserts the state field of the task.
func GetCompletedMessages(tb testing.TB, r redis.UniversalClient, qname string) []*base.TaskMessage {
tb.Helper()
return getMessagesFromZSet(tb, r, qname, base.CompletedKey, base.TaskStateCompleted)
}
// GetScheduledEntries returns all scheduled messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetScheduledEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname, base.ScheduledKey, base.TaskStateScheduled)
}
// GetRetryEntries returns all retry messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetRetryEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname, base.RetryKey, base.TaskStateRetry)
}
// GetArchivedEntries returns all archived messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
}
// GetLeaseEntries returns all task IDs and their scores in the lease set for the given queue.
// It also asserts the state field of the task.
func GetLeaseEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname, base.LeaseKey, base.TaskStateActive)
}
// GetCompletedEntries returns all completed messages and their scores in the given queue.
// It also asserts the state field of the task.
func GetCompletedEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname, base.CompletedKey, base.TaskStateCompleted)
}
// GetGroupEntries returns all aggregating messages and their scores in the given group.
// It also asserts the state field of the task.
func GetGroupEntries(tb testing.TB, r redis.UniversalClient, qname, groupKey string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname,
func(qname string) string { return base.GroupKey(qname, groupKey) }, base.TaskStateAggregating)
}
// Retrieves all messages stored under the `keyFn(qname)` key in a redis list.
func getMessagesFromList(tb testing.TB, r redis.UniversalClient, qname string,
keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
tb.Helper()
ids := r.LRange(context.Background(), keyFn(qname), 0, -1).Val()
var msgs []*base.TaskMessage
for _, id := range ids {
taskKey := base.TaskKey(qname, id)
data := r.HGet(context.Background(), taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, data))
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
}
}
return msgs
}
// Retrieves all messages stored under the `keyFn(qname)` key in a redis zset (sorted-set).
func getMessagesFromZSet(tb testing.TB, r redis.UniversalClient, qname string,
keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
tb.Helper()
ids := r.ZRange(context.Background(), keyFn(qname), 0, -1).Val()
var msgs []*base.TaskMessage
for _, id := range ids {
taskKey := base.TaskKey(qname, id)
msg := r.HGet(context.Background(), taskKey, "msg").Val()
msgs = append(msgs, MustUnmarshal(tb, msg))
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", id, gotState, state)
}
}
return msgs
}
// Retrieves all messages along with their scores stored under the `keyFn(qname)` key in a redis zset (sorted-set).
func getMessagesFromZSetWithScores(tb testing.TB, r redis.UniversalClient,
qname string, keyFn func(qname string) string, state base.TaskState) []base.Z {
tb.Helper()
zs := r.ZRangeWithScores(context.Background(), keyFn(qname), 0, -1).Val()
var res []base.Z
for _, z := range zs {
taskID := z.Member.(string)
taskKey := base.TaskKey(qname, taskID)
msg := r.HGet(context.Background(), taskKey, "msg").Val()
res = append(res, base.Z{Message: MustUnmarshal(tb, msg), Score: int64(z.Score)})
if gotState := r.HGet(context.Background(), taskKey, "state").Val(); gotState != state.String() {
tb.Errorf("task (id=%q) is in %q state, want %v", taskID, gotState, state)
}
}
return res
}
// TaskSeedData holds the data required to seed tasks under the task key in test.
type TaskSeedData struct {
Msg *base.TaskMessage
State base.TaskState
PendingSince time.Time
}
func SeedTasks(tb testing.TB, r redis.UniversalClient, taskData []*TaskSeedData) {
for _, data := range taskData {
msg := data.Msg
ctx := context.Background()
key := base.TaskKey(msg.Queue, msg.ID)
v := map[string]interface{}{
"msg": MustMarshal(tb, msg),
"state": data.State.String(),
"unique_key": msg.UniqueKey,
"group": msg.GroupKey,
}
if !data.PendingSince.IsZero() {
v["pending_since"] = data.PendingSince.Unix()
}
if err := r.HSet(ctx, key, v).Err(); err != nil {
tb.Fatalf("Failed to write task data in redis: %v", err)
}
if len(msg.UniqueKey) > 0 {
err := r.SetNX(ctx, msg.UniqueKey, msg.ID, 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}
func SeedRedisZSets(tb testing.TB, r redis.UniversalClient, zsets map[string][]redis.Z) {
for key, zs := range zsets {
// FIXME: How come we can't simply do ZAdd(ctx, key, zs...) here?
for _, z := range zs {
if err := r.ZAdd(context.Background(), key, z).Err(); err != nil {
tb.Fatalf("Failed to seed zset (key=%q): %v", key, err)
}
}
}
}
func SeedRedisSets(tb testing.TB, r redis.UniversalClient, sets map[string][]string) {
for key, set := range sets {
SeedRedisSet(tb, r, key, set)
}
}
func SeedRedisSet(tb testing.TB, r redis.UniversalClient, key string, members []string) {
for _, mem := range members {
if err := r.SAdd(context.Background(), key, mem).Err(); err != nil {
tb.Fatalf("Failed to seed set (key=%q): %v", key, err)
}
}
}
func SeedRedisLists(tb testing.TB, r redis.UniversalClient, lists map[string][]string) {
for key, vals := range lists {
for _, v := range vals {
if err := r.LPush(context.Background(), key, v).Err(); err != nil {
tb.Fatalf("Failed to seed list (key=%q): %v", key, err)
}
}
}
}
func AssertRedisLists(t *testing.T, r redis.UniversalClient, wantLists map[string][]string) {
for key, want := range wantLists {
got, err := r.LRange(context.Background(), key, 0, -1).Result()
if err != nil {
t.Fatalf("Failed to read list (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortStringSliceOpt); diff != "" {
t.Errorf("mismatch found in list (key=%q): (-want,+got)\n%s", key, diff)
}
}
}
func AssertRedisSets(t *testing.T, r redis.UniversalClient, wantSets map[string][]string) {
for key, want := range wantSets {
got, err := r.SMembers(context.Background(), key).Result()
if err != nil {
t.Fatalf("Failed to read set (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortStringSliceOpt); diff != "" {
t.Errorf("mismatch found in set (key=%q): (-want,+got)\n%s", key, diff)
}
}
}
func AssertRedisZSets(t *testing.T, r redis.UniversalClient, wantZSets map[string][]redis.Z) {
for key, want := range wantZSets {
got, err := r.ZRangeWithScores(context.Background(), key, 0, -1).Result()
if err != nil {
t.Fatalf("Failed to read zset (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortRedisZSetEntryOpt); diff != "" {
t.Errorf("mismatch found in zset (key=%q): (-want,+got)\n%s", key, diff)
}
}
}


@ -0,0 +1,59 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
// Package timeutil exports functions and types related to time and date.
package timeutil
import (
"sync"
"time"
)
// A Clock is an object that can tell you the current time.
//
// This interface allows decoupling code that uses time from the code that creates
// a point in time. You can use this to your advantage by injecting Clocks into interfaces
// rather than having implementations call time.Now() directly.
//
// Use NewRealClock() in production.
// Use NewSimulatedClock() in tests.
type Clock interface {
Now() time.Time
}
func NewRealClock() Clock { return &realTimeClock{} }
type realTimeClock struct{}
func (*realTimeClock) Now() time.Time { return time.Now() }
// A SimulatedClock is a concrete Clock implementation that doesn't "tick" on its own.
// Time is advanced only by explicit calls to AdvanceTime() or SetTime().
// This object is safe for concurrent use.
type SimulatedClock struct {
mu sync.Mutex
t time.Time // guarded by mu
}
func NewSimulatedClock(t time.Time) *SimulatedClock {
return &SimulatedClock{t: t}
}
func (c *SimulatedClock) Now() time.Time {
c.mu.Lock()
defer c.mu.Unlock()
return c.t
}
func (c *SimulatedClock) SetTime(t time.Time) {
c.mu.Lock()
defer c.mu.Unlock()
c.t = t
}
func (c *SimulatedClock) AdvanceTime(d time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
c.t = c.t.Add(d)
}
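// A minimal sketch of freezing time in a test with the clock above; the
// start date is an arbitrary illustrative value.
func exampleSimulatedClock() time.Time {
	c := NewSimulatedClock(time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC))
	c.AdvanceTime(15 * time.Minute) // Now() reports 00:15 on 2022-01-01
	return c.Now()
}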


@ -0,0 +1,48 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package timeutil
import (
"testing"
"time"
)
func TestSimulatedClock(t *testing.T) {
now := time.Now()
tests := []struct {
desc string
initTime time.Time
advanceBy time.Duration
wantTime time.Time
}{
{
desc: "advance time forward",
initTime: now,
advanceBy: 30 * time.Second,
wantTime: now.Add(30 * time.Second),
},
{
desc: "advance time backward",
initTime: now,
advanceBy: -10 * time.Second,
wantTime: now.Add(-10 * time.Second),
},
}
for _, tc := range tests {
c := NewSimulatedClock(tc.initTime)
if c.Now() != tc.initTime {
t.Errorf("%s: Before Advance; SimulatedClock.Now() = %v, want %v", tc.desc, c.Now(), tc.initTime)
}
c.AdvanceTime(tc.advanceBy)
if c.Now() != tc.wantTime {
t.Errorf("%s: After Advance; SimulatedClock.Now() = %v, want %v", tc.desc, c.Now(), tc.wantTime)
}
}
}

janitor.go

@ -0,0 +1,86 @@
// Copyright 2021 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// A janitor is responsible for deleting expired completed tasks from the specified
// queues. It periodically checks the completed sets for expired tasks and
// deletes them.
type janitor struct {
logger *log.Logger
broker base.Broker
// channel to communicate back to the long-running "janitor" goroutine.
done chan struct{}
// list of queue names to check.
queues []string
// average interval between checks.
avgInterval time.Duration
// maximum number of expired completed tasks to delete in one batch when the janitor runs.
batchSize int
}
type janitorParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
batchSize int
}
func newJanitor(params janitorParams) *janitor {
return &janitor{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
avgInterval: params.interval,
batchSize: params.batchSize,
}
}
func (j *janitor) shutdown() {
j.logger.Debug("Janitor shutting down...")
// Signal the janitor goroutine to stop.
j.done <- struct{}{}
}
// start starts the "janitor" goroutine.
func (j *janitor) start(wg *sync.WaitGroup) {
wg.Add(1)
timer := time.NewTimer(j.avgInterval) // randomize this interval with margin of 1s
go func() {
defer wg.Done()
for {
select {
case <-j.done:
j.logger.Debug("Janitor done")
return
case <-timer.C:
j.exec()
timer.Reset(j.avgInterval)
}
}
}()
}
func (j *janitor) exec() {
for _, qname := range j.queues {
if err := j.broker.DeleteExpiredCompletedTasks(qname, j.batchSize); err != nil {
j.logger.Errorf("Failed to delete expired completed tasks from queue %q: %v",
qname, err)
}
}
}
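// A minimal construction sketch for the janitor above; the logger and broker
// come from the surrounding server setup, and the interval and batch size
// shown are illustrative values.
func exampleNewJanitor(lg *log.Logger, b base.Broker) *janitor {
	return newJanitor(janitorParams{
		logger:    lg,
		broker:    b,
		queues:    []string{"default"},
		interval:  8 * time.Second, // average delay between cleanup runs
		batchSize: 100,             // tasks deleted per queue per run
	})
}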

janitor_test.go

@ -0,0 +1,91 @@
// Copyright 2021 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func newCompletedTask(qname, tasktype string, payload []byte, completedAt time.Time) *base.TaskMessage {
msg := h.NewTaskMessageWithQueue(tasktype, payload, qname)
msg.CompletedAt = completedAt.Unix()
return msg
}
func TestJanitor(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
const interval = 1 * time.Second
const batchSize = 100
janitor := newJanitor(janitorParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "custom"},
interval: interval,
batchSize: batchSize,
})
now := time.Now()
hourAgo := now.Add(-1 * time.Hour)
minuteAgo := now.Add(-1 * time.Minute)
halfHourAgo := now.Add(-30 * time.Minute)
halfHourFromNow := now.Add(30 * time.Minute)
fiveMinFromNow := now.Add(5 * time.Minute)
msg1 := newCompletedTask("default", "task1", nil, hourAgo)
msg2 := newCompletedTask("default", "task2", nil, minuteAgo)
msg3 := newCompletedTask("custom", "task3", nil, hourAgo)
msg4 := newCompletedTask("custom", "task4", nil, minuteAgo)
tests := []struct {
completed map[string][]base.Z // initial completed sets
wantCompleted map[string][]base.Z // expected completed sets after janitor runs
}{
{
completed: map[string][]base.Z{
"default": {
{Message: msg1, Score: halfHourAgo.Unix()},
{Message: msg2, Score: fiveMinFromNow.Unix()},
},
"custom": {
{Message: msg3, Score: halfHourFromNow.Unix()},
{Message: msg4, Score: minuteAgo.Unix()},
},
},
wantCompleted: map[string][]base.Z{
"default": {
{Message: msg2, Score: fiveMinFromNow.Unix()},
},
"custom": {
{Message: msg3, Score: halfHourFromNow.Unix()},
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedAllCompletedQueues(t, r, tc.completed)
var wg sync.WaitGroup
janitor.start(&wg)
time.Sleep(2 * interval) // make sure the janitor runs at least once
janitor.shutdown()
for qname, want := range tc.wantCompleted {
got := h.GetCompletedEntries(t, r, qname)
if diff := cmp.Diff(want, got, h.SortZSetEntryOpt); diff != "" {
t.Errorf("diff found in %q after running janitor: (-want, +got)\n%s", base.CompletedKey(qname), diff)
}
}
}
}


@ -1,220 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"encoding/json"
"fmt"
"time"
"github.com/spf13/cast"
)
// Payload holds arbitrary data needed for task execution.
type Payload struct {
data map[string]interface{}
}
type errKeyNotFound struct {
key string
}
func (e *errKeyNotFound) Error() string {
return fmt.Sprintf("key %q does not exist", e.key)
}
// Has reports whether key exists.
func (p Payload) Has(key string) bool {
_, ok := p.data[key]
return ok
}
func toInt(v interface{}) (int, error) {
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return int(val), nil
default:
return cast.ToIntE(v)
}
}
// GetString returns a string value if a string type is associated with
// the key, otherwise reports an error.
func (p Payload) GetString(key string) (string, error) {
v, ok := p.data[key]
if !ok {
return "", &errKeyNotFound{key}
}
return cast.ToStringE(v)
}
// GetInt returns an int value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetInt(key string) (int, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
return toInt(v)
}
// GetFloat64 returns a float64 value if a numeric type is associated with
// the key, otherwise reports an error.
func (p Payload) GetFloat64(key string) (float64, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
return v.Float64()
default:
return cast.ToFloat64E(v)
}
}
// GetBool returns a boolean value if a boolean type is associated with
// the key, otherwise reports an error.
func (p Payload) GetBool(key string) (bool, error) {
v, ok := p.data[key]
if !ok {
return false, &errKeyNotFound{key}
}
return cast.ToBoolE(v)
}
// GetStringSlice returns a slice of strings if a string slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetStringSlice(key string) ([]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringSliceE(v)
}
// GetIntSlice returns a slice of ints if an int slice type is associated with
// the key, otherwise reports an error.
func (p Payload) GetIntSlice(key string) ([]int, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case []interface{}:
var res []int
for _, elem := range v {
val, err := toInt(elem)
if err != nil {
return nil, err
}
res = append(res, int(val))
}
return res, nil
default:
return cast.ToIntSliceE(v)
}
}
// GetStringMap returns a map of string to empty interface
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMap(key string) (map[string]interface{}, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapE(v)
}
// GetStringMapString returns a map of string to string
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapString(key string) (map[string]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapStringE(v)
}
// GetStringMapStringSlice returns a map of string to string slice
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapStringSlice(key string) (map[string][]string, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapStringSliceE(v)
}
// GetStringMapInt returns a map of string to int
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapInt(key string) (map[string]int, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
switch v := v.(type) {
case map[string]interface{}:
res := make(map[string]int)
for key, val := range v {
ival, err := toInt(val)
if err != nil {
return nil, err
}
res[key] = ival
}
return res, nil
default:
return cast.ToStringMapIntE(v)
}
}
// GetStringMapBool returns a map of string to boolean
// if a correct map type is associated with the key,
// otherwise reports an error.
func (p Payload) GetStringMapBool(key string) (map[string]bool, error) {
v, ok := p.data[key]
if !ok {
return nil, &errKeyNotFound{key}
}
return cast.ToStringMapBoolE(v)
}
// GetTime returns a time value if a time type is associated with the key,
// otherwise reports an error.
func (p Payload) GetTime(key string) (time.Time, error) {
v, ok := p.data[key]
if !ok {
return time.Time{}, &errKeyNotFound{key}
}
return cast.ToTimeE(v)
}
// GetDuration returns a duration value if a duration type is associated with the key,
// otherwise reports an error.
func (p Payload) GetDuration(key string) (time.Duration, error) {
v, ok := p.data[key]
if !ok {
return 0, &errKeyNotFound{key}
}
switch v := v.(type) {
case json.Number:
val, err := v.Int64()
if err != nil {
return 0, err
}
return time.Duration(val), nil
default:
return cast.ToDurationE(v)
}
}


@ -1,647 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"encoding/json"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
)
type payloadTest struct {
data map[string]interface{}
key string
nonkey string
}
func TestPayloadString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"name": "gopher"},
key: "name",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetString(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetString(tc.nonkey)
if err == nil || got != "" {
t.Errorf("Payload.GetString(%q) = %v, %v; want '', error",
tc.key, got, err)
}
}
}
func TestPayloadInt(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user_id": 42},
key: "user_id",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetInt(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetInt(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetInt(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadFloat64(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"pi": 3.14},
key: "pi",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetFloat64(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetFloat64(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetFloat64(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetFloat64(%q) = %v, %v; want 0, error",
tc.key, got, err)
}
}
}
func TestPayloadBool(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"enabled": true},
key: "enabled",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetBool(tc.key)
if err != nil || got != tc.data[tc.key] {
t.Errorf("With Marshaling: Payload.GetBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetBool(tc.nonkey)
if err == nil || got != false {
t.Errorf("Payload.GetBool(%q) = %v, %v; want false, error",
tc.key, got, err)
}
}
}
func TestPayloadStringSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"names": []string{"luke", "rey", "anakin"}},
key: "names",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadIntSlice(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"nums": []int{9, 8, 7}},
key: "nums",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetIntSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetIntSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetIntSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetIntSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetIntSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMap(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"user": map[string]interface{}{"name": "Jon Doe", "score": 2.2}},
key: "user",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMap(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMap(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMap(tc.key)
ignoreOpt := cmpopts.IgnoreMapEntries(func(key string, val interface{}) bool {
switch val.(type) {
case json.Number:
return true
default:
return false
}
})
diff = cmp.Diff(got, tc.data[tc.key], ignoreOpt)
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMap(%q) = %v, %v, want %v, nil;(-want,+got)\n%s",
tc.key, got, err, tc.data[tc.key], diff)
}
// access non-existent key.
got, err = payload.GetStringMap(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMap(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapString(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"address": map[string]string{"line": "123 Main St", "city": "San Francisco", "state": "CA"}},
key: "address",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapString(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapString(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapString(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapString(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapString(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapStringSlice(t *testing.T) {
favs := map[string][]string{
"movies": {"forrest gump", "star wars"},
"tv_shows": {"game of thrones", "HIMYM", "breaking bad"},
}
tests := []payloadTest{
{
data: map[string]interface{}{"favorites": favs},
key: "favorites",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapStringSlice(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapStringSlice(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapStringSlice(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapStringSlice(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapStringSlice(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapInt(t *testing.T) {
counter := map[string]int{
"a": 1,
"b": 101,
"c": 42,
}
tests := []payloadTest{
{
data: map[string]interface{}{"counts": counter},
key: "counts",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapInt(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapInt(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapInt(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapInt(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapInt(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadStringMapBool(t *testing.T) {
features := map[string]bool{
"A": false,
"B": true,
"C": true,
}
tests := []payloadTest{
{
data: map[string]interface{}{"features": features},
key: "features",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetStringMapBool(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetStringMapBool(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetStringMapBool(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetStringMapBool(tc.nonkey)
if err == nil || got != nil {
t.Errorf("Payload.GetStringMapBool(%q) = %v, %v; want nil, error",
tc.key, got, err)
}
}
}
func TestPayloadTime(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"current": time.Now()},
key: "current",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetTime(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetTime(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetTime(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetTime(tc.nonkey)
if err == nil || !got.IsZero() {
t.Errorf("Payload.GetTime(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Time{})
}
}
}
func TestPayloadDuration(t *testing.T) {
tests := []payloadTest{
{
data: map[string]interface{}{"duration": 15 * time.Minute},
key: "duration",
nonkey: "unknown",
},
}
for _, tc := range tests {
payload := Payload{tc.data}
got, err := payload.GetDuration(tc.key)
diff := cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// encode and then decode task message.
in := h.NewTaskMessage("testing", tc.data)
encoded, err := base.EncodeMessage(in)
if err != nil {
t.Fatal(err)
}
out, err := base.DecodeMessage(encoded)
if err != nil {
t.Fatal(err)
}
payload = Payload{out.Payload}
got, err = payload.GetDuration(tc.key)
diff = cmp.Diff(got, tc.data[tc.key])
if err != nil || diff != "" {
t.Errorf("With Marshaling: Payload.GetDuration(%q) = %v, %v, want %v, nil",
tc.key, got, err, tc.data[tc.key])
}
// access non-existent key.
got, err = payload.GetDuration(tc.nonkey)
if err == nil || got != 0 {
t.Errorf("Payload.GetDuration(%q) = %v, %v; want %v, error",
tc.key, got, err, time.Duration(0))
}
}
}
func TestPayloadHas(t *testing.T) {
payload := Payload{map[string]interface{}{
"user_id": 123,
}}
if !payload.Has("user_id") {
t.Errorf("Payload.Has(%q) = false, want true", "user_id")
}
if payload.Has("name") {
t.Errorf("Payload.Has(%q) = true, want false", "name")
}
}

periodic_task_manager.go

@ -0,0 +1,253 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"crypto/sha256"
"fmt"
"sort"
"sync"
"time"
"github.com/redis/go-redis/v9"
)
// PeriodicTaskManager manages scheduling of periodic tasks.
// It syncs the scheduler's entries by periodically calling the config provider.
type PeriodicTaskManager struct {
s *Scheduler
p PeriodicTaskConfigProvider
syncInterval time.Duration
done chan struct{}
wg sync.WaitGroup
m map[string]string // map[hash]entryID
}
type PeriodicTaskManagerOpts struct {
// Required: must be non-nil.
PeriodicTaskConfigProvider PeriodicTaskConfigProvider
// Optional: required (must be non-nil) if RedisUniversalClient is nil.
RedisConnOpt RedisConnOpt
// Optional: if RedisUniversalClient is non-nil, RedisConnOpt is ignored.
RedisUniversalClient redis.UniversalClient
// Optional: scheduler options
*SchedulerOpts
// Optional: default is 3m
SyncInterval time.Duration
}
const defaultSyncInterval = 3 * time.Minute
// NewPeriodicTaskManager returns a new PeriodicTaskManager instance.
// The given opts should specify RedisConnOpt (or RedisUniversalClient) and PeriodicTaskConfigProvider at minimum.
func NewPeriodicTaskManager(opts PeriodicTaskManagerOpts) (*PeriodicTaskManager, error) {
if opts.PeriodicTaskConfigProvider == nil {
return nil, fmt.Errorf("PeriodicTaskConfigProvider cannot be nil")
}
if opts.RedisConnOpt == nil && opts.RedisUniversalClient == nil {
return nil, fmt.Errorf("RedisConnOpt/RedisUniversalClient cannot be nil")
}
var scheduler *Scheduler
if opts.RedisUniversalClient != nil {
scheduler = NewSchedulerFromRedisClient(opts.RedisUniversalClient, opts.SchedulerOpts)
} else {
scheduler = NewScheduler(opts.RedisConnOpt, opts.SchedulerOpts)
}
syncInterval := opts.SyncInterval
if syncInterval == 0 {
syncInterval = defaultSyncInterval
}
return &PeriodicTaskManager{
s: scheduler,
p: opts.PeriodicTaskConfigProvider,
syncInterval: syncInterval,
done: make(chan struct{}),
m: make(map[string]string),
}, nil
}
// PeriodicTaskConfigProvider provides configs for periodic tasks.
// GetConfigs will be called by a PeriodicTaskManager periodically to
// sync the scheduler's entries with the configs returned by the provider.
type PeriodicTaskConfigProvider interface {
GetConfigs() ([]*PeriodicTaskConfig, error)
}
// PeriodicTaskConfig specifies the details of a periodic task.
type PeriodicTaskConfig struct {
Cronspec string // required: must be a non-empty string
Task *Task // required: must be non-nil
Opts []Option // optional: can be nil
}
func (c *PeriodicTaskConfig) hash() string {
h := sha256.New()
_, _ = h.Write([]byte(c.Cronspec))
_, _ = h.Write([]byte(c.Task.Type()))
_, _ = h.Write(c.Task.Payload())
opts := stringifyOptions(c.Opts)
sort.Strings(opts)
for _, opt := range opts {
_, _ = h.Write([]byte(opt))
}
return fmt.Sprintf("%x", h.Sum(nil))
}
func validatePeriodicTaskConfig(c *PeriodicTaskConfig) error {
if c == nil {
return fmt.Errorf("PeriodicTaskConfig cannot be nil")
}
if c.Task == nil {
return fmt.Errorf("PeriodicTaskConfig.Task cannot be nil")
}
if c.Cronspec == "" {
return fmt.Errorf("PeriodicTaskConfig.Cronspec cannot be empty")
}
return nil
}
// Start starts the scheduler and a background goroutine to sync the scheduler
// with the configs returned by the provider.
//
// Start returns any error encountered at startup.
func (mgr *PeriodicTaskManager) Start() error {
if mgr.s == nil || mgr.p == nil {
panic("asynq: cannot start uninitialized PeriodicTaskManager; use NewPeriodicTaskManager to initialize")
}
if err := mgr.initialSync(); err != nil {
return fmt.Errorf("asynq: %v", err)
}
if err := mgr.s.Start(); err != nil {
return fmt.Errorf("asynq: %v", err)
}
mgr.wg.Add(1)
go func() {
defer mgr.wg.Done()
ticker := time.NewTicker(mgr.syncInterval)
for {
select {
case <-mgr.done:
mgr.s.logger.Debugf("Stopping syncer goroutine")
ticker.Stop()
return
case <-ticker.C:
mgr.sync()
}
}
}()
return nil
}
// Shutdown gracefully shuts down the manager.
// It notifies the background syncer goroutine to stop and stops the scheduler.
func (mgr *PeriodicTaskManager) Shutdown() {
close(mgr.done)
mgr.wg.Wait()
mgr.s.Shutdown()
}
// Run starts the manager and blocks until an OS signal to exit the program is received.
// Once it receives a signal, it gracefully shuts down the manager.
func (mgr *PeriodicTaskManager) Run() error {
if err := mgr.Start(); err != nil {
return err
}
mgr.s.waitForSignals()
mgr.Shutdown()
mgr.s.logger.Debugf("PeriodicTaskManager exiting")
return nil
}
func (mgr *PeriodicTaskManager) initialSync() error {
configs, err := mgr.p.GetConfigs()
if err != nil {
return fmt.Errorf("initial call to GetConfigs failed: %v", err)
}
for _, c := range configs {
if err := validatePeriodicTaskConfig(c); err != nil {
return fmt.Errorf("initial call to GetConfigs contained an invalid config: %v", err)
}
}
mgr.add(configs)
return nil
}
func (mgr *PeriodicTaskManager) add(configs []*PeriodicTaskConfig) {
for _, c := range configs {
entryID, err := mgr.s.Register(c.Cronspec, c.Task, c.Opts...)
if err != nil {
mgr.s.logger.Errorf("Failed to register periodic task: cronspec=%q task=%q err=%v",
c.Cronspec, c.Task.Type(), err)
continue
}
mgr.m[c.hash()] = entryID
mgr.s.logger.Infof("Successfully registered periodic task: cronspec=%q task=%q, entryID=%s",
c.Cronspec, c.Task.Type(), entryID)
}
}
func (mgr *PeriodicTaskManager) remove(removed map[string]string) {
for hash, entryID := range removed {
if err := mgr.s.Unregister(entryID); err != nil {
mgr.s.logger.Errorf("Failed to unregister periodic task: %v", err)
continue
}
delete(mgr.m, hash)
mgr.s.logger.Infof("Successfully unregistered periodic task: entryID=%s", entryID)
}
}
func (mgr *PeriodicTaskManager) sync() {
configs, err := mgr.p.GetConfigs()
if err != nil {
mgr.s.logger.Errorf("Failed to get periodic task configs: %v", err)
return
}
for _, c := range configs {
if err := validatePeriodicTaskConfig(c); err != nil {
mgr.s.logger.Errorf("Failed to sync: GetConfigs returned an invalid config: %v", err)
return
}
}
// Diff and only register/unregister the newly added/removed entries.
removed := mgr.diffRemoved(configs)
added := mgr.diffAdded(configs)
mgr.remove(removed)
mgr.add(added)
}
// diffRemoved diffs the incoming configs with the registered configs and returns
// a map containing hash and entryID of each config that was removed.
func (mgr *PeriodicTaskManager) diffRemoved(configs []*PeriodicTaskConfig) map[string]string {
newConfigs := make(map[string]string)
for _, c := range configs {
newConfigs[c.hash()] = "" // empty value since we don't have entryID yet
}
removed := make(map[string]string)
for k, v := range mgr.m {
// test whether existing config is present in the incoming configs
if _, found := newConfigs[k]; !found {
removed[k] = v
}
}
return removed
}
// diffAdded diffs the incoming configs with the registered configs and returns
// a list of configs that were added.
func (mgr *PeriodicTaskManager) diffAdded(configs []*PeriodicTaskConfig) []*PeriodicTaskConfig {
var added []*PeriodicTaskConfig
for _, c := range configs {
if _, found := mgr.m[c.hash()]; !found {
added = append(added, c)
}
}
return added
}


@ -0,0 +1,340 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sort"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
)
// Trivial implementation of PeriodicTaskConfigProvider for testing purpose.
type FakeConfigProvider struct {
mu sync.Mutex
cfgs []*PeriodicTaskConfig
}
func (p *FakeConfigProvider) SetConfigs(cfgs []*PeriodicTaskConfig) {
p.mu.Lock()
defer p.mu.Unlock()
p.cfgs = cfgs
}
func (p *FakeConfigProvider) GetConfigs() ([]*PeriodicTaskConfig, error) {
p.mu.Lock()
defer p.mu.Unlock()
return p.cfgs, nil
}
func TestNewPeriodicTaskManager(t *testing.T) {
redisConnOpt := getRedisConnOpt(t)
cfgs := []*PeriodicTaskConfig{
{Cronspec: "* * * * *", Task: NewTask("foo", nil)},
{Cronspec: "* * * * *", Task: NewTask("bar", nil)},
}
tests := []struct {
desc string
opts PeriodicTaskManagerOpts
}{
{
desc: "with provider and redisConnOpt",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
},
},
{
desc: "with sync option",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
SyncInterval: 5 * time.Minute,
},
},
{
desc: "with scheduler option",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
SyncInterval: 5 * time.Minute,
SchedulerOpts: &SchedulerOpts{
LogLevel: DebugLevel,
},
},
},
}
for _, tc := range tests {
_, err := NewPeriodicTaskManager(tc.opts)
if err != nil {
t.Errorf("%s; NewPeriodicTaskManager returned error: %v", tc.desc, err)
}
}
t.Run("error", func(t *testing.T) {
tests := []struct {
desc string
opts PeriodicTaskManagerOpts
}{
{
desc: "without provider",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
},
},
{
desc: "without redisConOpt",
opts: PeriodicTaskManagerOpts{
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
},
},
}
for _, tc := range tests {
_, err := NewPeriodicTaskManager(tc.opts)
if err == nil {
t.Errorf("%s; NewPeriodicTaskManager did not return error", tc.desc)
}
}
})
}
func TestPeriodicTaskConfigHash(t *testing.T) {
tests := []struct {
desc string
a *PeriodicTaskConfig
b *PeriodicTaskConfig
isSame bool
}{
{
desc: "basic identity test",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
isSame: true,
},
{
desc: "with a option",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
isSame: true,
},
{
desc: "with multiple options (different order)",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Unique(5 * time.Minute), Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue"), Unique(5 * time.Minute)},
},
isSame: true,
},
{
desc: "with payload",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello world!")),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello world!")),
Opts: []Option{Queue("myqueue")},
},
isSame: true,
},
{
desc: "with different cronspecs",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "5 * * * *",
Task: NewTask("foo", nil),
},
isSame: false,
},
{
desc: "with different task type",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("bar", nil),
},
isSame: false,
},
{
desc: "with different options",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Unique(10 * time.Minute)},
},
isSame: false,
},
{
desc: "with different options (one is subset of the other)",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue"), Unique(10 * time.Minute)},
},
isSame: false,
},
{
desc: "with different payload",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello!")),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("HELLO!")),
Opts: []Option{Queue("myqueue"), Unique(10 * time.Minute)},
},
isSame: false,
},
}
for _, tc := range tests {
if tc.isSame && tc.a.hash() != tc.b.hash() {
t.Errorf("%s: a.hash=%s b.hash=%s expected to be equal",
tc.desc, tc.a.hash(), tc.b.hash())
}
if !tc.isSame && tc.a.hash() == tc.b.hash() {
t.Errorf("%s: a.hash=%s b.hash=%s expected not to be equal",
tc.desc, tc.a.hash(), tc.b.hash())
}
}
}
// Things to test.
// - Run the manager
// - Change provider to return new configs
// - Verify that the scheduler synced with the new config
func TestPeriodicTaskManager(t *testing.T) {
// Note: In this test, we'll use task type as an ID for each config.
cfgs := []*PeriodicTaskConfig{
{Task: NewTask("task1", nil), Cronspec: "* * * * 1"},
{Task: NewTask("task2", nil), Cronspec: "* * * * 2"},
}
const syncInterval = 3 * time.Second
provider := &FakeConfigProvider{cfgs: cfgs}
mgr, err := NewPeriodicTaskManager(PeriodicTaskManagerOpts{
RedisConnOpt: getRedisConnOpt(t),
PeriodicTaskConfigProvider: provider,
SyncInterval: syncInterval,
})
if err != nil {
t.Fatalf("Failed to initialize PeriodicTaskManager: %v", err)
}
if err := mgr.Start(); err != nil {
t.Fatalf("Failed to start PeriodicTaskManager: %v", err)
}
defer mgr.Shutdown()
got := extractCronEntries(mgr.s)
want := []*cronEntry{
{Cronspec: "* * * * 1", TaskType: "task1"},
{Cronspec: "* * * * 2", TaskType: "task2"},
}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
// Change the underlying configs
// - task2 removed
// - task3 added
provider.SetConfigs([]*PeriodicTaskConfig{
{Task: NewTask("task1", nil), Cronspec: "* * * * 1"},
{Task: NewTask("task3", nil), Cronspec: "* * * * 3"},
})
// Wait for the next sync
time.Sleep(syncInterval * 2)
// Verify the entries are synced
got = extractCronEntries(mgr.s)
want = []*cronEntry{
{Cronspec: "* * * * 1", TaskType: "task1"},
{Cronspec: "* * * * 3", TaskType: "task3"},
}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
// Change the underlying configs
// All configs removed, empty set.
provider.SetConfigs([]*PeriodicTaskConfig{})
// Wait for the next sync
time.Sleep(syncInterval * 2)
// Verify the entries are synced
got = extractCronEntries(mgr.s)
want = []*cronEntry{}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
}
func extractCronEntries(s *Scheduler) []*cronEntry {
var out []*cronEntry
for _, e := range s.cron.Entries() {
job := e.Job.(*enqueueJob)
out = append(out, &cronEntry{Cronspec: job.cronspec, TaskType: job.task.Type()})
}
return out
}
var sortCronEntry = cmp.Transformer("sortCronEntry", func(in []*cronEntry) []*cronEntry {
out := append([]*cronEntry(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].TaskType < out[j].TaskType
})
return out
})
// A simple struct to allow for simpler comparison in test.
type cronEntry struct {
Cronspec string
TaskType string
}


@ -7,32 +7,41 @@ package asynq
import (
"context"
"fmt"
"math/rand"
"math"
"math/rand/v2"
"runtime"
"runtime/debug"
"sort"
"strings"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
asynqcontext "github.com/hibiken/asynq/internal/context"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/timeutil"
"golang.org/x/time/rate"
)
type processor struct {
logger *log.Logger
broker base.Broker
clock timeutil.Clock
handler Handler
handler Handler
baseCtxFn func() context.Context
queueConfig map[string]int
// orderedQueues is set only in strict-priority mode.
orderedQueues []string
retryDelayFunc retryDelayFunc
errHandler ErrorHandler
taskCheckInterval time.Duration
retryDelayFunc RetryDelayFunc
isFailureFunc func(error) bool
errHandler ErrorHandler
shutdownTimeout time.Duration
// channel via which to send sync requests to syncer.
@ -50,34 +59,35 @@ type processor struct {
done chan struct{}
once sync.Once
// abort channel is closed when the shutdown of the "processor" goroutine starts.
abort chan struct{}
// quit channel communicates to the in-flight worker goroutines to stop.
// quit channel is closed when the shutdown of the "processor" goroutine starts.
quit chan struct{}
// cancelations is a set of cancel functions for all in-progress tasks.
// abort channel communicates to the in-flight worker goroutines to stop.
abort chan struct{}
// cancelations is a set of cancel functions for all active tasks.
cancelations *base.Cancelations
starting chan<- *base.TaskMessage
starting chan<- *workerInfo
finished chan<- *base.TaskMessage
}
type retryDelayFunc func(n int, err error, task *Task) time.Duration
type processorParams struct {
logger *log.Logger
broker base.Broker
retryDelayFunc retryDelayFunc
syncCh chan<- *syncRequest
cancelations *base.Cancelations
concurrency int
queues map[string]int
strictPriority bool
errHandler ErrorHandler
shutdownTimeout time.Duration
starting chan<- *base.TaskMessage
finished chan<- *base.TaskMessage
logger *log.Logger
broker base.Broker
baseCtxFn func() context.Context
retryDelayFunc RetryDelayFunc
taskCheckInterval time.Duration
isFailureFunc func(error) bool
syncCh chan<- *syncRequest
cancelations *base.Cancelations
concurrency int
queues map[string]int
strictPriority bool
errHandler ErrorHandler
shutdownTimeout time.Duration
starting chan<- *workerInfo
finished chan<- *base.TaskMessage
}
// newProcessor constructs a new processor.
@ -88,22 +98,27 @@ func newProcessor(params processorParams) *processor {
orderedQueues = sortByPriority(queues)
}
return &processor{
logger: params.logger,
broker: params.broker,
queueConfig: queues,
orderedQueues: orderedQueues,
retryDelayFunc: params.retryDelayFunc,
syncRequestCh: params.syncCh,
cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}),
abort: make(chan struct{}),
quit: make(chan struct{}),
errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
starting: params.starting,
finished: params.finished,
logger: params.logger,
broker: params.broker,
baseCtxFn: params.baseCtxFn,
clock: timeutil.NewRealClock(),
queueConfig: queues,
orderedQueues: orderedQueues,
taskCheckInterval: params.taskCheckInterval,
retryDelayFunc: params.retryDelayFunc,
isFailureFunc: params.isFailureFunc,
syncRequestCh: params.syncCh,
cancelations: params.cancelations,
errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
sema: make(chan struct{}, params.concurrency),
done: make(chan struct{}),
quit: make(chan struct{}),
abort: make(chan struct{}),
errHandler: params.errHandler,
handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
shutdownTimeout: params.shutdownTimeout,
starting: params.starting,
finished: params.finished,
}
}
@ -113,25 +128,20 @@ func (p *processor) stop() {
p.once.Do(func() {
p.logger.Debug("Processor shutting down...")
// Unblock if processor is waiting for sema token.
close(p.abort)
close(p.quit)
// Signal the processor goroutine to stop processing tasks
// from the queue.
p.done <- struct{}{}
})
}
// NOTE: once terminated, processor cannot be re-started.
func (p *processor) terminate() {
// NOTE: once shutdown, processor cannot be re-started.
func (p *processor) shutdown() {
p.stop()
time.AfterFunc(p.shutdownTimeout, func() { close(p.quit) })
time.AfterFunc(p.shutdownTimeout, func() { close(p.abort) })
p.logger.Info("Waiting for all workers to finish...")
// send cancellation signal to all in-progress task handlers
for _, cancel := range p.cancelations.GetAll() {
cancel()
}
// block until all workers have released the token
for i := 0; i < cap(p.sema); i++ {
p.sema <- struct{}{}
@ -158,78 +168,103 @@ func (p *processor) start(wg *sync.WaitGroup) {
// exec pulls a task out of the queue and starts a worker goroutine to
// process the task.
func (p *processor) exec() {
qnames := p.queues()
msg, err := p.broker.Dequeue(qnames...)
switch {
case err == rdb.ErrNoProcessableTask:
p.logger.Debug("All queues are empty")
// Queues are empty, this is a normal behavior.
// Sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: We are not using a blocking pop operation; we poll the queues instead,
// which adds significant load to redis.
time.Sleep(time.Second)
return
case err != nil:
if p.errLogLimiter.Allow() {
p.logger.Errorf("Dequeue error: %v", err)
}
return
}
select {
case <-p.abort:
// shutdown is starting, return immediately after requeuing the message.
p.requeue(msg)
case <-p.quit:
return
case p.sema <- struct{}{}: // acquire token
p.starting <- msg
qnames := p.queues()
msg, leaseExpirationTime, err := p.broker.Dequeue(qnames...)
switch {
case errors.Is(err, errors.ErrNoProcessableTask):
p.logger.Debug("All queues are empty")
// Queues are empty, this is a normal behavior.
// Sleep to avoid slamming redis and let scheduler move tasks into queues.
// Note: We are not using a blocking pop operation; we poll the queues instead,
// which adds significant load to redis.
jitter := rand.N(p.taskCheckInterval)
time.Sleep(p.taskCheckInterval/2 + jitter)
<-p.sema // release token
return
case err != nil:
if p.errLogLimiter.Allow() {
p.logger.Errorf("Dequeue error: %v", err)
}
<-p.sema // release token
return
}
lease := base.NewLease(leaseExpirationTime)
deadline := p.computeDeadline(msg)
p.starting <- &workerInfo{msg, time.Now(), deadline, lease}
go func() {
defer func() {
p.finished <- msg
<-p.sema // release token
}()
ctx, cancel := createContext(msg)
p.cancelations.Add(msg.ID.String(), cancel)
ctx, cancel := asynqcontext.New(p.baseCtxFn(), msg, deadline)
p.cancelations.Add(msg.ID, cancel)
defer func() {
cancel()
p.cancelations.Delete(msg.ID.String())
p.cancelations.Delete(msg.ID)
}()
// check context before starting a worker goroutine.
select {
case <-ctx.Done():
// already canceled (e.g. deadline exceeded).
p.handleFailedMessage(ctx, lease, msg, ctx.Err())
return
default:
}
resCh := make(chan error, 1)
task := NewTask(msg.Type, msg.Payload)
go func() { resCh <- perform(ctx, task, p.handler) }()
go func() {
task := newTask(
msg.Type,
msg.Payload,
&ResultWriter{
id: msg.ID,
qname: msg.Queue,
broker: p.broker,
ctx: ctx,
},
)
resCh <- p.perform(ctx, task)
}()
select {
case <-p.quit:
case <-p.abort:
// time is up, push the message back to queue and quit this worker goroutine.
p.logger.Warnf("Quitting worker. task id=%s", msg.ID)
p.requeue(msg)
p.requeue(lease, msg)
return
case <-lease.Done():
cancel()
p.handleFailedMessage(ctx, lease, msg, ErrLeaseExpired)
return
case <-ctx.Done():
p.handleFailedMessage(ctx, lease, msg, ctx.Err())
return
case resErr := <-resCh:
// Note: One of three things should happen.
// 1) Done -> Removes the message from InProgress
// 2) Retry -> Removes the message from InProgress & Adds the message to Retry
// 3) Kill -> Removes the message from InProgress & Adds the message to Dead
if resErr != nil {
if p.errHandler != nil {
p.errHandler.HandleError(task, resErr, msg.Retried, msg.Retry)
}
if msg.Retried >= msg.Retry {
p.kill(msg, resErr)
} else {
p.retry(msg, resErr)
}
p.handleFailedMessage(ctx, lease, msg, resErr)
return
}
p.markAsDone(msg)
p.handleSucceededMessage(lease, msg)
}
}()
}
}
func (p *processor) requeue(msg *base.TaskMessage) {
err := p.broker.Requeue(msg)
func (p *processor) requeue(l *base.Lease, msg *base.TaskMessage) {
if !l.IsValid() {
// If lease is not valid, do not write to redis; Let recoverer take care of it.
return
}
ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
defer cancel()
err := p.broker.Requeue(ctx, msg)
if err != nil {
p.logger.Errorf("Could not push task id=%s back to queue: %v", msg.ID, err)
} else {
@ -237,47 +272,121 @@ func (p *processor) requeue(msg *base.TaskMessage) {
}
}
func (p *processor) markAsDone(msg *base.TaskMessage) {
err := p.broker.Done(msg)
func (p *processor) handleSucceededMessage(l *base.Lease, msg *base.TaskMessage) {
if msg.Retention > 0 {
p.markAsComplete(l, msg)
} else {
p.markAsDone(l, msg)
}
}
func (p *processor) markAsComplete(l *base.Lease, msg *base.TaskMessage) {
if !l.IsValid() {
// If lease is not valid, do not write to redis; Let recoverer take care of it.
return
}
ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
defer cancel()
err := p.broker.MarkAsComplete(ctx, msg)
if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.InProgressQueue, err)
errMsg := fmt.Sprintf("Could not move task id=%s type=%q from %q to %q: %+v",
msg.ID, msg.Type, base.ActiveKey(msg.Queue), base.CompletedKey(msg.Queue), err)
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Done(msg)
return p.broker.MarkAsComplete(ctx, msg)
},
errMsg: errMsg,
errMsg: errMsg,
deadline: l.Deadline(),
}
}
}
func (p *processor) retry(msg *base.TaskMessage, e error) {
func (p *processor) markAsDone(l *base.Lease, msg *base.TaskMessage) {
if !l.IsValid() {
// If lease is not valid, do not write to redis; Let recoverer take care of it.
return
}
ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
defer cancel()
err := p.broker.Done(ctx, msg)
if err != nil {
errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.ActiveKey(msg.Queue), err)
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Done(ctx, msg)
},
errMsg: errMsg,
deadline: l.Deadline(),
}
}
}
// SkipRetry is used as a return value from Handler.ProcessTask to indicate that
// the task should not be retried and should be archived instead.
var SkipRetry = errors.New("skip retry for the task")
// RevokeTask is used as a return value from Handler.ProcessTask to indicate that
// the task should not be retried or archived.
var RevokeTask = errors.New("revoke task")
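// A handler can short-circuit the retry logic by returning (or wrapping) these
// sentinel errors; handleFailedMessage below matches them with errors.Is, so
// wrapped errors work too. An illustrative sketch (validate is a hypothetical
// helper, not part of this package):
//
//	func handle(ctx context.Context, t *Task) error {
//		if err := validate(t.Payload()); err != nil {
//			// malformed payload: archive immediately, do not retry
//			return fmt.Errorf("invalid payload: %w", SkipRetry)
//		}
//		return nil
//	}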
func (p *processor) handleFailedMessage(ctx context.Context, l *base.Lease, msg *base.TaskMessage, err error) {
if p.errHandler != nil {
p.errHandler.HandleError(ctx, NewTask(msg.Type, msg.Payload), err)
}
switch {
case errors.Is(err, RevokeTask):
p.logger.Warnf("revoke task id=%s", msg.ID)
p.markAsDone(l, msg)
case msg.Retried >= msg.Retry || errors.Is(err, SkipRetry):
p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
p.archive(l, msg, err)
default:
p.retry(l, msg, err, p.isFailureFunc(err))
}
}
func (p *processor) retry(l *base.Lease, msg *base.TaskMessage, e error, isFailure bool) {
if !l.IsValid() {
// If lease is not valid, do not write to redis; Let recoverer take care of it.
return
}
ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
defer cancel()
d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(d)
err := p.broker.Retry(msg, retryAt, e.Error())
err := p.broker.Retry(ctx, msg, retryAt, e.Error(), isFailure)
if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.RetryQueue)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.RetryKey(msg.Queue))
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Retry(msg, retryAt, e.Error())
return p.broker.Retry(ctx, msg, retryAt, e.Error(), isFailure)
},
errMsg: errMsg,
errMsg: errMsg,
deadline: l.Deadline(),
}
}
}
func (p *processor) kill(msg *base.TaskMessage, e error) {
p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
err := p.broker.Kill(msg, e.Error())
func (p *processor) archive(l *base.Lease, msg *base.TaskMessage, e error) {
if !l.IsValid() {
// If lease is not valid, do not write to redis; Let recoverer take care of it.
return
}
ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
defer cancel()
err := p.broker.Archive(ctx, msg, e.Error())
if err != nil {
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.InProgressQueue, base.DeadQueue)
errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.ArchivedKey(msg.Queue))
p.logger.Warnf("%s; Will retry syncing", errMsg)
p.syncRequestCh <- &syncRequest{
fn: func() error {
return p.broker.Kill(msg, e.Error())
return p.broker.Archive(ctx, msg, e.Error())
},
errMsg: errMsg,
errMsg: errMsg,
deadline: l.Deadline(),
}
}
}
@ -304,21 +413,36 @@ func (p *processor) queues() []string {
names = append(names, qname)
}
}
r := rand.New(rand.NewSource(time.Now().UnixNano()))
r.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
rand.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
return uniq(names, len(p.queueConfig))
}
// perform calls the handler with the given task.
// If the call returns without panic, it simply returns the value,
// otherwise, it recovers from panic and returns an error.
func perform(ctx context.Context, task *Task, h Handler) (err error) {
func (p *processor) perform(ctx context.Context, task *Task) (err error) {
defer func() {
if x := recover(); x != nil {
err = fmt.Errorf("panic: %v", x)
p.logger.Errorf("recovering from panic. See the stack trace below for details:\n%s", string(debug.Stack()))
_, file, line, ok := runtime.Caller(1) // skip the first frame (panic itself)
if ok && strings.Contains(file, "runtime/") {
// The panic came from the runtime, most likely due to incorrect
// map/slice usage. The parent frame should have the real trigger.
_, file, line, ok = runtime.Caller(2)
}
var errMsg string
// Include the file and line number info in the error, if runtime.Caller returned ok.
if ok {
errMsg = fmt.Sprintf("panic [%s:%d]: %v", file, line, x)
} else {
errMsg = fmt.Sprintf("panic: %v", x)
}
err = &errors.PanicError{
ErrMsg: errMsg,
}
}
}()
return h.ProcessTask(ctx, task)
return p.handler.ProcessTask(ctx, task)
}
// uniq dedupes elements and returns a slice of unique names of length l.
@ -394,3 +518,23 @@ func gcd(xs ...int) int {
}
return res
}
// computeDeadline returns the given task's deadline.
func (p *processor) computeDeadline(msg *base.TaskMessage) time.Time {
if msg.Timeout == 0 && msg.Deadline == 0 {
p.logger.Errorf("asynq: internal error: both timeout and deadline are not set for the task message: %s", msg.ID)
return p.clock.Now().Add(defaultTimeout)
}
if msg.Timeout != 0 && msg.Deadline != 0 {
deadlineUnix := math.Min(float64(p.clock.Now().Unix()+msg.Timeout), float64(msg.Deadline))
return time.Unix(int64(deadlineUnix), 0)
}
if msg.Timeout != 0 {
return p.clock.Now().Add(time.Duration(msg.Timeout) * time.Second)
}
return time.Unix(msg.Deadline, 0)
}
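// A worked example (sketch): with clock.Now() at 12:00:00, msg.Timeout = 1800
// (30 minutes) and msg.Deadline set to the unix time for 12:10:00, the
// effective deadline is min(12:30:00, 12:10:00) = 12:10:00. If only Timeout is
// set, the deadline is now+Timeout; if only Deadline is set, it is used as-is.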
func IsPanicError(err error) bool {
return errors.IsPanicError(err)
}

File diff suppressed because it is too large.

recoverer.go

@ -0,0 +1,126 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/log"
)
type recoverer struct {
logger *log.Logger
broker base.Broker
retryDelayFunc RetryDelayFunc
isFailureFunc func(error) bool
// channel to communicate back to the long running "recoverer" goroutine.
done chan struct{}
// list of queues to check for deadline.
queues []string
// poll interval.
interval time.Duration
}
type recovererParams struct {
logger *log.Logger
broker base.Broker
queues []string
interval time.Duration
retryDelayFunc RetryDelayFunc
isFailureFunc func(error) bool
}
func newRecoverer(params recovererParams) *recoverer {
return &recoverer{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
queues: params.queues,
interval: params.interval,
retryDelayFunc: params.retryDelayFunc,
isFailureFunc: params.isFailureFunc,
}
}
func (r *recoverer) shutdown() {
r.logger.Debug("Recoverer shutting down...")
// Signal the recoverer goroutine to stop polling.
r.done <- struct{}{}
}
func (r *recoverer) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
r.recover()
timer := time.NewTimer(r.interval)
for {
select {
case <-r.done:
r.logger.Debug("Recoverer done")
timer.Stop()
return
case <-timer.C:
r.recover()
timer.Reset(r.interval)
}
}
}()
}
// ErrLeaseExpired error indicates that the task failed because the worker working on the task
// could not extend its lease due to missing heartbeats. The worker may have crashed or been cut off from the network.
var ErrLeaseExpired = errors.New("asynq: task lease expired")
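// User code (e.g. a Server ErrorHandler) can detect this condition with the
// standard library's errors.Is. An illustrative sketch (handleError is a
// hypothetical function, not part of this package):
//
//	func handleError(ctx context.Context, task *asynq.Task, err error) {
//		if errors.Is(err, asynq.ErrLeaseExpired) {
//			// the worker lost its lease; the recoverer retried or archived the task
//		}
//	}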
func (r *recoverer) recover() {
r.recoverLeaseExpiredTasks()
r.recoverStaleAggregationSets()
}
func (r *recoverer) recoverLeaseExpiredTasks() {
// Get all tasks which have expired 30 seconds ago or earlier, to accommodate a certain amount of clock skew.
cutoff := time.Now().Add(-30 * time.Second)
msgs, err := r.broker.ListLeaseExpired(cutoff, r.queues...)
if err != nil {
r.logger.Warnf("recoverer: could not list lease expired tasks: %v", err)
return
}
for _, msg := range msgs {
if msg.Retried >= msg.Retry {
r.archive(msg, ErrLeaseExpired)
} else {
r.retry(msg, ErrLeaseExpired)
}
}
}
func (r *recoverer) recoverStaleAggregationSets() {
for _, qname := range r.queues {
if err := r.broker.ReclaimStaleAggregationSets(qname); err != nil {
r.logger.Warnf("recoverer: could not reclaim stale aggregation sets in queue %q: %v", qname, err)
}
}
}
func (r *recoverer) retry(msg *base.TaskMessage, err error) {
delay := r.retryDelayFunc(msg.Retried, err, NewTask(msg.Type, msg.Payload))
retryAt := time.Now().Add(delay)
if err := r.broker.Retry(context.Background(), msg, retryAt, err.Error(), r.isFailureFunc(err)); err != nil {
r.logger.Warnf("recoverer: could not retry lease expired task: %v", err)
}
}
func (r *recoverer) archive(msg *base.TaskMessage, err error) {
if err := r.broker.Archive(context.Background(), msg, err.Error()); err != nil {
r.logger.Warnf("recoverer: could not move task to archive: %v", err)
}
}

recoverer_test.go

@ -0,0 +1,276 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func TestRecoverer(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
t1 := h.NewTaskMessageWithQueue("task1", nil, "default")
t2 := h.NewTaskMessageWithQueue("task2", nil, "default")
t3 := h.NewTaskMessageWithQueue("task3", nil, "critical")
t4 := h.NewTaskMessageWithQueue("task4", nil, "default")
t4.Retried = t4.Retry // t4 has reached its max retry count
now := time.Now()
tests := []struct {
desc string
active map[string][]*base.TaskMessage
lease map[string][]base.Z
retry map[string][]base.Z
archived map[string][]base.Z
wantActive map[string][]*base.TaskMessage
wantLease map[string][]base.Z
wantRetry map[string][]*base.TaskMessage
wantArchived map[string][]*base.TaskMessage
}{
{
desc: "with one active task",
active: map[string][]*base.TaskMessage{
"default": {t1},
},
lease: map[string][]base.Z{
"default": {{Message: t1, Score: now.Add(-1 * time.Minute).Unix()}},
},
retry: map[string][]base.Z{
"default": {},
},
archived: map[string][]base.Z{
"default": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
},
wantLease: map[string][]base.Z{
"default": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {t1},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
},
},
{
desc: "with a task with max-retry reached",
active: map[string][]*base.TaskMessage{
"default": {t4},
"critical": {},
},
lease: map[string][]base.Z{
"default": {{Message: t4, Score: now.Add(-40 * time.Second).Unix()}},
"critical": {},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantLease: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {t4},
"critical": {},
},
},
{
desc: "with multiple active tasks, and one expired",
active: map[string][]*base.TaskMessage{
"default": {t1, t2},
"critical": {t3},
},
lease: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(-2 * time.Minute).Unix()},
{Message: t2, Score: now.Add(20 * time.Second).Unix()},
},
"critical": {
{Message: t3, Score: now.Add(20 * time.Second).Unix()},
},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {t3},
},
wantLease: map[string][]base.Z{
"default": {{Message: t2, Score: now.Add(20 * time.Second).Unix()}},
"critical": {{Message: t3, Score: now.Add(20 * time.Second).Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {t1},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
{
desc: "with multiple expired active tasks",
active: map[string][]*base.TaskMessage{
"default": {t1, t2},
"critical": {t3},
},
lease: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(-1 * time.Minute).Unix()},
{Message: t2, Score: now.Add(10 * time.Second).Unix()},
},
"critical": {
{Message: t3, Score: now.Add(-1 * time.Minute).Unix()},
},
},
retry: map[string][]base.Z{
"default": {},
"cricial": {},
},
archived: map[string][]base.Z{
"default": {},
"cricial": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {t2},
"critical": {},
},
wantLease: map[string][]base.Z{
"default": {{Message: t2, Score: now.Add(10 * time.Second).Unix()}},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {t1},
"critical": {t3},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
{
desc: "with empty active queue",
active: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
lease: map[string][]base.Z{
"default": {},
"critical": {},
},
retry: map[string][]base.Z{
"default": {},
"critical": {},
},
archived: map[string][]base.Z{
"default": {},
"critical": {},
},
wantActive: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantLease: map[string][]base.Z{
"default": {},
"critical": {},
},
wantRetry: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
wantArchived: map[string][]*base.TaskMessage{
"default": {},
"critical": {},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedAllActiveQueues(t, r, tc.active)
h.SeedAllLease(t, r, tc.lease)
h.SeedAllRetryQueues(t, r, tc.retry)
h.SeedAllArchivedQueues(t, r, tc.archived)
recoverer := newRecoverer(recovererParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default", "critical"},
interval: 1 * time.Second,
retryDelayFunc: func(n int, err error, task *Task) time.Duration { return 30 * time.Second },
isFailureFunc: defaultIsFailureFunc,
})
var wg sync.WaitGroup
recoverer.start(&wg)
runTime := time.Now() // time when recoverer is running
time.Sleep(2 * time.Second)
recoverer.shutdown()
for qname, want := range tc.wantActive {
gotActive := h.GetActiveMessages(t, r, qname)
if diff := cmp.Diff(want, gotActive, h.SortMsgOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.ActiveKey(qname), diff)
}
}
for qname, want := range tc.wantLease {
gotLease := h.GetLeaseEntries(t, r, qname)
if diff := cmp.Diff(want, gotLease, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.LeaseKey(qname), diff)
}
}
cmpOpt := h.EquateInt64Approx(2) // allow up to two-second difference in `LastFailedAt`
for qname, msgs := range tc.wantRetry {
gotRetry := h.GetRetryMessages(t, r, qname)
var wantRetry []*base.TaskMessage // Note: construct message here since `LastFailedAt` is relative to each test run
for _, msg := range msgs {
wantRetry = append(wantRetry, h.TaskMessageAfterRetry(*msg, ErrLeaseExpired.Error(), runTime))
}
if diff := cmp.Diff(wantRetry, gotRetry, h.SortMsgOpt, cmpOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
}
}
for qname, msgs := range tc.wantArchived {
gotArchived := h.GetArchivedMessages(t, r, qname)
var wantArchived []*base.TaskMessage
for _, msg := range msgs {
wantArchived = append(wantArchived, h.TaskMessageWithError(*msg, ErrLeaseExpired.Error(), runTime))
}
if diff := cmp.Diff(wantArchived, gotArchived, h.SortMsgOpt, cmpOpt); diff != "" {
t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.ArchivedKey(qname), diff)
}
}
}
}


@ -5,64 +5,371 @@
package asynq
import (
"fmt"
"os"
"sync"
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
"github.com/robfig/cron/v3"
)
type scheduler struct {
logger *log.Logger
broker base.Broker
// A Scheduler kicks off tasks at regular intervals based on the user-defined schedule.
//
// Schedulers are safe for concurrent use by multiple goroutines.
type Scheduler struct {
id string
// channel to communicate back to the long running "scheduler" goroutine.
done chan struct{}
state *serverState
// poll interval on average
avgInterval time.Duration
heartbeatInterval time.Duration
logger *log.Logger
client *Client
rdb *rdb.RDB
cron *cron.Cron
location *time.Location
done chan struct{}
wg sync.WaitGroup
preEnqueueFunc func(task *Task, opts []Option)
postEnqueueFunc func(info *TaskInfo, err error)
errHandler func(task *Task, opts []Option, err error)
// guards idmap
mu sync.Mutex
// idmap maps Scheduler's entry ID to cron.EntryID
// to avoid using cron.EntryID as the public API of
// the Scheduler.
idmap map[string]cron.EntryID
}
type schedulerParams struct {
logger *log.Logger
broker base.Broker
interval time.Duration
const defaultHeartbeatInterval = 10 * time.Second
// NewScheduler returns a new Scheduler instance given the redis connection option.
// The parameter opts is optional; defaults will be used if opts is set to nil.
func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
scheduler := newScheduler(opts)
redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
}
rdb := rdb.NewRDB(redisClient)
scheduler.rdb = rdb
scheduler.client = &Client{broker: rdb, sharedConnection: false}
return scheduler
}
func newScheduler(params schedulerParams) *scheduler {
return &scheduler{
logger: params.logger,
broker: params.broker,
done: make(chan struct{}),
avgInterval: params.interval,
// NewSchedulerFromRedisClient returns a new instance of Scheduler given a redis.UniversalClient.
// The parameter opts is optional; defaults will be used if opts is set to nil.
// Warning: The underlying redis connection pool will not be closed by Asynq; you are responsible for closing it.
func NewSchedulerFromRedisClient(c redis.UniversalClient, opts *SchedulerOpts) *Scheduler {
scheduler := newScheduler(opts)
scheduler.rdb = rdb.NewRDB(c)
scheduler.client = NewClientFromRedisClient(c)
return scheduler
}
func newScheduler(opts *SchedulerOpts) *Scheduler {
if opts == nil {
opts = &SchedulerOpts{}
}
heartbeatInterval := opts.HeartbeatInterval
if heartbeatInterval <= 0 {
heartbeatInterval = defaultHeartbeatInterval
}
logger := log.NewLogger(opts.Logger)
loglevel := opts.LogLevel
if loglevel == level_unspecified {
loglevel = InfoLevel
}
logger.SetLevel(toInternalLogLevel(loglevel))
loc := opts.Location
if loc == nil {
loc = time.UTC
}
return &Scheduler{
id: generateSchedulerID(),
state: &serverState{value: srvStateNew},
heartbeatInterval: heartbeatInterval,
logger: logger,
cron: cron.New(cron.WithLocation(loc)),
location: loc,
done: make(chan struct{}),
preEnqueueFunc: opts.PreEnqueueFunc,
postEnqueueFunc: opts.PostEnqueueFunc,
errHandler: opts.EnqueueErrorHandler,
idmap: make(map[string]cron.EntryID),
}
}
func (s *scheduler) terminate() {
s.logger.Debug("Scheduler shutting down...")
// Signal the scheduler goroutine to stop polling.
s.done <- struct{}{}
func generateSchedulerID() string {
host, err := os.Hostname()
if err != nil {
host = "unknown-host"
}
return fmt.Sprintf("%s:%d:%v", host, os.Getpid(), uuid.New())
}
// start starts the "scheduler" goroutine.
func (s *scheduler) start(wg *sync.WaitGroup) {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-s.done:
s.logger.Debug("Scheduler done")
return
case <-time.After(s.avgInterval):
s.exec()
}
// SchedulerOpts specifies scheduler options.
type SchedulerOpts struct {
// HeartbeatInterval specifies the interval between scheduler heartbeats.
//
// If unset, zero or a negative value, the interval is set to 10 seconds.
//
// Note: Setting this value too low may add significant load to redis.
//
// By default, HeartbeatInterval is set to 10 seconds.
HeartbeatInterval time.Duration
// Logger specifies the logger used by the scheduler instance.
//
// If unset, the default logger is used.
Logger Logger
// LogLevel specifies the minimum log level to enable.
//
// If unset, InfoLevel is used by default.
LogLevel LogLevel
// Location specifies the time zone location.
//
// If unset, the UTC time zone (time.UTC) is used.
Location *time.Location
// PreEnqueueFunc, if provided, is called before a task gets enqueued by Scheduler.
// The callback function should return quickly to not block the current thread.
PreEnqueueFunc func(task *Task, opts []Option)
// PostEnqueueFunc, if provided, is called after a task gets enqueued by Scheduler.
// The callback function should return quickly to not block the current thread.
PostEnqueueFunc func(info *TaskInfo, err error)
// EnqueueErrorHandler gets called when scheduler cannot enqueue a registered task
// due to an error.
//
// Deprecated: Use PostEnqueueFunc instead.
EnqueueErrorHandler func(task *Task, opts []Option, err error)
}
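// For example (an illustrative sketch), PostEnqueueFunc can be used to log
// enqueue failures:
//
//	opts := &SchedulerOpts{
//		PostEnqueueFunc: func(info *TaskInfo, err error) {
//			if err != nil {
//				log.Printf("scheduler could not enqueue task: %v", err)
//			}
//		},
//	}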
// enqueueJob encapsulates the job of enqueuing a task and recording the event.
type enqueueJob struct {
id uuid.UUID
cronspec string
task *Task
opts []Option
location *time.Location
logger *log.Logger
client *Client
rdb *rdb.RDB
preEnqueueFunc func(task *Task, opts []Option)
postEnqueueFunc func(info *TaskInfo, err error)
errHandler func(task *Task, opts []Option, err error)
}
func (j *enqueueJob) Run() {
if j.preEnqueueFunc != nil {
j.preEnqueueFunc(j.task, j.opts)
}
info, err := j.client.Enqueue(j.task, j.opts...)
if j.postEnqueueFunc != nil {
j.postEnqueueFunc(info, err)
}
if err != nil {
if j.errHandler != nil {
j.errHandler(j.task, j.opts, err)
}
}()
}
func (s *scheduler) exec() {
if err := s.broker.CheckAndEnqueue(); err != nil {
s.logger.Errorf("Could not enqueue scheduled tasks: %v", err)
return
}
j.logger.Debugf("scheduler enqueued a task: %+v", info)
event := &base.SchedulerEnqueueEvent{
TaskID: info.ID,
EnqueuedAt: time.Now().In(j.location),
}
err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
if err != nil {
j.logger.Warnf("scheduler could not record enqueue event of enqueued task %s: %v", info.ID, err)
}
}
// Register registers a task to be enqueued on the given schedule specified by the cronspec.
// It returns an ID of the newly registered entry.
func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entryID string, err error) {
job := &enqueueJob{
id: uuid.New(),
cronspec: cronspec,
task: task,
opts: opts,
location: s.location,
client: s.client,
rdb: s.rdb,
logger: s.logger,
preEnqueueFunc: s.preEnqueueFunc,
postEnqueueFunc: s.postEnqueueFunc,
errHandler: s.errHandler,
}
cronID, err := s.cron.AddJob(cronspec, job)
if err != nil {
return "", err
}
s.mu.Lock()
s.idmap[job.id.String()] = cronID
s.mu.Unlock()
return job.id.String(), nil
}
// Unregister removes a registered entry by entry ID.
// Unregister returns a non-nil error if no entries were found for the given entryID.
func (s *Scheduler) Unregister(entryID string) error {
s.mu.Lock()
defer s.mu.Unlock()
cronID, ok := s.idmap[entryID]
if !ok {
return fmt.Errorf("asynq: no scheduler entry found")
}
delete(s.idmap, entryID)
s.cron.Remove(cronID)
return nil
}
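// Example (an illustrative sketch; the redis address, cronspec, task type, and
// queue name are assumptions):
//
//	scheduler := NewScheduler(RedisClientOpt{Addr: "localhost:6379"}, nil)
//	entryID, err := scheduler.Register("@every 30s", NewTask("cleanup", nil), Queue("low"))
//	if err != nil {
//		log.Fatal(err)
//	}
//	// ...later, stop enqueuing this task:
//	if err := scheduler.Unregister(entryID); err != nil {
//		log.Fatal(err)
//	}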
// Run starts the scheduler until an os signal to exit the program is received.
// It returns an error if scheduler is already running or has been shutdown.
func (s *Scheduler) Run() error {
if err := s.Start(); err != nil {
return err
}
s.waitForSignals()
s.Shutdown()
return nil
}
// Start starts the scheduler.
// It returns an error if the scheduler is already running or has been shutdown.
func (s *Scheduler) Start() error {
if err := s.start(); err != nil {
return err
}
s.logger.Info("Scheduler starting")
s.logger.Infof("Scheduler timezone is set to %v", s.location)
s.cron.Start()
s.wg.Add(1)
go s.runHeartbeater()
return nil
}
// Checks server state and returns an error if pre-condition is not met.
// Otherwise it sets the server state to active.
func (s *Scheduler) start() error {
s.state.mu.Lock()
defer s.state.mu.Unlock()
switch s.state.value {
case srvStateActive:
return fmt.Errorf("asynq: the scheduler is already running")
case srvStateClosed:
return fmt.Errorf("asynq: the scheduler has already been stopped")
}
s.state.value = srvStateActive
return nil
}
// Shutdown stops and shuts down the scheduler.
func (s *Scheduler) Shutdown() {
s.state.mu.Lock()
if s.state.value == srvStateNew || s.state.value == srvStateClosed {
// scheduler is not running, do nothing and return.
s.state.mu.Unlock()
return
}
s.state.value = srvStateClosed
s.state.mu.Unlock()
s.logger.Info("Scheduler shutting down")
close(s.done) // signal heartbeater to stop
ctx := s.cron.Stop()
<-ctx.Done()
s.wg.Wait()
s.clearHistory()
if err := s.client.Close(); err != nil {
s.logger.Errorf("Failed to close redis client connection: %v", err)
}
s.logger.Info("Scheduler stopped")
}
func (s *Scheduler) runHeartbeater() {
defer s.wg.Done()
ticker := time.NewTicker(s.heartbeatInterval)
for {
select {
case <-s.done:
s.logger.Debugf("Scheduler heatbeater shutting down")
if err := s.rdb.ClearSchedulerEntries(s.id); err != nil {
s.logger.Errorf("Failed to clear the scheduler entries: %v", err)
}
ticker.Stop()
return
case <-ticker.C:
s.beat()
}
}
}
// beat writes a snapshot of entries to redis.
func (s *Scheduler) beat() {
var entries []*base.SchedulerEntry
for _, entry := range s.cron.Entries() {
job := entry.Job.(*enqueueJob)
e := &base.SchedulerEntry{
ID: job.id.String(),
Spec: job.cronspec,
Type: job.task.Type(),
Payload: job.task.Payload(),
Opts: stringifyOptions(job.opts),
Next: entry.Next,
Prev: entry.Prev,
}
entries = append(entries, e)
}
if err := s.rdb.WriteSchedulerEntries(s.id, entries, s.heartbeatInterval*2); err != nil {
s.logger.Warnf("Scheduler could not write heartbeat data: %v", err)
}
}
func stringifyOptions(opts []Option) []string {
var res []string
for _, opt := range opts {
res = append(res, opt.String())
}
return res
}
func (s *Scheduler) clearHistory() {
for _, entry := range s.cron.Entries() {
job := entry.Job.(*enqueueJob)
if err := s.rdb.ClearSchedulerHistory(job.id.String()); err != nil {
s.logger.Warnf("Could not clear scheduler history for entry %q: %v", job.id.String(), err)
}
}
}
// Ping performs a ping against the redis connection.
func (s *Scheduler) Ping() error {
s.state.mu.Lock()
defer s.state.mu.Unlock()
if s.state.value == srvStateClosed {
return nil
}
return s.rdb.Ping()
}


@ -10,88 +10,225 @@ import (
"time"
"github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testutil"
)
func TestScheduler(t *testing.T) {
r := setup(t)
rdbClient := rdb.NewRDB(r)
const pollInterval = time.Second
s := newScheduler(schedulerParams{
logger: testLogger,
broker: rdbClient,
interval: pollInterval,
})
t1 := h.NewTaskMessage("gen_thumbnail", nil)
t2 := h.NewTaskMessage("send_email", nil)
t3 := h.NewTaskMessage("reindex", nil)
t4 := h.NewTaskMessage("sync", nil)
now := time.Now()
func TestSchedulerRegister(t *testing.T) {
tests := []struct {
initScheduled []h.ZSetEntry // scheduled queue initial state
initRetry []h.ZSetEntry // retry queue initial state
initQueue []*base.TaskMessage // default queue initial state
wait time.Duration // wait duration before checking for final state
wantScheduled []*base.TaskMessage // schedule queue final state
wantRetry []*base.TaskMessage // retry queue final state
wantQueue []*base.TaskMessage // default queue final state
cronspec string
task *Task
opts []Option
wait time.Duration
queue string
want []*base.TaskMessage
}{
{
initScheduled: []h.ZSetEntry{
{Msg: t1, Score: float64(now.Add(time.Hour).Unix())},
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())},
cronspec: "@every 3s",
task: NewTask("task1", nil),
opts: []Option{MaxRetry(10)},
wait: 10 * time.Second,
queue: "default",
want: []*base.TaskMessage{
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
{
Type: "task1",
Payload: nil,
Retry: 10,
Timeout: int64(defaultTimeout.Seconds()),
Queue: "default",
},
},
initRetry: []h.ZSetEntry{
{Msg: t3, Score: float64(time.Now().Add(-500 * time.Millisecond).Unix())},
},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
wantScheduled: []*base.TaskMessage{t1},
wantRetry: []*base.TaskMessage{},
wantQueue: []*base.TaskMessage{t2, t3, t4},
},
{
initScheduled: []h.ZSetEntry{
{Msg: t1, Score: float64(now.Unix())},
{Msg: t2, Score: float64(now.Add(-2 * time.Second).Unix())},
{Msg: t3, Score: float64(now.Add(-500 * time.Millisecond).Unix())},
},
initRetry: []h.ZSetEntry{},
initQueue: []*base.TaskMessage{t4},
wait: pollInterval * 2,
wantScheduled: []*base.TaskMessage{},
wantRetry: []*base.TaskMessage{},
wantQueue: []*base.TaskMessage{t1, t2, t3, t4},
},
}
r := setup(t)
// Tests for new redis connection.
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
h.SeedScheduledQueue(t, r, tc.initScheduled) // initialize scheduled queue
h.SeedRetryQueue(t, r, tc.initRetry) // initialize retry queue
h.SeedEnqueuedQueue(t, r, tc.initQueue) // initialize default queue
scheduler := NewScheduler(getRedisConnOpt(t), nil)
if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
t.Fatal(err)
}
var wg sync.WaitGroup
s.start(&wg)
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
time.Sleep(tc.wait)
s.terminate()
scheduler.Shutdown()
gotScheduled := h.GetScheduledMessages(t, r)
if diff := cmp.Diff(tc.wantScheduled, gotScheduled, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.ScheduledQueue, diff)
got := testutil.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.want, got, testutil.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
}
}
r = setup(t)
// Tests for existing redis connection.
for _, tc := range tests {
redisClient := getRedisConnOpt(t).MakeRedisClient().(redis.UniversalClient)
scheduler := NewSchedulerFromRedisClient(redisClient, nil)
if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
t.Fatal(err)
}
gotRetry := h.GetRetryMessages(t, r)
if diff := cmp.Diff(tc.wantRetry, gotRetry, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.RetryQueue, diff)
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
time.Sleep(tc.wait)
scheduler.Shutdown()
gotEnqueued := h.GetEnqueuedMessages(t, r)
if diff := cmp.Diff(tc.wantQueue, gotEnqueued, h.SortMsgOpt); diff != "" {
t.Errorf("mismatch found in %q after running scheduler: (-want, +got)\n%s", base.DefaultQueue, diff)
got := testutil.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.want, got, testutil.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
}
}
}
func TestSchedulerWhenRedisDown(t *testing.T) {
var (
mu sync.Mutex
counter int
)
errorHandler := func(task *Task, opts []Option, err error) {
mu.Lock()
counter++
mu.Unlock()
}
// Connect to non-existent redis instance to simulate a redis server being down.
scheduler := NewScheduler(
RedisClientOpt{Addr: ":9876"}, // no Redis listening to this port.
&SchedulerOpts{EnqueueErrorHandler: errorHandler},
)
task := NewTask("test", nil)
if _, err := scheduler.Register("@every 3s", task); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
// Scheduler should attempt to enqueue the task three times (every 3s).
time.Sleep(10 * time.Second)
scheduler.Shutdown()
mu.Lock()
if counter != 3 {
t.Errorf("EnqueueErrorHandler was called %d times, want 3", counter)
}
mu.Unlock()
}
func TestSchedulerUnregister(t *testing.T) {
tests := []struct {
cronspec string
task *Task
opts []Option
wait time.Duration
queue string
}{
{
cronspec: "@every 3s",
task: NewTask("task1", nil),
opts: []Option{MaxRetry(10)},
wait: 10 * time.Second,
queue: "default",
},
}
r := setup(t)
for _, tc := range tests {
scheduler := NewScheduler(getRedisConnOpt(t), nil)
entryID, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...)
if err != nil {
t.Fatal(err)
}
if err := scheduler.Unregister(entryID); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
time.Sleep(tc.wait)
scheduler.Shutdown()
got := testutil.GetPendingMessages(t, r, tc.queue)
if len(got) != 0 {
t.Errorf("%d tasks were enqueued, want zero", len(got))
}
}
}
func TestSchedulerPostAndPreEnqueueHandler(t *testing.T) {
var (
preMu sync.Mutex
preCounter int
postMu sync.Mutex
postCounter int
)
preHandler := func(task *Task, opts []Option) {
preMu.Lock()
preCounter++
preMu.Unlock()
}
postHandler := func(info *TaskInfo, err error) {
postMu.Lock()
postCounter++
postMu.Unlock()
}
// Connect to the test redis instance (unlike TestSchedulerWhenRedisDown above).
scheduler := NewScheduler(
getRedisConnOpt(t),
&SchedulerOpts{
PreEnqueueFunc: preHandler,
PostEnqueueFunc: postHandler,
},
)
task := NewTask("test", nil)
if _, err := scheduler.Register("@every 3s", task); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
// Scheduler should attempt to enqueue the task three times (every 3s).
time.Sleep(10 * time.Second)
scheduler.Shutdown()
preMu.Lock()
if preCounter != 3 {
t.Errorf("PreEnqueueFunc was called %d times, want 3", preCounter)
}
preMu.Unlock()
postMu.Lock()
if postCounter != 3 {
t.Errorf("PostEnqueueFunc was called %d times, want 3", postCounter)
}
postMu.Unlock()
}


@ -62,7 +62,7 @@ func (mux *ServeMux) Handler(t *Task) (h Handler, pattern string) {
mux.mu.RLock()
defer mux.mu.RUnlock()
h, pattern = mux.match(t.Type)
h, pattern = mux.match(t.Type())
if h == nil {
h, pattern = NotFoundHandler(), ""
}
@ -98,7 +98,7 @@ func (mux *ServeMux) Handle(pattern string, handler Handler) {
mux.mu.Lock()
defer mux.mu.Unlock()
if pattern == "" {
if strings.TrimSpace(pattern) == "" {
panic("asynq: invalid pattern")
}
if handler == nil {
@ -144,14 +144,12 @@ func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *T
func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
mux.mu.Lock()
defer mux.mu.Unlock()
for _, fn := range mws {
mux.mws = append(mux.mws, fn)
}
mux.mws = append(mux.mws, mws...)
}
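// An example middleware (an illustrative sketch; logMiddleware is not part of
// this package):
//
//	func logMiddleware(next Handler) Handler {
//		return HandlerFunc(func(ctx context.Context, t *Task) error {
//			start := time.Now()
//			err := next.ProcessTask(ctx, t)
//			log.Printf("task %q took %v", t.Type(), time.Since(start))
//			return err
//		})
//	}
//
//	mux.Use(logMiddleware)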
// NotFound returns an error indicating that the handler was not found for the given task.
func NotFound(ctx context.Context, task *Task) error {
return fmt.Errorf("handler not found for task %q", task.Type)
return fmt.Errorf("handler not found for task %q", task.Type())
}
// NotFoundHandler returns a simple task handler that returns a ``not found`` error.


@ -68,7 +68,7 @@ func TestServeMux(t *testing.T) {
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
}
}
}
@ -124,7 +124,7 @@ func TestServeMuxNotFound(t *testing.T) {
task := NewTask(tc.typename, nil)
err := mux.ProcessTask(context.Background(), task)
if err == nil {
t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type)
t.Errorf("ProcessTask did not return error for task %q, should return 'not found' error", task.Type())
}
}
}
@ -164,7 +164,7 @@ func TestServeMuxMiddlewares(t *testing.T) {
}
if called != tc.want {
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type, tc.want)
t.Errorf("%q handler was called for task %q, want %q to be called", called, task.Type(), tc.want)
}
}
}

server.go

@ -9,7 +9,7 @@ import (
"errors"
"fmt"
"math"
"math/rand"
"math/rand/v2"
"runtime"
"strings"
"sync"
@ -18,34 +18,79 @@ import (
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
)
// Server is responsible for managing the background-task processing.
// Server is responsible for task processing and task lifecycle management.
//
// Server pulls tasks off queues and processes them.
// If the processing of a task is unsuccessful, server will
// schedule it for a retry.
// If the processing of a task is unsuccessful, server will schedule it for a retry.
//
// A task will be retried until either the task gets processed successfully
// or until it reaches its max retry count.
//
// If a task exhausts its retries, it will be moved to the "dead" queue and
// will be kept in the queue for some time until a certain condition is met
// (e.g., queue size reaches a certain limit, or the task has been in the
// queue for a certain amount of time).
// If a task exhausts its retries, it will be moved to the archive and
// will be kept in the archive set.
// Note that the archive size is finite and once it reaches its max size,
// the oldest tasks in the archive will be deleted.
type Server struct {
logger *log.Logger
broker base.Broker
// When a Server has been created with an existing Redis connection, we do
// not want to close it.
sharedConnection bool
status *base.ServerStatus
state *serverState
// wait group to wait for all goroutines to finish.
wg sync.WaitGroup
scheduler *scheduler
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
wg sync.WaitGroup
forwarder *forwarder
processor *processor
syncer *syncer
heartbeater *heartbeater
subscriber *subscriber
recoverer *recoverer
healthchecker *healthchecker
janitor *janitor
aggregator *aggregator
}
type serverState struct {
mu sync.Mutex
value serverStateValue
}
type serverStateValue int
const (
// StateNew represents a new server. Server begins in
// this state and then transitions to srvStateActive when
// Start or Run is called.
srvStateNew serverStateValue = iota
// StateActive indicates the server is up and active.
srvStateActive
// StateStopped indicates the server is up but no longer processing new tasks.
srvStateStopped
// StateClosed indicates the server has been shutdown.
srvStateClosed
)
var serverStates = []string{
"new",
"active",
"stopped",
"closed",
}
func (s serverStateValue) String() string {
if srvStateNew <= s && s <= srvStateClosed {
return serverStates[s]
}
return "unknown status"
}
// Config specifies the server's background-task processing behavior.
@ -53,17 +98,36 @@ type Config struct {
// Maximum number of concurrent processing of tasks.
//
// If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the currennt process.
// to the number of CPUs usable by the current process.
Concurrency int
// BaseContext optionally specifies a function that returns the base context for Handler invocations on this server.
//
// If BaseContext is nil, the default is context.Background().
// If this is defined, then it MUST return a non-nil context
BaseContext func() context.Context
// TaskCheckInterval specifies the interval between checks for new tasks to process when all queues are empty.
//
// If unset, zero or a negative value, the interval is set to 1 second.
//
// Note: Setting this value too low may add significant load to redis.
//
// By default, TaskCheckInterval is set to 1 second.
TaskCheckInterval time.Duration
// Function to calculate retry delay for a failed task.
//
// By default, it uses an exponential backoff algorithm to calculate the delay.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
RetryDelayFunc func(n int, e error, t *Task) time.Duration
RetryDelayFunc RetryDelayFunc
// Predicate function to determine whether the error returned from Handler is a failure.
// If the function returns false, Server will not increment the retried counter for the task,
// and Server won't record the queue stats (processed and failed stats) to avoid skewing the error
// rate of the queue.
//
// By default, if the given error is non-nil the function returns true.
IsFailure func(error) bool
// List of queues to process with given priority value. Keys are the names of the
// queues and values are associated priority value.
@ -73,11 +137,13 @@ type Config struct {
// Priority is treated as follows to avoid starving low priority queues.
//
// Example:
//
// Queues: map[string]int{
// "critical": 6,
// "default": 3,
// "low": 1,
// }
//
// With the above config and given that all queues are not empty, the tasks
// in "critical", "default", "low" should be processed 60%, 30%, 10% of
// the time respectively.
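A hedged sketch of the weighted-queue setup this doc comment describes; the address, queue names, and weights are illustrative:

```go
package example

import "github.com/hibiken/asynq"

// newWeightedServer mirrors the 6/3/1 example from the doc comment above.
func newWeightedServer() *asynq.Server {
	return asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		Concurrency: 10,
		Queues: map[string]int{
			"critical": 6, // ~60% of processing time when all queues are busy
			"default":  3, // ~30%
			"low":      1, // ~10%
		},
	})
}
```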
@ -97,14 +163,26 @@ type Config struct {
// HandleError is invoked only if the task handler returns a non-nil error.
//
// Example:
// func reportError(task *asynq.Task, err error, retried, maxRetry int) {
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
// }
// errorReportingService.Notify(err)
// })
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
// func reportError(ctx context.Context, task *asynq.Task, err error) {
// retried, _ := asynq.GetRetryCount(ctx)
// maxRetry, _ := asynq.GetMaxRetry(ctx)
// if retried >= maxRetry {
// err = fmt.Errorf("retry exhausted for task %s: %w", task.Type(), err)
// }
// errorReportingService.Notify(err)
// }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
//
// We can also handle panic errors, for example:
//
// func reportError(ctx context.Context, task *asynq.Task, err error) {
// if asynq.IsPanicError(err) {
// errorReportingService.Notify(err)
// }
// }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance.
@ -122,22 +200,102 @@ type Config struct {
//
// If unset or zero, default timeout of 8 seconds is used.
ShutdownTimeout time.Duration
// HealthCheckFunc is called periodically with any errors encountered during ping to the
// connected redis server.
HealthCheckFunc func(error)
// HealthCheckInterval specifies the interval between healthchecks.
//
// If unset or zero, the interval is set to 15 seconds.
HealthCheckInterval time.Duration
// DelayedTaskCheckInterval specifies the interval between checks run on 'scheduled' and 'retry'
// tasks, and forwarding them to 'pending' state if they are ready to be processed.
//
// If unset or zero, the interval is set to 5 seconds.
DelayedTaskCheckInterval time.Duration
// GroupGracePeriod specifies the amount of time the server will wait for an incoming task before aggregating
// the tasks in a group. If an incoming task is received within this period, the server will wait for another
// period of the same length, up to GroupMaxDelay if specified.
//
// If unset or zero, the grace period is set to 1 minute.
// Minimum duration for GroupGracePeriod is 1 second. If value specified is less than a second, the call to
// NewServer will panic.
GroupGracePeriod time.Duration
// GroupMaxDelay specifies the maximum amount of time the server will wait for incoming tasks before aggregating
// the tasks in a group.
//
// If unset or zero, no delay limit is used.
GroupMaxDelay time.Duration
// GroupMaxSize specifies the maximum number of tasks that can be aggregated into a single task within a group.
// If GroupMaxSize is reached, the server will aggregate the tasks into one immediately.
//
// If unset or zero, no size limit is used.
GroupMaxSize int
// GroupAggregator specifies the aggregation function used to aggregate multiple tasks in a group into one task.
//
// If unset or nil, the group aggregation feature will be disabled on the server.
GroupAggregator GroupAggregator
// JanitorInterval specifies the average interval of janitor checks for expired completed tasks.
//
// If unset or zero, default interval of 8 seconds is used.
JanitorInterval time.Duration
// JanitorBatchSize specifies the number of expired completed tasks to be deleted in one run.
//
// If unset or zero, default batch size of 100 is used.
// Avoid setting a large batch size, to prevent long-running scripts.
JanitorBatchSize int
}
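Pulling the new knobs together, a hedged sketch of a Config that sets each of the intervals and group options documented above; every value is illustrative, not a recommendation:

```go
package example

import (
	"time"

	"github.com/hibiken/asynq"
)

// exampleConfig exercises the tuning fields documented above.
var exampleConfig = asynq.Config{
	Concurrency:              10,
	TaskCheckInterval:        time.Second,
	HealthCheckInterval:      15 * time.Second,
	DelayedTaskCheckInterval: 5 * time.Second,
	GroupGracePeriod:         time.Minute, // must be >= 1s or NewServer panics
	GroupMaxDelay:            10 * time.Minute,
	GroupMaxSize:             20,
	JanitorInterval:          8 * time.Second,
	JanitorBatchSize:         100, // keep modest to avoid long-running redis scripts
}
```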
// An ErrorHandler handles errors returned by the task handler.
// GroupAggregator aggregates a group of tasks into one before the tasks are passed to the Handler.
type GroupAggregator interface {
// Aggregate aggregates the given tasks in a group with the given group name,
// and returns a new task which is the aggregation of those tasks.
//
// Use NewTask(typename, payload, opts...) to set any options for the aggregated task.
// The Queue option, if provided, will be ignored and the aggregated task will always be enqueued
// to the same queue the group belonged to.
Aggregate(group string, tasks []*Task) *Task
}
// The GroupAggregatorFunc type is an adapter to allow the use of ordinary functions as a GroupAggregator.
// If f is a function with the appropriate signature, GroupAggregatorFunc(f) is a GroupAggregator that calls f.
type GroupAggregatorFunc func(group string, tasks []*Task) *Task
// Aggregate calls fn(group, tasks)
func (fn GroupAggregatorFunc) Aggregate(group string, tasks []*Task) *Task {
return fn(group, tasks)
}
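A hedged sketch of a GroupAggregatorFunc that concatenates grouped payloads into one task; the typename prefix and newline separator are illustrative:

```go
package example

import (
	"strings"

	"github.com/hibiken/asynq"
)

// joinPayloads combines all tasks in a group into a single aggregate task.
var joinPayloads = asynq.GroupAggregatorFunc(func(group string, tasks []*asynq.Task) *asynq.Task {
	var b strings.Builder
	for _, t := range tasks {
		b.Write(t.Payload())
		b.WriteByte('\n')
	}
	return asynq.NewTask("aggregated:"+group, []byte(b.String()))
})

// In Config: GroupAggregator: joinPayloads
```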
// An ErrorHandler handles an error that occurred during task processing.
type ErrorHandler interface {
HandleError(task *Task, err error, retried, maxRetry int)
HandleError(ctx context.Context, task *Task, err error)
}
// The ErrorHandlerFunc type is an adapter to allow the use of ordinary functions as an ErrorHandler.
// If f is a function with the appropriate signature, ErrorHandlerFunc(f) is an ErrorHandler that calls f.
type ErrorHandlerFunc func(task *Task, err error, retried, maxRetry int)
type ErrorHandlerFunc func(ctx context.Context, task *Task, err error)
// HandleError calls fn(task, err, retried, maxRetry)
func (fn ErrorHandlerFunc) HandleError(task *Task, err error, retried, maxRetry int) {
fn(task, err, retried, maxRetry)
// HandleError calls fn(ctx, task, err)
func (fn ErrorHandlerFunc) HandleError(ctx context.Context, task *Task, err error) {
fn(ctx, task, err)
}
// RetryDelayFunc calculates the retry delay duration for a failed task given
// the retry count, error, and the task.
//
// n is the number of times the task has been retried.
// e is the error returned by the task handler.
// t is the task in question.
type RetryDelayFunc func(n int, e error, t *Task) time.Duration
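A hedged sketch of a custom function with this signature; the linear 10-second base is illustrative:

```go
package example

import (
	"time"

	"github.com/hibiken/asynq"
)

// linearDelay makes the n-th retry wait n*10s regardless of the error.
func linearDelay(n int, e error, t *asynq.Task) time.Duration {
	return time.Duration(n) * 10 * time.Second
}

// In Config: RetryDelayFunc: linearDelay
```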
// Logger supports logging at various log levels.
type Logger interface {
// Debug logs a message at Debug level.
@ -238,32 +396,79 @@ func toInternalLogLevel(l LogLevel) log.Level {
panic(fmt.Sprintf("asynq: unexpected log level: %v", l))
}
// Formula taken from https://github.com/mperham/sidekiq.
func defaultDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1))
// DefaultRetryDelayFunc is the default RetryDelayFunc used if one is not specified in Config.
// It uses exponential back-off strategy to calculate the retry delay.
func DefaultRetryDelayFunc(n int, e error, t *Task) time.Duration {
// Formula taken from https://github.com/mperham/sidekiq.
s := int(math.Pow(float64(n), 4)) + 15 + (rand.IntN(30) * (n + 1))
return time.Duration(s) * time.Second
}
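// For intuition (rand.IntN(30) yields 0-29): the first retry (n=1) is delayed
// 16-74s, the second (n=2) 31-118s, and the third (n=3) 96-212s.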
func defaultIsFailureFunc(err error) bool { return err != nil }
var defaultQueueConfig = map[string]int{
base.DefaultQueueName: 1,
}
const defaultShutdownTimeout = 8 * time.Second
const (
defaultTaskCheckInterval = 1 * time.Second
defaultShutdownTimeout = 8 * time.Second
defaultHealthCheckInterval = 15 * time.Second
defaultDelayedTaskCheckInterval = 5 * time.Second
defaultGroupGracePeriod = 1 * time.Minute
defaultJanitorInterval = 8 * time.Second
defaultJanitorBatchSize = 100
)
// NewServer returns a new Server given a redis connection option
// and background processing configuration.
// and server configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server {
redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
}
server := NewServerFromRedisClient(redisClient, cfg)
server.sharedConnection = false
return server
}
// NewServerFromRedisClient returns a new instance of Server given a redis.UniversalClient
// and server configuration.
// Warning: The underlying redis connection pool will not be closed by Asynq; you are responsible for closing it.
func NewServerFromRedisClient(c redis.UniversalClient, cfg Config) *Server {
baseCtxFn := cfg.BaseContext
if baseCtxFn == nil {
baseCtxFn = context.Background
}
n := cfg.Concurrency
if n < 1 {
n = runtime.NumCPU()
}
taskCheckInterval := cfg.TaskCheckInterval
if taskCheckInterval <= 0 {
taskCheckInterval = defaultTaskCheckInterval
}
delayFunc := cfg.RetryDelayFunc
if delayFunc == nil {
delayFunc = defaultDelayFunc
delayFunc = DefaultRetryDelayFunc
}
isFailureFunc := cfg.IsFailure
if isFailureFunc == nil {
isFailureFunc = defaultIsFailureFunc
}
queues := make(map[string]int)
for qname, p := range cfg.Queues {
if err := base.ValidateQueueName(qname); err != nil {
continue // ignore invalid queue names
}
if p > 0 {
queues[qname] = p
}
@ -271,10 +476,26 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
if len(queues) == 0 {
queues = defaultQueueConfig
}
var qnames []string
for q := range queues {
qnames = append(qnames, q)
}
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout == 0 {
shutdownTimeout = defaultShutdownTimeout
}
healthcheckInterval := cfg.HealthCheckInterval
if healthcheckInterval == 0 {
healthcheckInterval = defaultHealthCheckInterval
}
// TODO: Create a helper to check for zero value and fall back to default (e.g. getDurationOrDefault())
groupGracePeriod := cfg.GroupGracePeriod
if groupGracePeriod == 0 {
groupGracePeriod = defaultGroupGracePeriod
}
if groupGracePeriod < time.Second {
panic("GroupGracePeriod cannot be less than a second")
}
logger := log.NewLogger(cfg.Logger)
loglevel := cfg.LogLevel
if loglevel == level_unspecified {
@ -282,11 +503,11 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
}
logger.SetLevel(toInternalLogLevel(loglevel))
rdb := rdb.NewRDB(createRedisClient(r))
starting := make(chan *base.TaskMessage)
rdb := rdb.NewRDB(c)
starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
status := base.NewServerStatus(base.StatusIdle)
srvState := &serverState{value: srvStateNew}
cancels := base.NewCancelations()
syncer := newSyncer(syncerParams{
@ -301,14 +522,19 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
status: status,
state: srvState,
starting: starting,
finished: finished,
})
scheduler := newScheduler(schedulerParams{
delayedTaskCheckInterval := cfg.DelayedTaskCheckInterval
if delayedTaskCheckInterval == 0 {
delayedTaskCheckInterval = defaultDelayedTaskCheckInterval
}
forwarder := newForwarder(forwarderParams{
logger: logger,
broker: rdb,
interval: 5 * time.Second,
queues: qnames,
interval: delayedTaskCheckInterval,
})
subscriber := newSubscriber(subscriberParams{
logger: logger,
@ -316,28 +542,80 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
cancelations: cancels,
})
processor := newProcessor(processorParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
taskCheckInterval: taskCheckInterval,
baseCtxFn: baseCtxFn,
isFailureFunc: isFailureFunc,
syncCh: syncCh,
cancelations: cancels,
concurrency: n,
queues: queues,
strictPriority: cfg.StrictPriority,
errHandler: cfg.ErrorHandler,
shutdownTimeout: shutdownTimeout,
starting: starting,
finished: finished,
})
recoverer := newRecoverer(recovererParams{
logger: logger,
broker: rdb,
retryDelayFunc: delayFunc,
isFailureFunc: isFailureFunc,
queues: qnames,
interval: 1 * time.Minute,
})
healthchecker := newHealthChecker(healthcheckerParams{
logger: logger,
broker: rdb,
interval: healthcheckInterval,
healthcheckFunc: cfg.HealthCheckFunc,
})
janitorInterval := cfg.JanitorInterval
if janitorInterval == 0 {
janitorInterval = defaultJanitorInterval
}
janitorBatchSize := cfg.JanitorBatchSize
if janitorBatchSize == 0 {
janitorBatchSize = defaultJanitorBatchSize
}
if janitorBatchSize > defaultJanitorBatchSize {
logger.Warnf("Janitor batch size of %d is greater than the recommended batch size of %d. "+
"This might cause a long-running script", janitorBatchSize, defaultJanitorBatchSize)
}
janitor := newJanitor(janitorParams{
logger: logger,
broker: rdb,
queues: qnames,
interval: janitorInterval,
batchSize: janitorBatchSize,
})
aggregator := newAggregator(aggregatorParams{
logger: logger,
broker: rdb,
queues: qnames,
gracePeriod: groupGracePeriod,
maxDelay: cfg.GroupMaxDelay,
maxSize: cfg.GroupMaxSize,
groupAggregator: cfg.GroupAggregator,
})
return &Server{
logger: logger,
broker: rdb,
status: status,
scheduler: scheduler,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
logger: logger,
broker: rdb,
sharedConnection: true,
state: srvState,
forwarder: forwarder,
processor: processor,
syncer: syncer,
heartbeater: heartbeater,
subscriber: subscriber,
recoverer: recoverer,
healthchecker: healthchecker,
janitor: janitor,
aggregator: aggregator,
}
}
@ -346,8 +624,17 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
// ProcessTask should return nil if the processing of a task
// is successful.
//
// If ProcessTask return a non-nil error or panics, the task
// will be retried after delay.
// If ProcessTask returns a non-nil error or panics, the task
// will be retried after delay if retry-count is remaining,
// otherwise the task will be archived.
//
// One exception to this rule is when ProcessTask returns a SkipRetry error.
// If the returned error is SkipRetry or an error wraps SkipRetry, retry is
// skipped and the task will be immediately archived instead.
//
// Another exception to this rule is when ProcessTask returns a RevokeTask error.
// If the returned error is RevokeTask or an error wraps RevokeTask, the task
// will not be retried or archived.
type Handler interface {
ProcessTask(context.Context, *Task) error
}
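A hedged sketch of the SkipRetry rule above: a permanently malformed payload is archived immediately instead of retried (the payload shape and task semantics are illustrative):

```go
package example

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/hibiken/asynq"
)

// handleImageResize wraps asynq.SkipRetry so unrecoverable inputs skip retries.
func handleImageResize(ctx context.Context, t *asynq.Task) error {
	var p struct{ SourceURL string }
	if err := json.Unmarshal(t.Payload(), &p); err != nil {
		return fmt.Errorf("malformed payload: %v: %w", err, asynq.SkipRetry)
	}
	// ... resize the image; a transient error returned here is retried as usual ...
	return nil
}
```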
@ -363,90 +650,138 @@ func (fn HandlerFunc) ProcessTask(ctx context.Context, task *Task) error {
return fn(ctx, task)
}
// ErrServerStopped indicates that the operation is now illegal because of the server being stopped.
var ErrServerStopped = errors.New("asynq: the server has been stopped")
// ErrServerClosed indicates that the operation is now illegal because the server has been shut down.
var ErrServerClosed = errors.New("asynq: Server closed")
// Run starts the background-task processing and blocks until
// Run starts the task processing and blocks until
// an os signal to exit the program is received. Once it receives
// a signal, it gracefully shuts down all active workers and other
// goroutines to process the tasks.
//
// Run returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
// Run returns any error encountered at server startup time.
// If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Run(handler Handler) error {
if err := srv.Start(handler); err != nil {
return err
}
srv.waitForSignals()
srv.Stop()
srv.Shutdown()
return nil
}
// Start starts the worker server. Once the server has started,
// it pulls tasks off queues and starts a worker goroutine for each task.
// Tasks are processed concurrently by the workers up to the number of
// concurrency specified at the initialization time.
// it pulls tasks off queues and starts a worker goroutine for each task,
// and then calls the Handler to process it.
// Tasks are processed concurrently by the workers, up to the concurrency
// level specified in Config.Concurrency.
//
// Start returns any error encountered during server startup time.
// If the server has already been stopped, ErrServerStopped is returned.
// Start returns any error encountered at server startup time.
// If the server has already been shutdown, ErrServerClosed is returned.
func (srv *Server) Start(handler Handler) error {
if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler")
}
switch srv.status.Get() {
case base.StatusRunning:
return fmt.Errorf("asynq: the server is already running")
case base.StatusStopped:
return ErrServerStopped
}
srv.status.Set(base.StatusRunning)
srv.processor.handler = handler
if err := srv.start(); err != nil {
return err
}
srv.logger.Info("Starting processing")
srv.heartbeater.start(&srv.wg)
srv.healthchecker.start(&srv.wg)
srv.subscriber.start(&srv.wg)
srv.syncer.start(&srv.wg)
srv.scheduler.start(&srv.wg)
srv.recoverer.start(&srv.wg)
srv.forwarder.start(&srv.wg)
srv.processor.start(&srv.wg)
srv.janitor.start(&srv.wg)
srv.aggregator.start(&srv.wg)
return nil
}
// Stop stops the worker server.
// Checks server state and returns an error if pre-condition is not met.
// Otherwise it sets the server state to active.
func (srv *Server) start() error {
srv.state.mu.Lock()
defer srv.state.mu.Unlock()
switch srv.state.value {
case srvStateActive:
return fmt.Errorf("asynq: the server is already running")
case srvStateStopped:
return fmt.Errorf("asynq: the server is in the stopped state. Waiting for shutdown.")
case srvStateClosed:
return ErrServerClosed
}
srv.state.value = srvStateActive
return nil
}
// Shutdown gracefully shuts down the server.
// It gracefully closes all active workers. The server will wait for
// active workers to finish processing tasks for duration specified in Config.ShutdownTimeout.
// If worker didn't finish processing a task during the timeout, the task will be pushed back to Redis.
func (srv *Server) Stop() {
switch srv.status.Get() {
case base.StatusIdle, base.StatusStopped:
func (srv *Server) Shutdown() {
srv.state.mu.Lock()
if srv.state.value == srvStateNew || srv.state.value == srvStateClosed {
srv.state.mu.Unlock()
// server is not running, do nothing and return.
return
}
srv.state.value = srvStateClosed
srv.state.mu.Unlock()
srv.logger.Info("Starting graceful shutdown")
// Note: The order of termination is important.
// Note: The order of shutdown is important.
// Sender goroutines should be terminated before the receiver goroutines.
// processor -> syncer (via syncCh)
// processor -> heartbeater (via starting, finished channels)
srv.scheduler.terminate()
srv.processor.terminate()
srv.syncer.terminate()
srv.subscriber.terminate()
srv.heartbeater.terminate()
srv.forwarder.shutdown()
srv.processor.shutdown()
srv.recoverer.shutdown()
srv.syncer.shutdown()
srv.subscriber.shutdown()
srv.janitor.shutdown()
srv.aggregator.shutdown()
srv.healthchecker.shutdown()
srv.heartbeater.shutdown()
srv.wg.Wait()
srv.broker.Close()
srv.status.Set(base.StatusStopped)
if !srv.sharedConnection {
srv.broker.Close()
}
srv.logger.Info("Exiting")
}
// Quiet signals the server to stop pulling new tasks off queues.
// Quiet should be used before stopping the server.
func (srv *Server) Quiet() {
// Stop signals the server to stop pulling new tasks off queues.
// Stop can be used before shutting down the server to ensure that all
// currently active tasks are processed before server shutdown.
//
// Stop does not shut down the server; make sure to call Shutdown before exiting.
func (srv *Server) Stop() {
srv.state.mu.Lock()
if srv.state.value != srvStateActive {
// Invalid call to Stop, server can only go from Active state to Stopped state.
srv.state.mu.Unlock()
return
}
srv.state.value = srvStateStopped
srv.state.mu.Unlock()
srv.logger.Info("Stopping processor")
srv.processor.stop()
srv.status.Set(base.StatusQuiet)
srv.logger.Info("Processor stopped")
}
// Ping performs a ping against the redis connection.
//
// This is an alternative to the HealthCheckFunc available in the Config object.
func (srv *Server) Ping() error {
srv.state.mu.Lock()
defer srv.state.mu.Unlock()
if srv.state.value == srvStateClosed {
return nil
}
return srv.broker.Ping()
}
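Putting the lifecycle together, a hedged sketch of the intended call order; the address and concurrency are illustrative:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{Concurrency: 10})
	mux := asynq.NewServeMux()
	// ... register handlers on mux ...
	if err := srv.Start(mux); err != nil {
		log.Fatal(err)
	}
	// ... later, on an exit signal:
	srv.Stop()     // active -> stopped: stop pulling new tasks
	srv.Shutdown() // stopped -> closed: wait up to Config.ShutdownTimeout for active tasks
}
```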

View File

@ -13,24 +13,13 @@ import (
"github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker"
"github.com/hibiken/asynq/internal/testutil"
"github.com/redis/go-redis/v9"
"go.uber.org/goleak"
)
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
r := &RedisClientOpt{
Addr: "localhost:6379",
DB: 15,
}
c := NewClient(r)
srv := NewServer(r, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
func testServer(t *testing.T, c *Client, srv *Server) {
// no-op handler
h := func(ctx context.Context, task *Task) error {
return nil
@ -41,38 +30,75 @@ func TestServer(t *testing.T) {
t.Fatal(err)
}
err = c.Enqueue(NewTask("send_email", map[string]interface{}{"recipient_id": 123}))
_, err = c.Enqueue(NewTask("send_email", testutil.JSON(map[string]interface{}{"recipient_id": 123})))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
err = c.EnqueueAt(time.Now().Add(time.Hour), NewTask("send_email", map[string]interface{}{"recipient_id": 456}))
_, err = c.Enqueue(NewTask("send_email", testutil.JSON(map[string]interface{}{"recipient_id": 456})), ProcessIn(1*time.Hour))
if err != nil {
t.Errorf("could not enqueue a task: %v", err)
}
srv.Stop()
srv.Shutdown()
}
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNone(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
c := NewClient(redisConnOpt)
defer c.Close()
srv := NewServer(redisConnOpt, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
testServer(t, c, srv)
}
func TestServerFromRedisClient(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNone(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
redisClient := redisConnOpt.MakeRedisClient().(redis.UniversalClient)
c := NewClientFromRedisClient(redisClient)
srv := NewServerFromRedisClient(redisClient, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
testServer(t, c, srv)
err := c.Close()
if err == nil {
t.Error("client.Close() should have failed because of a shared client but it didn't")
}
}
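Outside the test, a hedged sketch of sharing one redis.UniversalClient between a Client and a Server; the address is illustrative, and closing the connection remains the caller's job:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
	"github.com/redis/go-redis/v9"
)

func main() {
	// The caller owns this connection; asynq will not close a shared client.
	rc := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	client := asynq.NewClientFromRedisClient(rc)
	srv := asynq.NewServerFromRedisClient(rc, asynq.Config{Concurrency: 10})
	_ = client // enqueue tasks with client; run srv.Start/Run with a handler
	srv.Shutdown()
	if err := rc.Close(); err != nil {
		log.Printf("closing redis: %v", err)
	}
}
```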
func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v7/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNone(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
done := make(chan struct{})
// Make sure server exits when receiving TERM signal.
go func() {
time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
_ = syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{}
}()
go func() {
select {
case <-time.After(10 * time.Second):
t.Fatal("server did not stop after receiving TERM signal")
panic("server did not stop after receiving TERM signal")
case <-done:
}
}()
@ -83,30 +109,30 @@ func TestServerRun(t *testing.T) {
}
}
func TestServerErrServerStopped(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
func TestServerErrServerClosed(t *testing.T) {
srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
}
srv.Stop()
srv.Shutdown()
err := srv.Start(handler)
if err != ErrServerStopped {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerStopped error", err)
if err != ErrServerClosed {
t.Errorf("Restarting server: (*Server).Start(handler) = %v, want ErrServerClosed error", err)
}
}
func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
err := srv.Start(nil)
if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
srv.Stop()
srv.Shutdown()
}
}
func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
handler := NewServeMux()
if err := srv.Start(handler); err != nil {
t.Fatal(err)
@ -115,7 +141,7 @@ func TestServerErrServerRunning(t *testing.T) {
if err == nil {
t.Error("Calling (*Server).Start(handler) on already running server did not return error")
}
srv.Stop()
srv.Shutdown()
}
func TestServerWithRedisDown(t *testing.T) {
@ -127,9 +153,9 @@ func TestServerWithRedisDown(t *testing.T) {
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel})
srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
@ -147,7 +173,7 @@ func TestServerWithRedisDown(t *testing.T) {
time.Sleep(3 * time.Second)
srv.Stop()
srv.Shutdown()
}
func TestServerWithFlakyBroker(t *testing.T) {
@ -159,19 +185,20 @@ func TestServerWithFlakyBroker(t *testing.T) {
}()
r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: redisAddr, DB: redisDB}, Config{LogLevel: testLogLevel})
redisConnOpt := getRedisConnOpt(t)
srv := NewServer(redisConnOpt, Config{LogLevel: testLogLevel})
srv.broker = testBroker
srv.scheduler.broker = testBroker
srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker
srv.processor.broker = testBroker
srv.subscriber.broker = testBroker
c := NewClient(RedisClientOpt{Addr: redisAddr, DB: redisDB})
c := NewClient(redisConnOpt)
h := func(ctx context.Context, task *Task) error {
// force task retry.
if task.Type == "bad_task" {
return fmt.Errorf("could not process %q", task.Type)
if task.Type() == "bad_task" {
return fmt.Errorf("could not process %q", task.Type())
}
time.Sleep(2 * time.Second)
return nil
@ -183,15 +210,15 @@ func TestServerWithFlakyBroker(t *testing.T) {
}
for i := 0; i < 10; i++ {
err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
_, err := c.Enqueue(NewTask("enqueued", nil), MaxRetry(i))
if err != nil {
t.Fatal(err)
}
err = c.Enqueue(NewTask("bad_task", nil))
_, err = c.Enqueue(NewTask("bad_task", nil))
if err != nil {
t.Fatal(err)
}
err = c.EnqueueIn(time.Duration(i)*time.Second, NewTask("scheduled", nil))
_, err = c.Enqueue(NewTask("scheduled", nil), ProcessIn(time.Duration(i)*time.Second))
if err != nil {
t.Fatal(err)
}
@ -207,7 +234,7 @@ func TestServerWithFlakyBroker(t *testing.T) {
time.Sleep(3 * time.Second)
srv.Stop()
srv.Shutdown()
}
func TestLogLevel(t *testing.T) {

View File

@ -1,4 +1,4 @@
// +build linux bsd darwin
//go:build linux || dragonfly || freebsd || netbsd || openbsd || darwin
package asynq
@ -22,9 +22,18 @@ func (srv *Server) waitForSignals() {
for {
sig := <-sigs
if sig == unix.SIGTSTP {
srv.Quiet()
srv.Stop()
continue
} else {
srv.Stop()
break
}
break
}
}
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, unix.SIGTERM, unix.SIGINT)
<-sigs
}

View File

@ -1,4 +1,4 @@
// +build windows
//go:build windows
package asynq
@ -20,3 +20,10 @@ func (srv *Server) waitForSignals() {
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}
func (s *Scheduler) waitForSignals() {
s.logger.Info("Send signal TERM or INT to stop the scheduler")
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, windows.SIGTERM, windows.SIGINT)
<-sigs
}

View File

@ -8,7 +8,7 @@ import (
"sync"
"time"
"github.com/go-redis/redis/v7"
"github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
@ -20,7 +20,7 @@ type subscriber struct {
// channel to communicate back to the long running "subscriber" goroutine.
done chan struct{}
// cancelations hold cancel functions for all in-progress tasks.
// cancelations hold cancel functions for all active tasks.
cancelations *base.Cancelations
// time to wait before retrying to connect to redis.
@ -43,7 +43,7 @@ func newSubscriber(params subscriberParams) *subscriber {
}
}
func (s *subscriber) terminate() {
func (s *subscriber) shutdown() {
s.logger.Debug("Subscriber shutting down...")
// Signal the subscriber goroutine to stop.
s.done <- struct{}{}

View File

@ -16,6 +16,7 @@ import (
func TestSubscriber(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
tests := []struct {
@ -45,7 +46,7 @@ func TestSubscriber(t *testing.T) {
})
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
defer subscriber.shutdown()
// wait for subscriber to establish connection to pubsub channel
time.Sleep(time.Second)
@ -76,6 +77,7 @@ func TestSubscriberWithRedisDown(t *testing.T) {
}
}()
r := rdb.NewRDB(setup(t))
defer r.Close()
testBroker := testbroker.NewTestBroker(r)
cancelations := base.NewCancelations()
@ -89,7 +91,7 @@ func TestSubscriberWithRedisDown(t *testing.T) {
testBroker.Sleep() // simulate a situation where subscriber cannot connect to redis.
var wg sync.WaitGroup
subscriber.start(&wg)
defer subscriber.terminate()
defer subscriber.shutdown()
time.Sleep(2 * time.Second) // subscriber should wait and retry connecting to redis.

View File

@ -26,8 +26,9 @@ type syncer struct {
}
type syncRequest struct {
fn func() error // sync operation
errMsg string // error message
fn func() error // sync operation
errMsg string // error message
deadline time.Time // request should be dropped if deadline has been exceeded
}
type syncerParams struct {
@ -45,7 +46,7 @@ func newSyncer(params syncerParams) *syncer {
}
}
func (s *syncer) terminate() {
func (s *syncer) shutdown() {
s.logger.Debug("Syncer shutting down...")
// Signal the syncer goroutine to stop.
s.done <- struct{}{}
@ -72,6 +73,9 @@ func (s *syncer) start(wg *sync.WaitGroup) {
case <-time.After(s.interval):
var temp []*syncRequest
for _, req := range requests {
if req.deadline.Before(time.Now()) {
continue // drop stale request
}
if err := req.fn(); err != nil {
temp = append(temp, req)
}

View File

@ -5,14 +5,15 @@
package asynq
import (
"context"
"fmt"
"sync"
"testing"
"time"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func TestSyncer(t *testing.T) {
@ -22,8 +23,9 @@ func TestSyncer(t *testing.T) {
h.NewTaskMessage("gen_thumbnail", nil),
}
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
h.SeedInProgressQueue(t, r, inProgress)
h.SeedActiveQueue(t, r, inProgress, base.DefaultQueueName)
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
@ -34,22 +36,23 @@ func TestSyncer(t *testing.T) {
})
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
defer syncer.shutdown()
for _, msg := range inProgress {
m := msg
syncRequestCh <- &syncRequest{
fn: func() error {
return rdbClient.Done(m)
return rdbClient.Done(context.Background(), m)
},
deadline: time.Now().Add(5 * time.Minute),
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
gotInProgress := h.GetInProgressMessages(t, r)
if l := len(gotInProgress); l != 0 {
t.Errorf("%q has length %d; want 0", base.InProgressQueue, l)
gotActive := h.GetActiveMessages(t, r, base.DefaultQueueName)
if l := len(gotActive); l != 0 {
t.Errorf("%q has length %d; want 0", base.ActiveKey(base.DefaultQueueName), l)
}
}
@ -64,7 +67,7 @@ func TestSyncerRetry(t *testing.T) {
var wg sync.WaitGroup
syncer.start(&wg)
defer syncer.terminate()
defer syncer.shutdown()
var (
mu sync.Mutex
@ -85,8 +88,9 @@ func TestSyncerRetry(t *testing.T) {
}
syncRequestCh <- &syncRequest{
fn: requestFunc,
errMsg: "error",
fn: requestFunc,
errMsg: "error",
deadline: time.Now().Add(5 * time.Minute),
}
// allow syncer to retry
@ -98,3 +102,41 @@ func TestSyncerRetry(t *testing.T) {
}
mu.Unlock()
}
func TestSyncerDropsStaleRequests(t *testing.T) {
const interval = time.Second
syncRequestCh := make(chan *syncRequest)
syncer := newSyncer(syncerParams{
logger: testLogger,
requestsCh: syncRequestCh,
interval: interval,
})
var wg sync.WaitGroup
syncer.start(&wg)
var (
mu sync.Mutex
n int // number of times request has been processed
)
for i := 0; i < 10; i++ {
syncRequestCh <- &syncRequest{
fn: func() error {
mu.Lock()
n++
mu.Unlock()
return nil
},
deadline: time.Now().Add(time.Duration(-i) * time.Second), // already exceeded deadline
}
}
time.Sleep(2 * interval) // ensure that syncer runs at least once
syncer.shutdown()
mu.Lock()
if n != 0 {
t.Errorf("requests has been processed %d times, want 0", n)
}
mu.Unlock()
}

View File

@ -1,168 +1,57 @@
# Asynq CLI
Asynq CLI is a command line tool to monitor the tasks managed by `asynq` package.
Asynq CLI is a command line tool to monitor the queues and tasks managed by `asynq` package.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Stats](#stats)
- [History](#history)
- [Servers](#servers)
- [List](#list)
- [Enqueue](#enqueue)
- [Delete](#delete)
- [Kill](#kill)
- [Cancel](#cancel)
- [Pause](#pause)
- [Usage](#usage)
- [Config File](#config-file)
## Installation
In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynq
go install github.com/hibiken/asynq/tools/asynq@latest
This will create the asynq executable under your `$GOPATH/bin` directory.
## Quickstart
## Usage
The tool has a few commands to inspect the state of tasks and queues.
### Commands
Run `asynq help` to see all the available commands.
To view details on any command, use `asynq help <command> <subcommand>`.
- `asynq dash`
- `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete archive run deleteall archiveall runall]`
- `asynq server [ls]`
### Global flags
Asynq CLI needs to connect to a redis-server to inspect the state of queues and tasks. Use flags to specify the options to connect to the redis-server used by your application.
To connect to a redis cluster, pass `--cluster` and `--cluster_addrs` flags.
By default, CLI will try to connect to a redis server running at `localhost:6379`.
```
    --config string          config file to set flag default values (default is $HOME/.asynq.yaml)
-n, --db int                 redis database number (default is 0)
-h, --help                   help for asynq
-p, --password string        password to use when connecting to redis server
-u, --uri string             redis server URI (default "127.0.0.1:6379")
    --cluster                connect to redis cluster
    --cluster_addrs string   list of comma-separated redis server addresses
```

### Stats

Stats command gives the overview of the current state of tasks and queues. You can run it in conjunction with the `watch` command to repeatedly run `stats`.
Example:
watch -n 3 asynq stats
This will run the `asynq stats` command every 3 seconds.
![Gif](/docs/assets/asynq_stats.gif)
### History
History command shows the number of processed and failed tasks from the last x days.
By default, it shows the stats from the last 10 days. Use `--days` to specify the number of days.
Example:
asynq history --days=30
![Gif](/docs/assets/asynq_history.gif)
### Servers
Servers command shows the list of running worker servers pulling tasks from the given redis instance.
Example:
asynq servers
### List
List command shows all tasks in the specified state in a table format
Example:
asynq ls retry
asynq ls scheduled
asynq ls dead
asynq ls enqueued:default
asynq ls inprogress
### Enqueue
There are two commands to enqueue tasks.
Command `enq` takes a task ID and moves the task to **Enqueued** state. You can obtain the task ID by running `ls` command.
Example:
asynq enq d:1575732274:bnogo8gt6toe23vhef0g
Command `enqall` moves all tasks to **Enqueued** state from the specified state.
Example:
asynq enqall retry
Running the above command will move all **Retry** tasks to **Enqueued** state.
### Delete
There are two commands for task deletion.
Command `del` takes a task ID and deletes the task. You can obtain the task ID by running `ls` command.
Example:
asynq del r:1575732274:bnogo8gt6toe23vhef0g
Command `delall` deletes all tasks which are in the specified state.
Example:
asynq delall retry
Running the above command will delete all **Retry** tasks.
### Kill
There are two commands to kill (i.e. move to dead state) tasks.
Command `kill` takes a task ID and kills the task. You can obtain the task ID by running `ls` command.
Example:
asynq kill r:1575732274:bnogo8gt6toe23vhef0g
Command `killall` kills all tasks which are in the specified state.
Example:
asynq killall retry
Running the above command will move all **Retry** tasks to **Dead** state.
### Cancel
Command `cancel` takes a task ID and sends a cancelation signal to the goroutine processing the specified task.
You can obtain the task ID by running `ls` command.
The task should be in "in-progress" state.
Handler implementation needs to be context aware in order to actually stop processing.
Example:
asynq cancel bnogo8gt6toe23vhef0g
### Pause
Command `pause` pauses the specified queue. Tasks in paused queues are not processed by servers.
To resume processing from the queue, use `unpause` command.
To see which queues are currently paused, use `stats` command.
Example:
asynq pause email
asynq unpause email
## Config File
You can use a config file to set default values for the flags.
This is useful, for example when you have to connect to a remote redis server.
By default, `asynq` will try to read config file located in
`$HOME/.asynq.(yaml|json)`. You can specify the file location via `--config` flag.
`$HOME/.asynq.(yml|json)`. You can specify the file location via `--config` flag.
Config file example:
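A hedged sketch, assuming the keys mirror the global flag names above:

```
uri: 127.0.0.1:6379
db: 2
password: mypassword
```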

View File

@ -1,53 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// cancelCmd represents the cancel command
var cancelCmd = &cobra.Command{
Use: "cancel [task id]",
Short: "Sends a cancelation signal to the goroutine processing the specified task",
Long: `Cancel (asynq cancel) will send a cancelation signal to the goroutine processing
the specified task.
The command takes one argument which specifies the task to cancel.
The task should be in in-progress state.
Identifier for a task should be obtained by running "asynq ls" command.
Handler implementation needs to be context aware for cancelation signal to
actually cancel the processing.
Example: asynq cancel bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: cancel,
}
func init() {
rootCmd.AddCommand(cancelCmd)
}
func cancel(cmd *cobra.Command, args []string) {
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
err := r.PublishCancelation(args[0])
if err != nil {
fmt.Printf("could not send cancelation signal: %v\n", err)
os.Exit(1)
}
fmt.Printf("Successfully sent cancelation siganl for task %s\n", args[0])
}

tools/asynq/cmd/cron.go Normal file
View File

@ -0,0 +1,139 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"io"
"os"
"sort"
"time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/hibiken/asynq"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(cronCmd)
cronCmd.AddCommand(cronListCmd)
cronCmd.AddCommand(cronHistoryCmd)
cronHistoryCmd.Flags().Int("page", 1, "page number")
cronHistoryCmd.Flags().Int("size", 30, "page size")
}
var cronCmd = &cobra.Command{
Use: "cron <command> [flags]",
Short: "Manage cron",
Example: heredoc.Doc(`
$ asynq cron ls
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1`),
}
var cronListCmd = &cobra.Command{
Use: "list",
Aliases: []string{"ls"},
Short: "List cron entries",
Run: cronList,
}
var cronHistoryCmd = &cobra.Command{
Use: "history <entry_id> [<entry_id>...]",
Short: "Show history of each cron tasks",
Args: cobra.MinimumNArgs(1),
Run: cronHistory,
Example: heredoc.Doc(`
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 bf6a8594-cd03-4968-b36a-8572c5e160dd
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 --size=100
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 --page=2`),
}
func cronList(cmd *cobra.Command, args []string) {
inspector := createInspector()
entries, err := inspector.SchedulerEntries()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(entries) == 0 {
fmt.Println("No scheduler entries")
return
}
// Sort entries by spec.
sort.Slice(entries, func(i, j int) bool {
x, y := entries[i], entries[j]
return x.Spec < y.Spec
})
cols := []string{"EntryID", "Spec", "Type", "Payload", "Options", "Next", "Prev"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range entries {
fmt.Fprintf(w, tmpl, e.ID, e.Spec, e.Task.Type(), sprintBytes(e.Task.Payload()), e.Opts,
nextEnqueue(e.Next), prevEnqueue(e.Prev))
}
}
printTable(cols, printRows)
}
// Returns a string describing when the next enqueue will happen.
func nextEnqueue(nextEnqueueAt time.Time) string {
d := nextEnqueueAt.Sub(time.Now()).Round(time.Second)
if d < 0 {
return "Now"
}
return fmt.Sprintf("In %v", d)
}
// Returns a string describing when the previous enqueue was.
func prevEnqueue(prevEnqueuedAt time.Time) string {
if prevEnqueuedAt.IsZero() {
return "N/A"
}
return fmt.Sprintf("%v ago", time.Since(prevEnqueuedAt).Round(time.Second))
}
func cronHistory(cmd *cobra.Command, args []string) {
pageNum, err := cmd.Flags().GetInt("page")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
pageSize, err := cmd.Flags().GetInt("size")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
inspector := createInspector()
for i, entryID := range args {
if i > 0 {
fmt.Printf("\n%s\n", separator)
}
fmt.Println()
fmt.Printf("Entry: %s\n\n", entryID)
events, err := inspector.ListSchedulerEnqueueEvents(
entryID, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Printf("error: %v\n", err)
continue
}
if len(events) == 0 {
fmt.Printf("No scheduler enqueue events found for entry: %s\n", entryID)
continue
}
cols := []string{"TaskID", "EnqueuedAt"}
printRows := func(w io.Writer, tmpl string) {
for _, e := range events {
fmt.Fprintf(w, tmpl, e.TaskID, e.EnqueuedAt)
}
}
printTable(cols, printRows)
}
}

tools/asynq/cmd/dash.go Normal file
View File

@ -0,0 +1,45 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/hibiken/asynq/tools/asynq/cmd/dash"
"github.com/spf13/cobra"
)
var (
flagPollInterval = 8 * time.Second
)
func init() {
rootCmd.AddCommand(dashCmd)
dashCmd.Flags().DurationVar(&flagPollInterval, "refresh", 8*time.Second, "Interval between data refresh (default: 8s, min allowed: 1s)")
}
var dashCmd = &cobra.Command{
Use: "dash",
Short: "View dashboard",
Long: heredoc.Doc(`
Display interactive dashboard.`),
Args: cobra.NoArgs,
Example: heredoc.Doc(`
$ asynq dash
$ asynq dash --refresh=3s`),
Run: func(cmd *cobra.Command, args []string) {
if flagPollInterval < 1*time.Second {
fmt.Println("error: --refresh cannot be less than 1s")
os.Exit(1)
}
dash.Run(dash.Options{
PollInterval: flagPollInterval,
RedisConnOpt: getRedisConnOpt(),
})
},
}

View File

@ -0,0 +1,220 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"errors"
"fmt"
"os"
"strings"
"time"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
// viewType is an enum for dashboard views.
type viewType int
const (
viewTypeQueues viewType = iota
viewTypeQueueDetails
viewTypeHelp
)
// State holds dashboard state.
type State struct {
queues []*asynq.QueueInfo
tasks []*asynq.TaskInfo
groups []*asynq.GroupInfo
err error
// Note: index zero corresponds to the table header; index=1 corresponds to the first element
queueTableRowIdx int // highlighted row in queue table
taskTableRowIdx int // highlighted row in task table
groupTableRowIdx int // highlighted row in group table
taskState asynq.TaskState // highlighted task state in queue details view
taskID string // selected task ID
selectedQueue *asynq.QueueInfo // queue shown on queue details view
selectedGroup *asynq.GroupInfo
selectedTask *asynq.TaskInfo
pageNum int // pagination page number
view viewType // current view type
prevView viewType // to support "go back"
}
func (s *State) DebugString() string {
var b strings.Builder
b.WriteString(fmt.Sprintf("len(queues)=%d ", len(s.queues)))
b.WriteString(fmt.Sprintf("len(tasks)=%d ", len(s.tasks)))
b.WriteString(fmt.Sprintf("len(groups)=%d ", len(s.groups)))
b.WriteString(fmt.Sprintf("err=%v ", s.err))
if s.taskState != 0 {
b.WriteString(fmt.Sprintf("taskState=%s ", s.taskState.String()))
} else {
b.WriteString(fmt.Sprintf("taskState=0"))
}
b.WriteString(fmt.Sprintf("taskID=%s ", s.taskID))
b.WriteString(fmt.Sprintf("queueTableRowIdx=%d ", s.queueTableRowIdx))
b.WriteString(fmt.Sprintf("taskTableRowIdx=%d ", s.taskTableRowIdx))
b.WriteString(fmt.Sprintf("groupTableRowIdx=%d ", s.groupTableRowIdx))
if s.selectedQueue != nil {
b.WriteString(fmt.Sprintf("selectedQueue={Queue:%s} ", s.selectedQueue.Queue))
} else {
b.WriteString("selectedQueue=nil ")
}
if s.selectedGroup != nil {
b.WriteString(fmt.Sprintf("selectedGroup={Group:%s} ", s.selectedGroup.Group))
} else {
b.WriteString("selectedGroup=nil ")
}
if s.selectedTask != nil {
b.WriteString(fmt.Sprintf("selectedTask={ID:%s} ", s.selectedTask.ID))
} else {
b.WriteString("selectedTask=nil ")
}
b.WriteString(fmt.Sprintf("pageNum=%d", s.pageNum))
return b.String()
}
type Options struct {
DebugMode bool
PollInterval time.Duration
RedisConnOpt asynq.RedisConnOpt
}
func Run(opts Options) {
s, err := tcell.NewScreen()
if err != nil {
fmt.Printf("failed to create a screen: %v\n", err)
os.Exit(1)
}
if err := s.Init(); err != nil {
fmt.Printf("failed to initialize screen: %v\n", err)
os.Exit(1)
}
s.SetStyle(baseStyle) // set default text style
var (
state = State{} // confined in this goroutine only; DO NOT SHARE
inspector = asynq.NewInspector(opts.RedisConnOpt)
ticker = time.NewTicker(opts.PollInterval)
eventCh = make(chan tcell.Event)
done = make(chan struct{})
// channels to send/receive data fetched asynchronously
errorCh = make(chan error)
queueCh = make(chan *asynq.QueueInfo)
taskCh = make(chan *asynq.TaskInfo)
queuesCh = make(chan []*asynq.QueueInfo)
groupsCh = make(chan []*asynq.GroupInfo)
tasksCh = make(chan []*asynq.TaskInfo)
)
defer ticker.Stop()
f := dataFetcher{
inspector,
opts,
s,
errorCh,
queueCh,
taskCh,
queuesCh,
groupsCh,
tasksCh,
}
d := dashDrawer{
s,
opts,
}
h := keyEventHandler{
s: s,
fetcher: &f,
drawer: &d,
state: &state,
done: done,
ticker: ticker,
pollInterval: opts.PollInterval,
}
go fetchQueues(inspector, queuesCh, errorCh, opts)
go s.ChannelEvents(eventCh, done) // TODO: Double check that we are not leaking goroutine with this one.
d.Draw(&state) // draw initial screen
for {
// Update screen
s.Show()
select {
case ev := <-eventCh:
// Process event
switch ev := ev.(type) {
case *tcell.EventResize:
s.Sync()
case *tcell.EventKey:
h.HandleKeyEvent(ev)
}
case <-ticker.C:
f.Fetch(&state)
case queues := <-queuesCh:
state.queues = queues
state.err = nil
if len(queues) < state.queueTableRowIdx {
state.queueTableRowIdx = len(queues)
}
d.Draw(&state)
case q := <-queueCh:
state.selectedQueue = q
state.err = nil
d.Draw(&state)
case groups := <-groupsCh:
state.groups = groups
state.err = nil
if len(groups) < state.groupTableRowIdx {
state.groupTableRowIdx = len(groups)
}
d.Draw(&state)
case tasks := <-tasksCh:
state.tasks = tasks
state.err = nil
if len(tasks) < state.taskTableRowIdx {
state.taskTableRowIdx = len(tasks)
}
d.Draw(&state)
case t := <-taskCh:
state.selectedTask = t
state.err = nil
d.Draw(&state)
case err := <-errorCh:
if errors.Is(err, asynq.ErrTaskNotFound) {
state.selectedTask = nil
} else {
state.err = err
}
d.Draw(&state)
}
}
}

View File

@ -0,0 +1,724 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"fmt"
"math"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
"github.com/mattn/go-runewidth"
)
var (
baseStyle = tcell.StyleDefault.Background(tcell.ColorReset).Foreground(tcell.ColorReset)
labelStyle = baseStyle.Foreground(tcell.ColorLightGray)
// styles for bar graph
activeStyle = baseStyle.Foreground(tcell.ColorBlue)
pendingStyle = baseStyle.Foreground(tcell.ColorGreen)
aggregatingStyle = baseStyle.Foreground(tcell.ColorLightGreen)
scheduledStyle = baseStyle.Foreground(tcell.ColorYellow)
retryStyle = baseStyle.Foreground(tcell.ColorPink)
archivedStyle = baseStyle.Foreground(tcell.ColorPurple)
completedStyle = baseStyle.Foreground(tcell.ColorDarkGreen)
)
// drawer draws UI with the given state.
type drawer interface {
Draw(state *State)
}
type dashDrawer struct {
s tcell.Screen
opts Options
}
func (dd *dashDrawer) Draw(state *State) {
s, opts := dd.s, dd.opts
s.Clear()
// Simulate data update on every render
d := NewScreenDrawer(s)
switch state.view {
case viewTypeQueues:
d.Println("=== Queues ===", baseStyle.Bold(true))
d.NL()
drawQueueSizeGraphs(d, state)
d.NL()
drawQueueTable(d, baseStyle, state)
case viewTypeQueueDetails:
d.Println("=== Queue Summary ===", baseStyle.Bold(true))
d.NL()
drawQueueSummary(d, state)
d.NL()
d.NL()
d.Println("=== Tasks ===", baseStyle.Bold(true))
d.NL()
drawTaskStateBreakdown(d, baseStyle, state)
d.NL()
drawTaskTable(d, state)
drawTaskModal(d, state)
case viewTypeHelp:
drawHelp(d)
}
d.GoToBottom()
if opts.DebugMode {
drawDebugInfo(d, state)
} else {
drawFooter(d, state)
}
}
func drawQueueSizeGraphs(d *ScreenDrawer, state *State) {
var qnames []string
var qsizes []string // queue size in strings
maxSize := 1 // not zero to avoid division by zero
for _, q := range state.queues {
qnames = append(qnames, q.Queue)
qsizes = append(qsizes, strconv.Itoa(q.Size))
if q.Size > maxSize {
maxSize = q.Size
}
}
qnameWidth := maxwidth(qnames)
qsizeWidth := maxwidth(qsizes)
// Calculate the multiplier to scale the graph
screenWidth, _ := d.Screen().Size()
graphMaxWidth := screenWidth - (qnameWidth + qsizeWidth + 3) // <qname> |<graph> <size>
multiplier := 1.0
if graphMaxWidth < maxSize {
multiplier = float64(graphMaxWidth) / float64(maxSize)
}
const tick = '▇'
for _, q := range state.queues {
d.Print(q.Queue, baseStyle)
d.Print(strings.Repeat(" ", qnameWidth-runewidth.StringWidth(q.Queue)+1), baseStyle) // padding between qname and graph
d.Print("|", baseStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Active)*multiplier))), activeStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Pending)*multiplier))), pendingStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Aggregating)*multiplier))), aggregatingStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Scheduled)*multiplier))), scheduledStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Retry)*multiplier))), retryStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Archived)*multiplier))), archivedStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Completed)*multiplier))), completedStyle)
d.Print(fmt.Sprintf(" %d", q.Size), baseStyle)
d.NL()
}
d.NL()
d.Print("active=", baseStyle)
d.Print(string(tick), activeStyle)
d.Print(" pending=", baseStyle)
d.Print(string(tick), pendingStyle)
d.Print(" aggregating=", baseStyle)
d.Print(string(tick), aggregatingStyle)
d.Print(" scheduled=", baseStyle)
d.Print(string(tick), scheduledStyle)
d.Print(" retry=", baseStyle)
d.Print(string(tick), retryStyle)
d.Print(" archived=", baseStyle)
d.Print(string(tick), archivedStyle)
d.Print(" completed=", baseStyle)
d.Print(string(tick), completedStyle)
d.NL()
}
func drawFooter(d *ScreenDrawer, state *State) {
if state.err != nil {
style := baseStyle.Background(tcell.ColorDarkRed)
d.Print(state.err.Error(), style)
d.FillLine(' ', style)
return
}
style := baseStyle.Background(tcell.ColorDarkSlateGray).Foreground(tcell.ColorWhite)
switch state.view {
case viewTypeHelp:
d.Print("<Esc>: GoBack", style)
default:
d.Print("<?>: Help <Ctrl+C>: Exit ", style)
}
d.FillLine(' ', style)
}
// returns the maximum width from the given list of names
func maxwidth(names []string) int {
max := 0
for _, s := range names {
if w := runewidth.StringWidth(s); w > max {
max = w
}
}
return max
}
// rpad adds padding to the right of a string.
func rpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%-%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
// lpad adds padding to the left of a string.
func lpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
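// e.g., rpad("abc", 5) == "abc   " and lpad("abc", 5) == "  abc "; note that
// each template appends one trailing space, which acts as a column separator.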
// byteCount converts the given bytes into human readable string
func byteCount(b int64) string {
const unit = 1000
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "kMGTPE"[exp])
}
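// Illustrative values, following the decimal (SI) units used above:
// byteCount(999) == "999 B"
// byteCount(1500) == "1.5 kB"
// byteCount(1234567890) == "1.2 GB"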
var queueColumnConfigs = []*columnConfig[*asynq.QueueInfo]{
{"Queue", alignLeft, func(q *asynq.QueueInfo) string { return q.Queue }},
{"State", alignLeft, func(q *asynq.QueueInfo) string { return formatQueueState(q) }},
{"Size", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Size) }},
{"Latency", alignRight, func(q *asynq.QueueInfo) string { return q.Latency.Round(time.Second).String() }},
{"MemoryUsage", alignRight, func(q *asynq.QueueInfo) string { return byteCount(q.MemoryUsage) }},
{"Processed", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Processed) }},
{"Failed", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Failed) }},
{"ErrorRate", alignRight, func(q *asynq.QueueInfo) string { return formatErrorRate(q.Processed, q.Failed) }},
}
func formatQueueState(q *asynq.QueueInfo) string {
if q.Paused {
return "PAUSED"
}
return "RUN"
}
func formatErrorRate(processed, failed int) string {
if processed == 0 {
return "-"
}
return fmt.Sprintf("%.2f", float64(failed)/float64(processed))
}
func formatNextProcessTime(t time.Time) string {
now := time.Now()
if t.Before(now) {
return "now"
}
return fmt.Sprintf("in %v", (t.Sub(now).Round(time.Second)))
}
func formatPastTime(t time.Time) string {
now := time.Now()
if t.After(now) || t.Equal(now) {
return "just now"
}
return fmt.Sprintf("%v ago", time.Since(t).Round(time.Second))
}
func drawQueueTable(d *ScreenDrawer, style tcell.Style, state *State) {
drawTable(d, style, queueColumnConfigs, state.queues, state.queueTableRowIdx-1)
}
func drawQueueSummary(d *ScreenDrawer, state *State) {
q := state.selectedQueue
if q == nil {
d.Println("ERROR: Press q to go back", baseStyle)
return
}
d.Print("Name ", labelStyle)
d.Println(q.Queue, baseStyle)
d.Print("Size ", labelStyle)
d.Println(strconv.Itoa(q.Size), baseStyle)
d.Print("Latency ", labelStyle)
d.Println(q.Latency.Round(time.Second).String(), baseStyle)
d.Print("MemUsage ", labelStyle)
d.Println(byteCount(q.MemoryUsage), baseStyle)
}
// Returns the max number of groups that can be displayed.
func groupPageSize(s tcell.Screen) int {
_, h := s.Size()
return h - 16 // height - (# of rows used)
}
// Returns the number of tasks to fetch.
func taskPageSize(s tcell.Screen) int {
_, h := s.Size()
return h - 15 // height - (# of rows used)
}
func shouldShowGroupTable(state *State) bool {
return state.taskState == asynq.TaskStateAggregating && state.selectedGroup == nil
}
func getTaskTableColumnConfig(taskState asynq.TaskState) []*columnConfig[*asynq.TaskInfo] {
switch taskState {
case asynq.TaskStateActive:
return activeTaskTableColumns
case asynq.TaskStatePending:
return pendingTaskTableColumns
case asynq.TaskStateAggregating:
return aggregatingTaskTableColumns
case asynq.TaskStateScheduled:
return scheduledTaskTableColumns
case asynq.TaskStateRetry:
return retryTaskTableColumns
case asynq.TaskStateArchived:
return archivedTaskTableColumns
case asynq.TaskStateCompleted:
return completedTaskTableColumns
}
panic("unknown task state")
}
var activeTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retried", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.Retried) }},
{"Max Retry", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.MaxRetry) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var pendingTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retried", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.Retried) }},
{"Max Retry", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.MaxRetry) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var aggregatingTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
{"Group", alignLeft, func(t *asynq.TaskInfo) string { return t.Group }},
}
var scheduledTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Next Process Time", alignLeft, func(t *asynq.TaskInfo) string {
return formatNextProcessTime(t.NextProcessAt)
}},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var retryTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retry", alignRight, func(t *asynq.TaskInfo) string { return fmt.Sprintf("%d/%d", t.Retried, t.MaxRetry) }},
{"Last Failure", alignLeft, func(t *asynq.TaskInfo) string { return t.LastErr }},
{"Last Failure Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.LastFailedAt) }},
{"Next Process Time", alignLeft, func(t *asynq.TaskInfo) string {
return formatNextProcessTime(t.NextProcessAt)
}},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var archivedTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retry", alignRight, func(t *asynq.TaskInfo) string { return fmt.Sprintf("%d/%d", t.Retried, t.MaxRetry) }},
{"Last Failure", alignLeft, func(t *asynq.TaskInfo) string { return t.LastErr }},
{"Last Failure Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.LastFailedAt) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var completedTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Completion Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.CompletedAt) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
{"Result", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Result) }},
}
func drawTaskTable(d *ScreenDrawer, state *State) {
if shouldShowGroupTable(state) {
drawGroupTable(d, state)
return
}
if len(state.tasks) == 0 {
return // print nothing
}
drawTable(d, baseStyle, getTaskTableColumnConfig(state.taskState), state.tasks, state.taskTableRowIdx-1)
// Pagination
pageSize := taskPageSize(d.Screen())
totalCount := getTaskCount(state.selectedQueue, state.taskState)
if state.taskState == asynq.TaskStateAggregating {
// aggregating tasks are scoped to each group when shown in the table.
totalCount = state.selectedGroup.Size
}
if pageSize < totalCount {
start := (state.pageNum-1)*pageSize + 1
end := start + len(state.tasks) - 1
paginationStyle := baseStyle.Foreground(tcell.ColorLightGray)
d.Print(fmt.Sprintf("Showing %d-%d out of %d", start, end, totalCount), paginationStyle)
if isNextTaskPageAvailable(d.Screen(), state) {
d.Print(" n=NextPage", paginationStyle)
}
if state.pageNum > 1 {
d.Print(" p=PrevPage", paginationStyle)
}
d.FillLine(' ', paginationStyle)
}
}
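// Pagination example: with a task page size of 20, pageNum 2, and 20 tasks in
// state.tasks, the footer reads "Showing 21-40 out of <total>".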
func isNextTaskPageAvailable(s tcell.Screen, state *State) bool {
totalCount := getTaskCount(state.selectedQueue, state.taskState)
end := (state.pageNum-1)*taskPageSize(s) + len(state.tasks)
return end < totalCount
}
func drawGroupTable(d *ScreenDrawer, state *State) {
if len(state.groups) == 0 {
return // print nothing
}
d.Println("<<< Select group >>>", baseStyle)
colConfigs := []*columnConfig[*asynq.GroupInfo]{
{"Name", alignLeft, func(g *asynq.GroupInfo) string { return g.Group }},
{"Size", alignRight, func(g *asynq.GroupInfo) string { return strconv.Itoa(g.Size) }},
}
// pagination
pageSize := groupPageSize(d.Screen())
total := len(state.groups)
start := (state.pageNum - 1) * pageSize
end := min(start+pageSize, total)
drawTable(d, baseStyle, colConfigs, state.groups[start:end], state.groupTableRowIdx-1)
if pageSize < total {
d.Print(fmt.Sprintf("Showing %d-%d out of %d", start+1, end, total), labelStyle)
if end < total {
d.Print(" n=NextPage", labelStyle)
}
if start > 0 {
d.Print(" p=PrevPage", labelStyle)
}
}
d.FillLine(' ', labelStyle)
}
type number interface {
int | int64 | float64
}
// min returns the smaller of x and y. If x == y, it returns x.
func min[V number](x, y V) V {
if x > y {
return y
}
return x
}
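// Note: Go 1.21 later added a built-in min, so on newer toolchains this local
// generic version could be dropped.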
// Define the order of states to show
var taskStates = []asynq.TaskState{
asynq.TaskStateActive,
asynq.TaskStatePending,
asynq.TaskStateAggregating,
asynq.TaskStateScheduled,
asynq.TaskStateRetry,
asynq.TaskStateArchived,
asynq.TaskStateCompleted,
}
func nextTaskState(current asynq.TaskState) asynq.TaskState {
for i, ts := range taskStates {
if current == ts {
if i == len(taskStates)-1 {
return taskStates[0]
} else {
return taskStates[i+1]
}
}
}
panic("unknown task state")
}
func prevTaskState(current asynq.TaskState) asynq.TaskState {
for i, ts := range taskStates {
if current == ts {
if i == 0 {
return taskStates[len(taskStates)-1]
} else {
return taskStates[i-1]
}
}
}
panic("unknown task state")
}
func getTaskCount(queue *asynq.QueueInfo, taskState asynq.TaskState) int {
switch taskState {
case asynq.TaskStateActive:
return queue.Active
case asynq.TaskStatePending:
return queue.Pending
case asynq.TaskStateAggregating:
return queue.Aggregating
case asynq.TaskStateScheduled:
return queue.Scheduled
case asynq.TaskStateRetry:
return queue.Retry
case asynq.TaskStateArchived:
return queue.Archived
case asynq.TaskStateCompleted:
return queue.Completed
}
panic("unkonwn task state")
}
func drawTaskStateBreakdown(d *ScreenDrawer, style tcell.Style, state *State) {
const pad = " " // padding between states
for _, ts := range taskStates {
s := style
if state.taskState == ts {
s = s.Bold(true).Underline(true)
}
d.Print(fmt.Sprintf("%s:%d", strings.Title(ts.String()), getTaskCount(state.selectedQueue, ts)), s)
d.Print(pad, style)
}
d.NL()
}
func drawTaskModal(d *ScreenDrawer, state *State) {
if state.taskID == "" {
return
}
task := state.selectedTask
if task == nil {
// task no longer found
fns := []func(d *modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Task Info ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print(fmt.Sprintf("Task %q no longer exists", state.taskID), baseStyle)
},
}
withModal(d, fns)
return
}
fns := []func(d *modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Task Info ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print("ID: ", labelStyle)
d.Print(task.ID, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Type: ", labelStyle)
d.Print(task.Type, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("State: ", labelStyle)
d.Print(task.State.String(), baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Queue: ", labelStyle)
d.Print(task.Queue, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Retry: ", labelStyle)
d.Print(fmt.Sprintf("%d/%d", task.Retried, task.MaxRetry), baseStyle)
},
}
if task.LastErr != "" {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Last Failure: ", labelStyle)
d.Print(task.LastErr, baseStyle)
})
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Last Failure Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.LastFailedAt, formatPastTime(task.LastFailedAt)), baseStyle)
})
}
if !task.NextProcessAt.IsZero() {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Next Process Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.NextProcessAt, formatNextProcessTime(task.NextProcessAt)), baseStyle)
})
}
if !task.CompletedAt.IsZero() {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Completion Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.CompletedAt, formatPastTime(task.CompletedAt)), baseStyle)
})
}
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Payload: ", labelStyle)
d.Print(formatByteSlice(task.Payload), baseStyle)
})
if task.Result != nil {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Result: ", labelStyle)
d.Print(formatByteSlice(task.Result), baseStyle)
})
}
withModal(d, fns)
}
// Reports whether the given byte slice is printable (i.e. human readable)
func isPrintable(data []byte) bool {
if !utf8.Valid(data) {
return false
}
isAllSpace := true
for _, r := range string(data) {
if !unicode.IsGraphic(r) {
return false
}
if !unicode.IsSpace(r) {
isAllSpace = false
}
}
return !isAllSpace
}
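// For example, []byte("hello world") is printable; []byte(" ") (all whitespace)
// and invalid UTF-8 such as []byte{0xff, 0xfe} are not. Control characters like
// '\n' also make the slice non-printable, since they are not graphic runes.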
func formatByteSlice(data []byte) string {
if data == nil {
return "<nil>"
}
if !isPrintable(data) {
return "<non-printable>"
}
return strings.ReplaceAll(string(data), "\n", " ")
}
type modalRowDrawer struct {
d *ScreenDrawer
width int // current width occupied by content
maxWidth int
}
// Note: s should not include a newline.
func (d *modalRowDrawer) Print(s string, style tcell.Style) {
if d.width >= d.maxWidth {
return // no longer write to this row
}
if d.width+runewidth.StringWidth(s) > d.maxWidth {
s = truncate(s, d.maxWidth-d.width)
}
d.d.Print(s, style)
}
// withModal draws a modal with the given functions row by row.
func withModal(d *ScreenDrawer, rowPrintFns []func(d *modalRowDrawer)) {
w, h := d.Screen().Size()
var (
modalWidth = int(math.Floor(float64(w) * 0.6))
modalHeight = int(math.Floor(float64(h) * 0.6))
rowOffset = int(math.Floor(float64(h) * 0.2)) // 20% from the top
colOffset = int(math.Floor(float64(w) * 0.2)) // 20% from the left
)
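// e.g., on a 100x40 screen: modalWidth = 60, modalHeight = 24, and the modal's
// top-left corner lands at column 20, row 8.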
if modalHeight < 3 {
return // no content can be shown
}
d.Goto(colOffset, rowOffset)
d.Print(string(tcell.RuneULCorner), baseStyle)
d.Print(strings.Repeat(string(tcell.RuneHLine), modalWidth-2), baseStyle)
d.Print(string(tcell.RuneURCorner), baseStyle)
d.NL()
rowDrawer := modalRowDrawer{
d: d,
width: 0,
maxWidth: modalWidth - 4, /* borders + paddings */
}
for i := 1; i < modalHeight-1; i++ {
d.Goto(colOffset, rowOffset+i)
d.Print(fmt.Sprintf("%c ", tcell.RuneVLine), baseStyle)
if i <= len(rowPrintFns) {
rowPrintFns[i-1](&rowDrawer)
}
d.FillUntil(' ', baseStyle, colOffset+modalWidth-2)
d.Print(fmt.Sprintf(" %c", tcell.RuneVLine), baseStyle)
d.NL()
}
d.Goto(colOffset, rowOffset+modalHeight-1)
d.Print(string(tcell.RuneLLCorner), baseStyle)
d.Print(strings.Repeat(string(tcell.RuneHLine), modalWidth-2), baseStyle)
d.Print(string(tcell.RuneLRCorner), baseStyle)
d.NL()
}
func adjustWidth(s string, width int) string {
sw := runewidth.StringWidth(s)
if sw > width {
return truncate(s, width)
}
var b strings.Builder
b.WriteString(s)
b.WriteString(strings.Repeat(" ", width-sw))
return b.String()
}
// truncates s if s exceeds max length.
func truncate(s string, max int) string {
if runewidth.StringWidth(s) <= max {
return s
}
return string([]rune(s)[:max-1]) + "…"
}
func drawDebugInfo(d *ScreenDrawer, state *State) {
d.Println(state.DebugString(), baseStyle)
}
func drawHelp(d *ScreenDrawer) {
keyStyle := labelStyle.Bold(true)
withModal(d, []func(*modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Help ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print("<Enter>", keyStyle)
d.Print(" to select", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<Esc>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<q>", keyStyle)
d.Print(" to go back", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<UpArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<k>", keyStyle)
d.Print(" to move up", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<DownArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<j>", keyStyle)
d.Print(" to move down", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<LeftArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<h>", keyStyle)
d.Print(" to move left", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<RightArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<l>", keyStyle)
d.Print(" to move right", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<Ctrl+C>", keyStyle)
d.Print(" to quit", baseStyle)
},
})
}


@@ -0,0 +1,33 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import "testing"
func TestTruncate(t *testing.T) {
tests := []struct {
s string
max int
want string
}{
{
s: "hello world!",
max: 15,
want: "hello world!",
},
{
s: "hello world!",
max: 6,
want: "hello…",
},
}
for _, tc := range tests {
got := truncate(tc.s, tc.max)
if tc.want != got {
t.Errorf("truncate(%q, %d) = %q, want %q", tc.s, tc.max, got, tc.want)
}
}
}


@@ -0,0 +1,185 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"sort"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
type fetcher interface {
// Fetch retrieves the data required by the given state of the dashboard.
Fetch(state *State)
}
type dataFetcher struct {
inspector *asynq.Inspector
opts Options
s tcell.Screen
errorCh chan<- error
queueCh chan<- *asynq.QueueInfo
taskCh chan<- *asynq.TaskInfo
queuesCh chan<- []*asynq.QueueInfo
groupsCh chan<- []*asynq.GroupInfo
tasksCh chan<- []*asynq.TaskInfo
}
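// Each fetch* method below spawns a goroutine that writes its result to one of
// the typed channels above (or to errorCh on failure); the dashboard's event
// loop, which is not part of this diff, is assumed to receive from these
// channels and update State before the next draw.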
func (f *dataFetcher) Fetch(state *State) {
switch state.view {
case viewTypeQueues:
f.fetchQueues()
case viewTypeQueueDetails:
if shouldShowGroupTable(state) {
f.fetchGroups(state.selectedQueue.Queue)
} else if state.taskState == asynq.TaskStateAggregating {
f.fetchAggregatingTasks(state.selectedQueue.Queue, state.selectedGroup.Group, taskPageSize(f.s), state.pageNum)
} else {
f.fetchTasks(state.selectedQueue.Queue, state.taskState, taskPageSize(f.s), state.pageNum)
}
// if the task modal is open, additionally fetch the selected task's info
if state.taskID != "" {
f.fetchTaskInfo(state.selectedQueue.Queue, state.taskID)
}
}
}
func (f *dataFetcher) fetchQueues() {
var (
inspector = f.inspector
queuesCh = f.queuesCh
errorCh = f.errorCh
opts = f.opts
)
go fetchQueues(inspector, queuesCh, errorCh, opts)
}
func fetchQueues(i *asynq.Inspector, queuesCh chan<- []*asynq.QueueInfo, errorCh chan<- error, opts Options) {
queues, err := i.Queues()
if err != nil {
errorCh <- err
return
}
sort.Strings(queues)
var res []*asynq.QueueInfo
for _, q := range queues {
info, err := i.GetQueueInfo(q)
if err != nil {
errorCh <- err
return
}
res = append(res, info)
}
queuesCh <- res
}
func fetchQueueInfo(i *asynq.Inspector, qname string, queueCh chan<- *asynq.QueueInfo, errorCh chan<- error) {
q, err := i.GetQueueInfo(qname)
if err != nil {
errorCh <- err
return
}
queueCh <- q
}
func (f *dataFetcher) fetchGroups(qname string) {
var (
i = f.inspector
groupsCh = f.groupsCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchGroups(i, qname, groupsCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchGroups(i *asynq.Inspector, qname string, groupsCh chan<- []*asynq.GroupInfo, errorCh chan<- error) {
groups, err := i.Groups(qname)
if err != nil {
errorCh <- err
return
}
groupsCh <- groups
}
func (f *dataFetcher) fetchAggregatingTasks(qname, group string, pageSize, pageNum int) {
var (
i = f.inspector
tasksCh = f.tasksCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchAggregatingTasks(i, qname, group, pageSize, pageNum, tasksCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchAggregatingTasks(i *asynq.Inspector, qname, group string, pageSize, pageNum int,
tasksCh chan<- []*asynq.TaskInfo, errorCh chan<- error) {
tasks, err := i.ListAggregatingTasks(qname, group, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
errorCh <- err
return
}
tasksCh <- tasks
}
func (f *dataFetcher) fetchTasks(qname string, taskState asynq.TaskState, pageSize, pageNum int) {
var (
i = f.inspector
tasksCh = f.tasksCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchTasks(i, qname, taskState, pageSize, pageNum, tasksCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchTasks(i *asynq.Inspector, qname string, taskState asynq.TaskState, pageSize, pageNum int,
tasksCh chan<- []*asynq.TaskInfo, errorCh chan<- error) {
var (
tasks []*asynq.TaskInfo
err error
)
opts := []asynq.ListOption{asynq.PageSize(pageSize), asynq.Page(pageNum)}
switch taskState {
case asynq.TaskStateActive:
tasks, err = i.ListActiveTasks(qname, opts...)
case asynq.TaskStatePending:
tasks, err = i.ListPendingTasks(qname, opts...)
case asynq.TaskStateScheduled:
tasks, err = i.ListScheduledTasks(qname, opts...)
case asynq.TaskStateRetry:
tasks, err = i.ListRetryTasks(qname, opts...)
case asynq.TaskStateArchived:
tasks, err = i.ListArchivedTasks(qname, opts...)
case asynq.TaskStateCompleted:
tasks, err = i.ListCompletedTasks(qname, opts...)
}
if err != nil {
errorCh <- err
return
}
tasksCh <- tasks
}
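// Note: asynq.TaskStateAggregating is absent from the switch above because
// listing aggregating tasks requires a group name; see fetchAggregatingTasks.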
func (f *dataFetcher) fetchTaskInfo(qname, taskID string) {
var (
i = f.inspector
taskCh = f.taskCh
errorCh = f.errorCh
)
go fetchTaskInfo(i, qname, taskID, taskCh, errorCh)
}
func fetchTaskInfo(i *asynq.Inspector, qname, taskID string, taskCh chan<- *asynq.TaskInfo, errorCh chan<- error) {
info, err := i.GetTaskInfo(qname, taskID)
if err != nil {
errorCh <- err
return
}
taskCh <- info
}


@@ -0,0 +1,317 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"os"
"time"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
// keyEventHandler handles keyboard events and updates the state.
// It delegates data fetching to fetcher and UI rendering to drawer.
type keyEventHandler struct {
s tcell.Screen
state *State
done chan struct{}
fetcher fetcher
drawer drawer
ticker *time.Ticker
pollInterval time.Duration
}
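// A plausible wiring (sketch only; the surrounding event loop is not part of
// this diff): the dashboard polls tcell events and forwards key presses here.
//
// for {
// switch ev := s.PollEvent().(type) {
// case *tcell.EventKey:
// h.HandleKeyEvent(ev)
// case *tcell.EventResize:
// s.Sync()
// }
// }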
func (h *keyEventHandler) quit() {
h.s.Fini()
close(h.done)
os.Exit(0)
}
func (h *keyEventHandler) HandleKeyEvent(ev *tcell.EventKey) {
if ev.Key() == tcell.KeyEscape || ev.Rune() == 'q' {
h.goBack() // Esc and 'q' key have "go back" semantics
} else if ev.Key() == tcell.KeyCtrlC {
h.quit()
} else if ev.Key() == tcell.KeyCtrlL {
h.s.Sync()
} else if ev.Key() == tcell.KeyDown || ev.Rune() == 'j' {
h.handleDownKey()
} else if ev.Key() == tcell.KeyUp || ev.Rune() == 'k' {
h.handleUpKey()
} else if ev.Key() == tcell.KeyRight || ev.Rune() == 'l' {
h.handleRightKey()
} else if ev.Key() == tcell.KeyLeft || ev.Rune() == 'h' {
h.handleLeftKey()
} else if ev.Key() == tcell.KeyEnter {
h.handleEnterKey()
} else if ev.Rune() == '?' {
h.showHelp()
} else if ev.Rune() == 'n' {
h.nextPage()
} else if ev.Rune() == 'p' {
h.prevPage()
}
}
func (h *keyEventHandler) goBack() {
var (
state = h.state
d = h.drawer
f = h.fetcher
)
if state.view == viewTypeHelp {
state.view = state.prevView // exit help
f.Fetch(state)
h.resetTicker()
d.Draw(state)
} else if state.view == viewTypeQueueDetails {
// if task modal is open close it; otherwise go back to the previous view
if state.taskID != "" {
state.taskID = ""
state.selectedTask = nil
d.Draw(state)
} else {
state.view = viewTypeQueues
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
} else {
h.quit()
}
}
func (h *keyEventHandler) handleDownKey() {
switch h.state.view {
case viewTypeQueues:
h.downKeyQueues()
case viewTypeQueueDetails:
h.downKeyQueueDetails()
}
}
func (h *keyEventHandler) downKeyQueues() {
if h.state.queueTableRowIdx < len(h.state.queues) {
h.state.queueTableRowIdx++
} else {
h.state.queueTableRowIdx = 0 // loop back
}
h.drawer.Draw(h.state)
}
func (h *keyEventHandler) downKeyQueueDetails() {
s, state := h.s, h.state
if shouldShowGroupTable(state) {
if state.groupTableRowIdx < groupPageSize(s) {
state.groupTableRowIdx++
} else {
state.groupTableRowIdx = 0 // loop back
}
} else if state.taskID == "" {
if state.taskTableRowIdx < len(state.tasks) {
state.taskTableRowIdx++
} else {
state.taskTableRowIdx = 0 // loop back
}
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) handleUpKey() {
switch h.state.view {
case viewTypeQueues:
h.upKeyQueues()
case viewTypeQueueDetails:
h.upKeyQueueDetails()
}
}
func (h *keyEventHandler) upKeyQueues() {
state := h.state
if state.queueTableRowIdx == 0 {
state.queueTableRowIdx = len(state.queues)
} else {
state.queueTableRowIdx--
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) upKeyQueueDetails() {
s, state := h.s, h.state
if shouldShowGroupTable(state) {
if state.groupTableRowIdx == 0 {
state.groupTableRowIdx = groupPageSize(s)
} else {
state.groupTableRowIdx--
}
} else if state.taskID == "" {
if state.taskTableRowIdx == 0 {
state.taskTableRowIdx = len(state.tasks)
} else {
state.taskTableRowIdx--
}
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) handleEnterKey() {
switch h.state.view {
case viewTypeQueues:
h.enterKeyQueues()
case viewTypeQueueDetails:
h.enterKeyQueueDetails()
}
}
func (h *keyEventHandler) resetTicker() {
h.ticker.Reset(h.pollInterval)
}
func (h *keyEventHandler) enterKeyQueues() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.queueTableRowIdx != 0 {
state.selectedQueue = state.queues[state.queueTableRowIdx-1]
state.view = viewTypeQueueDetails
state.taskState = asynq.TaskStateActive
state.tasks = nil
state.pageNum = 1
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) enterKeyQueueDetails() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if shouldShowGroupTable(state) && state.groupTableRowIdx != 0 {
state.selectedGroup = state.groups[state.groupTableRowIdx-1]
state.tasks = nil
state.pageNum = 1
f.Fetch(state)
h.resetTicker()
d.Draw(state)
} else if !shouldShowGroupTable(state) && state.taskTableRowIdx != 0 {
task := state.tasks[state.taskTableRowIdx-1]
state.selectedTask = task
state.taskID = task.ID
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) handleLeftKey() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails && state.taskID == "" {
state.taskState = prevTaskState(state.taskState)
state.pageNum = 1
state.taskTableRowIdx = 0
state.tasks = nil
state.selectedGroup = nil
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) handleRightKey() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails && state.taskID == "" {
state.taskState = nextTaskState(state.taskState)
state.pageNum = 1
state.taskTableRowIdx = 0
state.tasks = nil
state.selectedGroup = nil
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) nextPage() {
var (
s = h.s
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails {
if shouldShowGroupTable(state) {
pageSize := groupPageSize(s)
total := len(state.groups)
start := (state.pageNum - 1) * pageSize
end := start + pageSize
if end <= total {
state.pageNum++
d.Draw(state)
}
} else {
pageSize := taskPageSize(s)
totalCount := getTaskCount(state.selectedQueue, state.taskState)
if (state.pageNum-1)*pageSize+len(state.tasks) < totalCount {
state.pageNum++
f.Fetch(state)
h.resetTicker()
}
}
}
}
func (h *keyEventHandler) prevPage() {
var (
s = h.s
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails {
if shouldShowGroupTable(state) {
pageSize := groupPageSize(s)
start := (state.pageNum - 1) * pageSize
if start > 0 {
state.pageNum--
d.Draw(state)
}
} else {
if state.pageNum > 1 {
state.pageNum--
f.Fetch(state)
h.resetTicker()
}
}
}
}
func (h *keyEventHandler) showHelp() {
var (
state = h.state
d = h.drawer
)
if state.view != viewTypeHelp {
state.prevView = state.view
state.view = viewTypeHelp
d.Draw(state)
}
}


@@ -0,0 +1,234 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"testing"
"time"
"github.com/gdamore/tcell/v2"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq"
)
func makeKeyEventHandler(t *testing.T, state *State) *keyEventHandler {
ticker := time.NewTicker(time.Second)
t.Cleanup(func() { ticker.Stop() })
return &keyEventHandler{
s: tcell.NewSimulationScreen("UTF-8"),
state: state,
done: make(chan struct{}),
fetcher: &fakeFetcher{},
drawer: &fakeDrawer{},
ticker: ticker,
pollInterval: time.Second,
}
}
type keyEventHandlerTest struct {
desc string // test description
state *State // initial state, to be mutated by the handler
events []*tcell.EventKey // keyboard events
wantState State // expected state after the events
}
func TestKeyEventHandler(t *testing.T) {
tests := []*keyEventHandlerTest{
{
desc: "navigates to help view",
state: &State{view: viewTypeQueues},
events: []*tcell.EventKey{tcell.NewEventKey(tcell.KeyRune, '?', tcell.ModNone)},
wantState: State{view: viewTypeHelp},
},
{
desc: "navigates to queue details view",
state: &State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
},
queueTableRowIdx: 0,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyRune, 'j', tcell.ModNone), // down
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
},
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
queueTableRowIdx: 1,
taskState: asynq.TaskStateActive,
pageNum: 1,
},
},
{
desc: "does nothing if no queues are present",
state: &State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{}, // empty
queueTableRowIdx: 0,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyRune, 'j', tcell.ModNone), // down
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{},
queueTableRowIdx: 0,
},
},
{
desc: "opens task info modal",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
// new states
taskID: "yyyy",
selectedTask: &asynq.TaskInfo{ID: "yyyy", Type: "bar"},
},
},
{
desc: "Esc closes task info modal",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyEscape, ' ', tcell.ModNone), // Esc
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "", // this field should be unset
},
},
{
desc: "Arrow keys are disabled while task info modal is open",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyLeft, ' ', tcell.ModNone),
},
// no change
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
},
// TODO: Add more tests
}
for _, tc := range tests {
t.Run(tc.desc, func(t *testing.T) {
h := makeKeyEventHandler(t, tc.state)
for _, e := range tc.events {
h.HandleKeyEvent(e)
}
if diff := cmp.Diff(tc.wantState, *tc.state, cmp.AllowUnexported(State{})); diff != "" {
t.Errorf("after state was %+v, want %+v: (-want,+got)\n%s", *tc.state, tc.wantState, diff)
}
})
}
}
/*** fake implementation for tests ***/
type fakeFetcher struct{}
func (f *fakeFetcher) Fetch(s *State) {}
type fakeDrawer struct{}
func (d *fakeDrawer) Draw(s *State) {}


@@ -0,0 +1,100 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"strings"
"github.com/gdamore/tcell/v2"
"github.com/mattn/go-runewidth"
)
/*** Screen Drawer ***/
// ScreenDrawer is used to draw contents on screen.
//
// Usage example:
// d := NewScreenDrawer(s)
// d.Println("Hello world", mystyle)
// d.NL() // adds newline
// d.Print("foo", mystyle.Bold(true))
// d.Print("bar", mystyle.Italic(true))
type ScreenDrawer struct {
l *LineDrawer
}
func NewScreenDrawer(s tcell.Screen) *ScreenDrawer {
return &ScreenDrawer{l: NewLineDrawer(0, s)}
}
func (d *ScreenDrawer) Print(s string, style tcell.Style) {
d.l.Draw(s, style)
}
func (d *ScreenDrawer) Println(s string, style tcell.Style) {
d.Print(s, style)
d.NL()
}
// FillLine prints the given rune until the end of the current line
// and adds a newline.
func (d *ScreenDrawer) FillLine(r rune, style tcell.Style) {
w, _ := d.Screen().Size()
if w-d.l.col < 0 {
d.NL()
return
}
s := strings.Repeat(string(r), w-d.l.col)
d.Print(s, style)
d.NL()
}
func (d *ScreenDrawer) FillUntil(r rune, style tcell.Style, limit int) {
if d.l.col > limit {
return // already passed the limit
}
s := strings.Repeat(string(r), limit-d.l.col)
d.Print(s, style)
}
// NL adds a newline (i.e., moves to the next line).
func (d *ScreenDrawer) NL() {
d.l.row++
d.l.col = 0
}
func (d *ScreenDrawer) Screen() tcell.Screen {
return d.l.s
}
// Goto moves the ScreenDrawer to the specified cell.
func (d *ScreenDrawer) Goto(x, y int) {
d.l.row = y
d.l.col = x
}
// GoToBottom moves the drawer to the bottom of the screen.
func (d *ScreenDrawer) GoToBottom() {
_, h := d.Screen().Size()
d.l.row = h - 1
d.l.col = 0
}
type LineDrawer struct {
s tcell.Screen
row int
col int
}
func NewLineDrawer(row int, s tcell.Screen) *LineDrawer {
return &LineDrawer{row: row, col: 0, s: s}
}
func (d *LineDrawer) Draw(s string, style tcell.Style) {
for _, r := range s {
d.s.SetContent(d.col, d.row, r, nil, style)
d.col += runewidth.RuneWidth(r)
}
}


@@ -0,0 +1,70 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"github.com/gdamore/tcell/v2"
"github.com/mattn/go-runewidth"
)
type columnAlignment int
const (
alignRight columnAlignment = iota
alignLeft
)
type columnConfig[V any] struct {
name string
alignment columnAlignment
displayFn func(v V) string
}
type column[V any] struct {
*columnConfig[V]
width int
}
// Helper to draw a table.
func drawTable[V any](d *ScreenDrawer, style tcell.Style, configs []*columnConfig[V], data []V, highlightRowIdx int) {
const colBuffer = " " // extra buffer between columns
cols := make([]*column[V], len(configs))
for i, cfg := range configs {
cols[i] = &column[V]{cfg, runewidth.StringWidth(cfg.name)}
}
// adjust the column width to accommodate the widest value.
for _, v := range data {
for _, col := range cols {
if w := runewidth.StringWidth(col.displayFn(v)); col.width < w {
col.width = w
}
}
}
// print header
headerStyle := style.Background(tcell.ColorDimGray).Foreground(tcell.ColorWhite)
for _, col := range cols {
if col.alignment == alignLeft {
d.Print(rpad(col.name, col.width)+colBuffer, headerStyle)
} else {
d.Print(lpad(col.name, col.width)+colBuffer, headerStyle)
}
}
d.FillLine(' ', headerStyle)
// print body
for i, v := range data {
rowStyle := style
if highlightRowIdx == i {
rowStyle = style.Background(tcell.ColorDarkOliveGreen)
}
for _, col := range cols {
if col.alignment == alignLeft {
d.Print(rpad(col.displayFn(v), col.width)+colBuffer, rowStyle)
} else {
d.Print(lpad(col.displayFn(v), col.width)+colBuffer, rowStyle)
}
}
d.FillLine(' ', rowStyle)
}
}
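// Example usage (a minimal sketch; the row type and values are hypothetical,
// and strconv would need to be imported):
//
// type row struct{ name string; count int }
// cols := []*columnConfig[row]{
// {"Name", alignLeft, func(r row) string { return r.name }},
// {"Count", alignRight, func(r row) string { return strconv.Itoa(r.count) }},
// }
// drawTable(d, baseStyle, cols, []row{{"foo", 1}, {"bar", 42}}, 0)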


@@ -1,73 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// delCmd represents the del command
var delCmd = &cobra.Command{
Use: "del [task id]",
Short: "Deletes a task given an identifier",
Long: `Del (asynq del) will delete a task given an identifier.
The command takes one argument which specifies the task to delete.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynq ls" command.
Example: asynq del d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: del,
}
func init() {
rootCmd.AddCommand(delCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func del(cmd *cobra.Command, args []string) {
id, score, qtype, err := parseQueryID(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":
err = r.DeleteScheduledTask(id, score)
case "r":
err = r.DeleteRetryTask(id, score)
case "d":
err = r.DeleteDeadTask(id, score)
default:
fmt.Println("invalid argument")
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully deleted %v\n", args[0])
}


@@ -1,71 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var delallValidArgs = []string{"scheduled", "retry", "dead"}
// delallCmd represents the delall command
var delallCmd = &cobra.Command{
Use: "delall [state]",
Short: "Deletes all tasks in the specified state",
Long: `Delall (asynq delall) will delete all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
Example: asynq delall dead -> Deletes all dead tasks`,
ValidArgs: delallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: delall,
}
func init() {
rootCmd.AddCommand(delallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// delallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// delallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func delall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var err error
switch args[0] {
case "scheduled":
err = r.DeleteAllScheduledTasks()
case "retry":
err = r.DeleteAllRetryTasks()
case "dead":
err = r.DeleteAllDeadTasks()
default:
fmt.Printf("error: `asynq delall [state]` only accepts %v as the argument.\n", delallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Deleted all tasks in %q state\n", args[0])
}


@@ -1,76 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// enqCmd represents the enq command
var enqCmd = &cobra.Command{
Use: "enq [task id]",
Short: "Enqueues a task given an identifier",
Long: `Enq (asynq enq) will enqueue a task given an identifier.
The command takes one argument which specifies the task to enqueue.
The task should be in either scheduled, retry or dead state.
Identifier for a task should be obtained by running "asynq ls" command.
The task enqueued by this command will be processed as soon as the task
gets dequeued by a processor.
Example: asynq enq d:1575732274:bnogo8gt6toe23vhef0g`,
Args: cobra.ExactArgs(1),
Run: enq,
}
func init() {
rootCmd.AddCommand(enqCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enq(cmd *cobra.Command, args []string) {
id, score, qtype, err := parseQueryID(args[0])
if err != nil {
fmt.Println(err)
os.Exit(1)
}
r := rdb.NewRDB(redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
}))
switch qtype {
case "s":
err = r.EnqueueScheduledTask(id, score)
case "r":
err = r.EnqueueRetryTask(id, score)
case "d":
err = r.EnqueueDeadTask(id, score)
default:
fmt.Println("invalid argument")
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Successfully enqueued %v\n", args[0])
}


@@ -1,75 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var enqallValidArgs = []string{"scheduled", "retry", "dead"}
// enqallCmd represents the enqall command
var enqallCmd = &cobra.Command{
Use: "enqall [state]",
Short: "Enqueues all tasks in the specified state",
Long: `Enqall (asynq enqall) will enqueue all tasks in the specified state.
The argument should be one of "scheduled", "retry", or "dead".
The tasks enqueued by this command will be processed as soon as it
gets dequeued by a processor.
Example: asynq enqall dead -> Enqueues all dead tasks`,
ValidArgs: enqallValidArgs,
Args: cobra.ExactValidArgs(1),
Run: enqall,
}
func init() {
rootCmd.AddCommand(enqallCmd)
// Here you will define your flags and configuration settings.
// Cobra supports Persistent Flags which will work for this command
// and all subcommands, e.g.:
// enqallCmd.PersistentFlags().String("foo", "", "A help for foo")
// Cobra supports local flags which will only run when this command
// is called directly, e.g.:
// enqallCmd.Flags().BoolP("toggle", "t", false, "Help message for toggle")
}
func enqall(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
var n int64
var err error
switch args[0] {
case "scheduled":
n, err = r.EnqueueAllScheduledTasks()
case "retry":
n, err = r.EnqueueAllRetryTasks()
case "dead":
n, err = r.EnqueueAllDeadTasks()
default:
fmt.Printf("error: `asynq enqall [state]` only accepts %v as the argument.\n", enqallValidArgs)
os.Exit(1)
}
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Enqueued %d tasks in %q state\n", n, args[0])
}

tools/asynq/cmd/group.go (new file)

@@ -0,0 +1,52 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/MakeNowJust/heredoc/v2"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(groupCmd)
groupCmd.AddCommand(groupListCmd)
groupListCmd.Flags().StringP("queue", "q", "", "queue to inspect")
groupListCmd.MarkFlagRequired("queue")
}
var groupCmd = &cobra.Command{
Use: "group <command> [flags]",
Short: "Manage groups",
Example: heredoc.Doc(`
$ asynq group list --queue=myqueue`),
}
var groupListCmd = &cobra.Command{
Use: "list",
Aliases: []string{"ls"},
Short: "List groups",
Args: cobra.NoArgs,
Run: groupLists,
}
func groupLists(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
inspector := createInspector()
groups, err := inspector.Groups(qname)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(groups) == 0 {
fmt.Printf("No groups found in queue %q\n", qname)
return
}
for _, g := range groups {
fmt.Println(g.Group)
}
}


@@ -1,71 +0,0 @@
// Copyright 2020 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"strings"
"text/tabwriter"
"github.com/go-redis/redis/v7"
"github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var days int
// historyCmd represents the history command
var historyCmd = &cobra.Command{
Use: "history",
Short: "Shows historical aggregate data",
Long: `History (asynq history) will show the number of processed and failed tasks
from the last x days.
By default, it will show the data from the last 10 days.
Example: asynq history -x=30 -> Shows stats from the last 30 days`,
Args: cobra.NoArgs,
Run: history,
}
func init() {
rootCmd.AddCommand(historyCmd)
historyCmd.Flags().IntVarP(&days, "days", "x", 10, "show data from last x days")
}
func history(cmd *cobra.Command, args []string) {
c := redis.NewClient(&redis.Options{
Addr: viper.GetString("uri"),
DB: viper.GetInt("db"),
Password: viper.GetString("password"),
})
r := rdb.NewRDB(c)
stats, err := r.HistoricalStats(days)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
printDailyStats(stats)
}
func printDailyStats(stats []*rdb.DailyStats) {
format := strings.Repeat("%v\t", 4) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, "Date (UTC)", "Processed", "Failed", "Error Rate")
fmt.Fprintf(tw, format, "----------", "---------", "------", "----------")
for _, s := range stats {
var errrate string
if s.Processed == 0 {
errrate = "N/A"
} else {
errrate = fmt.Sprintf("%.2f%%", float64(s.Failed)/float64(s.Processed)*100)
}
fmt.Fprintf(tw, format, s.Time.Format("2006-01-02"), s.Processed, s.Failed, errrate)
}
tw.Flush()
}

Some files were not shown because too many files have changed in this diff.