mirror of https://github.com/hibiken/asynq.git synced 2025-10-20 09:16:12 +08:00

Compare commits


227 Commits

Author SHA1 Message Date
Mohammed Sohail
f1e7dc4056 fix: closing the Client also closes the broker
* The error was also previously unhandled. For shared connections, the broker itself returns an error because the sharedConnection bool is also set on the client. This also means we can get rid of the sharedConnection flag on the Scheduler itself and handle it internally.
2024-12-03 10:27:50 +03:00
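For context, a minimal sketch (assumed usage, not code from this commit) of the shared-connection behavior this fix touches: with NewClientFromRedisClient the redis connection is owned by the caller, so Close is delegated to the broker, which reports an error for a shared connection instead of closing it.

package main

import (
	"log"

	"github.com/hibiken/asynq"
	"github.com/redis/go-redis/v9"
)

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	defer rdb.Close() // the caller owns and closes the shared connection

	client := asynq.NewClientFromRedisClient(rdb)
	if _, err := client.Enqueue(asynq.NewTask("email:send", nil)); err != nil {
		log.Fatal(err)
	}
	// Close now goes through the broker; for a shared connection the
	// broker itself returns the error (previously unhandled).
	if err := client.Close(); err != nil {
		log.Printf("close: %v", err)
	}
}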
Mohammed Sohail
ee17997650 fix: NewScheduler wrongly creates a client whose sharedConnection value is always true
* This is affecting the PeriodicManager as well as the Scheduler
2024-12-03 10:08:47 +03:00
dependabot[bot]
106c07adaa build(deps): bump codecov/codecov-action from 4 to 5 (#970)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 07:37:44 +03:00
dependabot[bot]
1c7195ff1a build(deps): bump google.golang.org/protobuf from 1.35.1 to 1.35.2 (#971)
Bumps google.golang.org/protobuf from 1.35.1 to 1.35.2.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 07:37:13 +03:00
Xijun Dai
12cbba4926 feat(periodic_task_manager): Add RedisUniversalClient support (#958)
Signed-off-by: Xijun Dai <daixijun1990@gmail.com>
2024-11-13 14:48:56 +03:00
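A hedged sketch of what this adds: PeriodicTaskManagerOpts accepting an existing redis.UniversalClient. The field name RedisUniversalClient below is taken from the PR title and is an assumption; check the package docs. staticProvider is a hypothetical config provider for illustration.

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
	"github.com/redis/go-redis/v9"
)

// staticProvider is a hypothetical provider returning a fixed config set.
type staticProvider struct{}

func (staticProvider) GetConfigs() ([]*asynq.PeriodicTaskConfig, error) {
	return []*asynq.PeriodicTaskConfig{
		{Cronspec: "*/5 * * * *", Task: asynq.NewTask("cleanup", nil)},
	}, nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	mgr, err := asynq.NewPeriodicTaskManager(asynq.PeriodicTaskManagerOpts{
		RedisUniversalClient:       rdb, // assumed field name, per this PR's title
		PeriodicTaskConfigProvider: staticProvider{},
		SyncInterval:               time.Minute,
	})
	if err != nil {
		log.Fatal(err)
	}
	if err := mgr.Run(); err != nil {
		log.Fatal(err)
	}
}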
Khash Sajadi
80479b528d Include registration error in the log (#657)
* Include registration error in the log

* remove chatty debug log

this would show up in the logs every 5 seconds at debug level (not even trace), which leads to a lot of noise
2024-11-13 14:09:59 +03:00
dependabot[bot]
e14c312fe3 build(deps): bump golang.org/x/sys from 0.26.0 to 0.27.0 (#963)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.26.0 to 0.27.0.
- [Commits](https://github.com/golang/sys/compare/v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:38:50 +03:00
dependabot[bot]
ad1f587403 build(deps): bump golang.org/x/time from 0.7.0 to 0.8.0 (#964)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.7.0 to 0.8.0.
- [Commits](https://github.com/golang/time/compare/v0.7.0...v0.8.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:13:01 +03:00
dependabot[bot]
8b32b38fd5 build(deps): bump github.com/mattn/go-runewidth in /tools (#965)
Bumps [github.com/mattn/go-runewidth](https://github.com/mattn/go-runewidth) from 0.0.13 to 0.0.16.
- [Commits](https://github.com/mattn/go-runewidth/compare/v0.0.13...v0.0.16)

---
updated-dependencies:
- dependency-name: github.com/mattn/go-runewidth
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:12:43 +03:00
dependabot[bot]
96a84fac0c build(deps): bump github.com/hibiken/asynq in /tools (#966)
Bumps [github.com/hibiken/asynq](https://github.com/hibiken/asynq) from 0.24.1 to 0.25.0.
- [Release notes](https://github.com/hibiken/asynq/releases)
- [Changelog](https://github.com/hibiken/asynq/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hibiken/asynq/compare/v0.24.1...v0.25.0)

---
updated-dependencies:
- dependency-name: github.com/hibiken/asynq
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-12 08:11:52 +03:00
ghosx
d2c207fbb8 fix: queues map init with size (#673)
Co-authored-by: yipinhe <yipinhe@tencent.com>
2024-11-11 08:25:42 +03:00
Pior Bastida
1a7c61ac49 Use string concat instead of fmt.Sprintf (#962) 2024-11-11 08:20:16 +03:00
dependabot[bot]
87375b5534 Bump codecov/codecov-action from 1 to 4 (#930)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:18 +03:00
dependabot[bot]
ffd75ebb5f Bump actions/upload-artifact from 3 to 4 (#929)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:10 +03:00
dependabot[bot]
d64fd328cb Bump actions/download-artifact from 3 to 4 (#931)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-08 08:23:00 +03:00
Pior Bastida
4f00f52c1d Add the scheduler option HeartbeatInterval (#956)
* Add the scheduler option HeartbeatInterval

* Fix possible premature expiration of scheduler entries
2024-11-07 08:34:28 +03:00
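A minimal sketch of the new SchedulerOpts field (the interval value is an arbitrary example):

package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		&asynq.SchedulerOpts{
			HeartbeatInterval: 30 * time.Second, // option added by this PR; example value
		},
	)
	if _, err := scheduler.Register("@every 1m", asynq.NewTask("sync", nil)); err != nil {
		log.Fatal(err)
	}
	if err := scheduler.Run(); err != nil {
		log.Fatal(err)
	}
}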
dependabot[bot]
580d69e88f build(deps): bump github.com/redis/go-redis/v9 in /tools (#957)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.0.5 to 9.7.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.0.5...v9.7.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-05 07:59:30 +03:00
dependabot[bot]
c97652d408 build(deps): bump github.com/google/go-cmp from 0.5.9 to 0.6.0 in /tools (#954)
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.9 to 0.6.0.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.5.9...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 09:58:40 +03:00
dependabot[bot]
4644d37ef4 Bump github.com/prometheus/client_golang from 1.11.1 to 1.20.5 in /x (#934)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.1 to 1.20.5.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.1...v1.20.5)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 09:58:08 +03:00
dependabot[bot]
45c0fc6ad9 build(deps): bump github.com/hibiken/asynq from 0.24.1 to 0.25.0 in /x (#955)
Bumps [github.com/hibiken/asynq](https://github.com/hibiken/asynq) from 0.24.1 to 0.25.0.
- [Release notes](https://github.com/hibiken/asynq/releases)
- [Changelog](https://github.com/hibiken/asynq/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hibiken/asynq/compare/v0.24.1...v0.25.0)

---
updated-dependencies:
- dependency-name: github.com/hibiken/asynq
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-02 08:16:01 +03:00
Mohamed Sohail
fd3eb86d95 release: v0.25.0
* prepare release (docs): v0.25.0

* docs: add PR 946 to changelog

* docs: update issue templates, add a relatively-stable note

* This project should be considered relatively stable because we haven't broken the API in over 2 years.

* docs: add Redis Cluster compatibility caveat
2024-11-01 11:13:57 +03:00
Pior Bastida
3dbda60333 Improve performance of enqueueing tasks (#946)
* Improve performance of enqueueing tasks

Add an in-memory cache to keep track of all the queues. Use this cache
to avoid sending an SADD, since after the first call that extra network
call isn't necessary.

The cache expires every 10 seconds, so if a queue is deleted from the
asynq:queues set, it can be re-added the next time a task is enqueued to
it.

* Use sync.Map to simplify the conditional SADD

* Cleanup queuePublished in RemoveQueue

---------

Co-authored-by: Yousif <753751+yousifh@users.noreply.github.com>
2024-10-30 08:25:35 +03:00
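A minimal sketch of the pattern described above (hypothetical names, not asynq's internal code): remember which queues were already announced with SADD and skip the round trip on later enqueues, with a TTL so deleted queues get re-announced.

package rdb

import (
	"context"
	"sync"
	"time"

	"github.com/redis/go-redis/v9"
)

type queuePublisher struct {
	client    redis.UniversalClient
	published sync.Map // queue name -> time of the last SADD
	ttl       time.Duration
}

// ensurePublished sends SADD only for queues not announced recently.
func (p *queuePublisher) ensurePublished(ctx context.Context, qname string) error {
	if v, ok := p.published.Load(qname); ok && time.Since(v.(time.Time)) < p.ttl {
		return nil // already in asynq:queues; skip the extra round trip
	}
	if err := p.client.SAdd(ctx, "asynq:queues", qname).Err(); err != nil {
		return err
	}
	p.published.Store(qname, time.Now())
	return nil
}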
dependabot[bot]
02c6dae7eb Bump golang.org/x/time from 0.3.0 to 0.7.0 (#948)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.3.0 to 0.7.0.
- [Commits](https://github.com/golang/time/compare/v0.3.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 10:12:44 +03:00
dependabot[bot]
5cfcb71139 Bump github.com/spf13/cast from 1.5.1 to 1.7.0 (#938)
Bumps [github.com/spf13/cast](https://github.com/spf13/cast) from 1.5.1 to 1.7.0.
- [Release notes](https://github.com/spf13/cast/releases)
- [Commits](https://github.com/spf13/cast/compare/v1.5.1...v1.7.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:59:38 +03:00
dependabot[bot]
c78e7b0ccd Bump golang.org/x/sys from 0.16.0 to 0.26.0 (#933)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.16.0 to 0.26.0.
- [Commits](https://github.com/golang/sys/compare/v0.16.0...v0.26.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:59:20 +03:00
dependabot[bot]
b4db174032 Bump github.com/redis/go-redis/v9 from 9.4.0 to 9.7.0 (#935)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.4.0 to 9.7.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.4.0...v9.7.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:25:57 +03:00
dependabot[bot]
39f1d8c3e6 Bump github.com/fatih/color from 1.9.0 to 1.18.0 in /tools (#941)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.9.0 to 1.18.0.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.9.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 09:24:06 +03:00
Ahmed Radwan
e70de721b8 remove deprecated protobuf ptypes (#942)
* remove deprecated protobuf ptypes

* tidy compiled proto and go mod

* bump protobuf
2024-10-29 09:21:27 +03:00
Mohamed Sohail
6c06ad7e45 Revert "Bump golang.org/x/time from 0.3.0 to 0.7.0" (#947) 2024-10-29 09:20:45 +03:00
Mohamed Sohail
a676d3d2fa Merge pull request #937 from hibiken/dependabot/go_modules/golang.org/x/time-0.7.0
Bump golang.org/x/time from 0.3.0 to 0.7.0
2024-10-29 09:18:06 +03:00
Mohamed Sohail
ef0d32965f Merge pull request #945 from Shopify/fix-test-default-port
Update tests to use the configured Redis address
2024-10-29 08:45:33 +03:00
Mohamed Sohail
f16f9ac440 Merge pull request #944 from Shopify/randv2
Use math/rand/v2
2024-10-29 08:42:38 +03:00
Pior Bastida
63f7cb7b17 Use math/rand/v2 2024-10-28 18:39:54 +01:00
Pior Bastida
04b3a3475d Update tests to use the configured Redis address 2024-10-28 12:48:56 +01:00
Marcus Boorstin
013190b824 Add task enqueue command to cli (#918) 2024-10-26 13:04:54 +03:00
Skwol
1e102a5392 Need to support redis sentinel username. (#924) 2024-10-26 13:04:21 +03:00
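A minimal sketch of the new option (values are arbitrary examples): SentinelUsername complements SentinelPassword when the sentinels require ACL authentication.

package main

import "github.com/hibiken/asynq"

func main() {
	client := asynq.NewClient(asynq.RedisFailoverClientOpt{
		MasterName:       "mymaster",
		SentinelAddrs:    []string{"localhost:26379"},
		SentinelUsername: "sentinel-user", // field added by this change
		SentinelPassword: "secret",
	})
	defer client.Close()
}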
dependabot[bot]
e1a8a366a6 Bump golang.org/x/time from 0.3.0 to 0.7.0
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.3.0 to 0.7.0.
- [Commits](https://github.com/golang/time/compare/v0.3.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-26 05:49:43 +00:00
Pior Bastida
c8c8adfaa6 Configure dependabot to update Github Actions (#928) 2024-10-26 08:48:41 +03:00
Pior Bastida
03f4799712 Run golangci-lint in CI (#927)
* Setup golangci-lint in CI and local-dev

* Fix linting error or locally disable linter
2024-10-26 08:48:12 +03:00
Pior Bastida
3f4e211a3b Call all context cancelFunc in processor (#926) 2024-10-26 08:39:13 +03:00
Pior Bastida
0655c569f5 Bump to Go 1.22 and 1.23 (#925) 2024-10-26 08:33:58 +03:00
Pior Bastida
95a0768ae0 Add jitter on the processor fetch backoff sleep (#868) 2024-10-19 10:46:48 +03:00
Mohammed Sohail
f4b56498f2 docs: small fix on semantics 2024-10-19 10:07:17 +03:00
kanzihuang
ae478d5b22 feat: revoke the task to modify task parameters and enqueue new task with the same task id (#882) 2024-10-19 10:06:12 +03:00
kanzihuang
ff7ef48463 fix: possible inconsistent scores between ProcessIn and ProcessAt (#876) 2024-10-19 09:45:18 +03:00
Patrick Barnum
b1e13893ff [RFC] Adds Ping() to client/scheduler/server (#585)
* [RFC] Adds Ping() to client/scheduler/server

* Checks for scheduler state closed
2024-10-19 09:44:06 +03:00
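A minimal sketch (assumed usage) of the Client variant; Server and Scheduler gained the same method:

package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()
	// Ping reports whether the broker is reachable before we enqueue.
	if err := client.Ping(); err != nil {
		log.Fatalf("broker unreachable: %v", err)
	}
}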
Harrison Miller
0dc670d7d8 Archived tasks that are trimmed from the set are deleted (#743)
* fixed trimmed archive tasks not being deleted.

* improved test case.

* changed ZRANGEBYSCORE to ZRANGE with BYSCORE option.

---------

Co-authored-by: Harrison <harrison@Harrisons-MacBook-Pro.local>
Co-authored-by: Harrison Miller <harrison.miller@MBP-Harrison-Miller-M2.local>
2024-10-19 09:18:09 +03:00
Mohammed Sohail
461d922616 docs: apply recommended updates
* additionally, we log an error in case the redis client cannot shut down in the scheduler
2024-10-19 09:05:17 +03:00
Mohammed Sohail
5daa3c52ed Merge remote-tracking branch 'jerbob92-fork/feature/implement-reusing-redis-client' into develop 2024-10-19 08:58:39 +03:00
Tedja
d04888e748 feature: configurable janitor interval and deletion batch size (#715)
* feature: configurable janitor interval and deletion batch size

* warn the user when they set a large janitor batch size

* Update CHANGELOG.md

---------

Co-authored-by: Agung Hariadi Tedja <agung.tedja@kumparan.com>
2024-05-06 14:11:52 +08:00
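A minimal sketch of the two knobs (values are arbitrary examples): both are Server config fields, and the warning mentioned above fires for large batch sizes.

package main

import (
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		asynq.Config{
			Concurrency:      10,
			JanitorInterval:  10 * time.Second, // how often expired completed tasks are purged
			JanitorBatchSize: 100,              // deletions per janitor run
		},
	)
	_ = srv // wire up a Handler and call srv.Run as usual
}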
Trịnh Đức Bảo Linh(Kevin)
174008843d feat(*): correct panic error (#758)
* error panic handling

* updated CHANGELOG.md file

* correct msg panic error (#5)

* correct msg panic error
2024-05-06 13:46:19 +08:00
Mohamed Sohail 天命
2b632b93d5 chore: fix function names in comment (pull request #860 from camcui/master)
chore: fix function names in comment
2024-04-23 00:56:52 +08:00
camcui
b35b559d40 chore: fix function names in comment
Signed-off-by: camcui <cuishua@sina.cn>
2024-04-12 13:54:08 +08:00
Mohamed Sohail
8df0bfa583 Merge pull request #843 from mrusme/fix-bsd
Fix go:build for BSD
2024-03-15 13:32:19 +08:00
mrusme
b25d10b61d Fixed go:build for BSD 2024-03-14 20:26:33 +05:00
crazyoptimist
38f7499b71 fix(typo): delete-all to deleteall (#827)
* typo: delete-all to deleteall

* docs: update tools/asynq/README.md

* fix archiveall runall

---------

Co-authored-by: Mohamed Sohail <sohailsameja@gmail.com>
2024-02-23 09:17:12 +03:00
dependabot[bot]
0a73fc6201 Bump go.uber.org/goleak from 1.1.12 to 1.3.0 (#770)
Bumps [go.uber.org/goleak](https://github.com/uber-go/goleak) from 1.1.12 to 1.3.0.
- [Release notes](https://github.com/uber-go/goleak/releases)
- [Changelog](https://github.com/uber-go/goleak/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/goleak/compare/v1.1.12...v1.3.0)

---
updated-dependencies:
- dependency-name: go.uber.org/goleak
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:37:18 +03:00
dependabot[bot]
1a11a33b4f Bump github.com/google/uuid from 1.3.0 to 1.6.0 (#810)
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.3.0 to 1.6.0.
- [Release notes](https://github.com/google/uuid/releases)
- [Changelog](https://github.com/google/uuid/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/uuid/compare/v1.3.0...v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:35:55 +03:00
dependabot[bot]
f0888df813 Bump github.com/google/go-cmp from 0.5.9 to 0.6.0 (#767)
Bumps [github.com/google/go-cmp](https://github.com/google/go-cmp) from 0.5.9 to 0.6.0.
- [Release notes](https://github.com/google/go-cmp/releases)
- [Commits](https://github.com/google/go-cmp/compare/v0.5.9...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/google/go-cmp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:33:57 +03:00
dependabot[bot]
c2dd648a51 Bump github.com/redis/go-redis/v9 from 9.0.3 to 9.4.0 (#809)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.0.3 to 9.4.0.
- [Release notes](https://github.com/redis/go-redis/releases)
- [Changelog](https://github.com/redis/go-redis/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/go-redis/compare/v9.0.3...v9.4.0)

---
updated-dependencies:
- dependency-name: github.com/redis/go-redis/v9
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 10:33:16 +03:00
Andrii Buriachevskyi
a3cca853a0 docs: include version in CLI package installation (#802)
go install github.com/hibiken/asynq/tools/asynq@latest
2024-01-29 09:46:47 +03:00
dependabot[bot]
83df622a92 Bump golang.org/x/sys from 0.0.0-20211216021012-1d35b9e2eb4e to 0.16.0 (#807)
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.0.0-20211216021012-1d35b9e2eb4e to 0.16.0.
- [Commits](https://github.com/golang/sys/commits/v0.16.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-29 09:42:38 +03:00
krhubert
fdbf54eb04 Update docs 2023-12-10 09:49:43 -08:00
Hubert Krauze
16ec43cbca Add option to configure task check interval 2023-12-10 09:49:43 -08:00
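A minimal sketch of the option (value is an arbitrary example): TaskCheckInterval sets how long the processor waits before checking for new tasks when all queues are empty.

package main

import (
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		asynq.Config{
			TaskCheckInterval: 2 * time.Second, // option added by this change; example value
		},
	)
	_ = srv
}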
yeqown
1e0bf88bf3 fix: listLeaseExpiredCmd doesn't ignore possibly empty value of task in lua script 2023-12-10 09:47:55 -08:00
yeqown
d0041c55a3 fix(274): ignore empty data to append to msgs
fix issue 274
2023-12-10 09:47:55 -08:00
Mohammed Sohail
7ef0511f35 ci: upgrade benchstat actions, go version -> 1.21.x
* closes #759

Squashed commit of the following:

commit 3d94ee14aeaf9a868dbeed4b65f90ccdda1f08d6
Author: Mohammed Sohail <sohailsameja@gmail.com>
Date:   Thu Dec 7 11:49:07 2023 +0300

    ci: upgrade benchstat actions, go version -> 1.21.x

commit 129e225311
Author: angshumukherjee100 <angshumukherjee100@gmail.com>
Date:   Sun Oct 8 11:20:43 2023 +0530

    (workflow): bump go version to 1.18 in benchstat
2023-12-10 09:46:45 -08:00
Mohammed Sohail
1ec90810db chore: Update redis to v9 in x/go.mod (#795)
Squashed commit of the following:

commit 6e3656db222a3f9347ee4806ef065a1b9b01a214
Author: Mohammed Sohail <sohailsameja@gmail.com>
Date:   Thu Dec 7 11:12:41 2023 +0300

    pkg(x): go version update -> 1.20

commit 2931df3708
Author: Amaury <1293565+amaury1729@users.noreply.github.com>
Date:   Wed Dec 6 17:47:03 2023 +0100

    fix tests

commit 11227804cb
Author: Amaury <1293565+amaury1729@users.noreply.github.com>
Date:   Wed Dec 6 16:40:32 2023 +0100

    chore: Update redis to v9 in x/go.mod
2023-12-10 09:46:45 -08:00
Mohammed Sohail
90188a093d ci: upgrade actions, lock redis version 2023-12-10 09:46:45 -08:00
Mohammed Sohail
e05f0b7196 ci/docs: update go versions
Squashed commit of the following:

commit de18fe9839
Author: Ken Hibino <ken.hibino7@gmail.com>
Date:   Sun Sep 17 19:35:33 2023 -0700

    Update README about supported Go versions

commit 714d62bb75
Author: Ken Hibino <ken.hibino7@gmail.com>
Date:   Sun Sep 17 19:26:16 2023 -0700

    Bound build to the latest two go versions
2023-12-10 09:46:45 -08:00
Mohammed Sohail
c1096a0fae pkg: go version update -> 1.20 2023-12-10 09:46:45 -08:00
Jeroen Bobbeldijk
9e548fc097 Implement reusing redis client 2023-09-19 11:20:32 +02:00
dependabot[bot]
6a7bf2ceff Bump github.com/google/uuid from 1.3.0 to 1.3.1 in /x
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.3.0 to 1.3.1.
- [Release notes](https://github.com/google/uuid/releases)
- [Changelog](https://github.com/google/uuid/blob/master/CHANGELOG.md)
- [Commits](https://github.com/google/uuid/compare/v1.3.0...v1.3.1)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:37:24 -07:00
dependabot[bot]
e7fa0ae865 Bump google.golang.org/protobuf from 1.26.0 to 1.31.0
Bumps google.golang.org/protobuf from 1.26.0 to 1.31.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:37:01 -07:00
dependabot[bot]
fc4b6713f6 Bump github.com/spf13/cast from 1.3.1 to 1.5.1
Bumps [github.com/spf13/cast](https://github.com/spf13/cast) from 1.3.1 to 1.5.1.
- [Release notes](https://github.com/spf13/cast/releases)
- [Commits](https://github.com/spf13/cast/compare/v1.3.1...v1.5.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cast
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-08 08:36:38 -07:00
dependabot[bot]
6b98c0bbae Bump github.com/google/uuid from 1.2.0 to 1.3.0 (#699)
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/google/uuid/releases)
- [Commits](https://github.com/google/uuid/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 16:14:25 +08:00
dependabot[bot]
ed1ab8ee55 Bump github.com/golang/protobuf from 1.5.2 to 1.5.3 (#703)
Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.5.2 to 1.5.3.
- [Release notes](https://github.com/golang/protobuf/releases)
- [Commits](https://github.com/golang/protobuf/compare/v1.5.2...v1.5.3)

---
updated-dependencies:
- dependency-name: github.com/golang/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 16:14:12 +08:00
dependabot[bot]
e18c0381ad Bump golang.org/x/time from 0.0.0-20190308202827-9d24e82272b4 to 0.3.0 (#696)
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.0.0-20190308202827-9d24e82272b4 to 0.3.0.
- [Commits](https://github.com/golang/time/commits/v0.3.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-31 15:59:16 +08:00
Mohammed Sohail
8b422c237c feat (ci/cd): add dependabot weekly checks 2023-07-29 21:07:37 -07:00
dependabot[bot]
e6f74c1c2b Bump golang.org/x/text from 0.3.7 to 0.3.8 in /tools (#619)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.7...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 15:12:45 +08:00
dependabot[bot]
6edba6994e Bump github.com/prometheus/client_golang from 1.11.0 to 1.11.1 in /tools (#614)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 11:44:52 +08:00
dependabot[bot]
571f0d2613 Bump github.com/prometheus/client_golang from 1.11.0 to 1.11.1 in /x (#615)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.11.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.11.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-24 11:43:45 +08:00
Andrew Bezzub
2165ed133b Upgrade tools to use redis v9 2023-07-20 07:04:41 -07:00
Trịnh Đức Bảo Linh(Kevin)
551b0c7119 feat (add): panic error handling (#491)
* closes #487
2023-07-20 21:33:39 +08:00
guoguangwu
123d560a44 chore: replace loop with mux.mws = append(mux.mws, mws...) 2023-07-07 21:01:54 -07:00
Ken Hibino
5bef53d1ac Update README.md to include sponsoring section 2023-07-07 21:00:05 -07:00
Ken Hibino
90af7749ca Update FUNDING.yml 2023-07-07 20:54:12 -07:00
guoguangwu
e4b8663154 chore: unnecessary use of fmt.Sprintf 2023-07-07 20:45:42 -07:00
Ken Hibino
fde294be32 v0.24.1 2023-05-01 06:48:07 -07:00
Mohammed Sohail
cbb1be34ac (tools & x): revert to v8 version cc777eb
* revert state to as it was before v9 updates for tools and x modules
2023-04-17 22:30:33 -07:00
Mohammed Sohail
6ed70adf3b fix: breaking build on go < 1.18
* see https://github.com/redis/go-redis/pull/2458 for more info
2023-04-17 22:30:33 -07:00
Mohammed Sohail
1f42d71e9b pkg (tools): revert replace directive in go.mod
* this was previously reverted also in #392
2023-04-17 22:30:33 -07:00
Phước Trung
f966a6c3b8 completely update Redis package
Signed-off-by: Mohammed Sohail <sohailsameja@gmail.com>
2023-04-17 22:30:33 -07:00
Mohammed Sohail
8b057b8767 tests: restore ignore goleak.IgnoreTopFunction 2023-04-17 22:30:33 -07:00
Phước Trung
c72bfef094 fix unit test
Signed-off-by: Mohammed Sohail <sohailsameja@gmail.com>
2023-04-17 22:30:33 -07:00
Mohammed Sohail
dffb78cca4 pkg: remove v8 refs 2023-04-17 22:30:33 -07:00
Emanuel Bennici
0275df8df4 Update redis/go-redis to v9
Version v9 implements the support for Redis v7 and has some
other improvements.
2023-04-17 22:30:33 -07:00
cui fliter
cc777ebdaa fix some typos
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-01-05 20:03:02 -08:00
Ken Hibino
783071c47f v0.24.0 2023-01-02 14:55:33 -08:00
Ken Hibino
bafed907e9 Fix redis script error 2023-01-02 14:53:45 -08:00
Ken Hibino
0b8cfad703 (cli:fix) Read --cluster flag from config file 2022-12-18 21:11:01 -08:00
Zhidong Chen
c08f142b56 fix redis sentinel url parse 2022-09-25 15:04:04 -07:00
徐胖
c70ff6a335 fix a missing ticker.stop() 2022-06-26 13:10:06 -07:00
Ken Hibino
a04ba6411d Fix dash command flags 2022-06-22 20:57:11 -07:00
Ken Hibino
d0209d9273 Remove error log from Scheduler 2022-06-04 12:48:56 -07:00
Ken Hibino
6c954c87bf Update readme 2022-06-03 04:14:45 -07:00
Ken Hibino
86fe31990b (cli): Add dash command 2022-06-02 19:23:06 -07:00
Chih Sean Hsu
e0e5d1ac24 Add pre and post enqueue callback options for Scheduler 2022-05-27 10:50:02 -07:00
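A minimal sketch (assumed usage): PreEnqueueFunc runs before each scheduled enqueue, and PostEnqueueFunc observes the result, which is where enqueue errors surface.

package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		&asynq.SchedulerOpts{
			PreEnqueueFunc: func(task *asynq.Task, opts []asynq.Option) {
				log.Printf("about to enqueue %q", task.Type())
			},
			PostEnqueueFunc: func(info *asynq.TaskInfo, err error) {
				if err != nil {
					log.Printf("scheduler enqueue failed: %v", err)
				}
			},
		},
	)
	_ = scheduler
}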
Trịnh Đức Bảo Linh
30d409371b Fix comment typos 2022-05-16 21:14:15 -07:00
Mohab Abd El-Dayem
aefd276146 Upgrade goleak version 2022-05-08 10:17:22 -07:00
Mohab Abd El-Dayem
94ad9e5e74 Update CONTRIBUTING.md to use git ssh 2022-05-08 09:21:33 -07:00
Ken Hibino
5187844ca5 Update CLI install command in README 2022-05-06 16:55:54 -07:00
Ken Hibino
4dd2b5738a (cli): Improve help command output 2022-05-06 16:18:40 -07:00
Jeffrey Lo
9116c096ec docs: correct typo (deafult => default) 2022-05-04 19:10:09 -07:00
Erwan Leboucher
5c723f597e Correct the error message to cancel an active task 2022-04-13 06:08:46 -07:00
Ken Hibino
dd6f84c575 (cli): Use asynq v0.23 2022-04-13 06:07:31 -07:00
Ken Hibino
c438339c3d Fix date in changelog 2022-04-12 06:22:42 -07:00
Ken Hibino
901938b0fe Update readme 2022-04-11 17:07:24 -07:00
Ken Hibino
245d4fe663 v0.23.0 2022-04-11 16:57:33 -07:00
Ken Hibino
94719e325c Update CHANGELOG.md 2022-04-11 16:55:43 -07:00
Ken Hibino
8b2a787759 Rename variables for public API documentation 2022-04-11 16:55:43 -07:00
Ken Hibino
451be7e50f (cli): Update stats command to print the number of aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
578321f226 (cli): Extend task ls command to list aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
2c783566f3 (cli): Add group ls command 2022-04-11 16:55:43 -07:00
Ken Hibino
39718f8bea Always enqueue the aggregated task in the same queue 2022-04-11 16:55:43 -07:00
Ken Hibino
829f64fd38 Define GroupAggregator interface 2022-04-11 16:55:43 -07:00
Ken Hibino
a369443955 Add batch actions to inspector for aggregating tasks
Added:
- Inspector.DeleteAllAggregatingTasks
- Inspector.ArchiveAllAggregatingTasks
- Inspector.RunAllAggregatingTasks
2022-04-11 16:55:43 -07:00
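A minimal sketch of the batch methods listed above (queue and group names are arbitrary examples); each returns the number of tasks affected:

package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	n, err := inspector.DeleteAllAggregatingTasks("default", "notifications")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("deleted %d aggregating tasks", n)
}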
Ken Hibino
de139cc18e Update RDB.RunTask to schedule aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
74db013ab9 Add RDB.RunAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
725105ca03 Update RDB.ArchiveTask to archive aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
d8f31e45f1 Add RDB.ArchiveAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
9023cbf4be Update RDB.DeleteTask to handle aggregating task 2022-04-11 16:55:43 -07:00
Ken Hibino
9279c09125 Add RDB.DeleteAllAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
bc27126670 Fix memory usage lua script 2022-04-11 16:55:43 -07:00
Ken Hibino
0cfa7f47ba Fix memory_usage lua script 2022-04-11 16:55:43 -07:00
Ken Hibino
8a4fb71dd5 Update go.mod with replace directive 2022-04-11 16:55:43 -07:00
Ken Hibino
7fb5b25944 Add Inspector.ListAggregatingTasks 2022-04-11 16:55:43 -07:00
Ken Hibino
71bd8f0535 Add RDB.ListAggregating 2022-04-11 16:55:43 -07:00
Ken Hibino
4c8432e0ce Add Inspector.Groups method 2022-04-11 16:55:43 -07:00
Ken Hibino
e939b5d166 Rename asynqtest package to testutil 2022-04-11 16:55:43 -07:00
Ken Hibino
1acd62c760 Move test helpers to asynqtest package 2022-04-11 16:55:43 -07:00
Ken Hibino
0149396bae Add RDB.GroupStats for inspecting groups 2022-04-11 16:55:43 -07:00
Ken Hibino
45ed560708 Add Group field to TaskInfo struct 2022-04-11 16:55:43 -07:00
Ken Hibino
01eeb8756e (cli): Update queue inspect cmd to show # of groups and aggregating tasks 2022-04-11 16:55:43 -07:00
Ken Hibino
47af17cfb4 Fix RDB.CurrentStats to report the correct queue size 2022-04-11 16:55:43 -07:00
Ken Hibino
eb064c2bab Fix AggregationCheck with unlimited size to clear group name from
all-groups set
2022-04-11 16:55:43 -07:00
Ken Hibino
652939dd3a Update memory usage redis lua script to account for groups 2022-04-11 16:55:43 -07:00
Ken Hibino
efe3c74037 Show number of groups and aggregating task count in QueueInfo 2022-04-11 16:55:43 -07:00
Ken Hibino
74d2eea4e0 Clear group if aggregation set empties the group 2022-04-11 16:55:43 -07:00
Ken Hibino
60a4dc1401 Add test for DeleteAggregationSet error case 2022-04-11 16:55:43 -07:00
Ken Hibino
4b716780ef Rewrite test for DeleteAggregationSet function with a new pattern 2022-04-11 16:55:43 -07:00
Ken Hibino
e63f41fb24 Fix DeleteAggregationSet 2022-04-11 16:55:43 -07:00
Ken Hibino
1c388baf06 Implement RDB.ReclaimStaleAggregationSets 2022-04-11 16:55:43 -07:00
Ken Hibino
47a66231b3 Store aggregation set *key* in all aggreationsets zset 2022-04-11 16:55:43 -07:00
Ken Hibino
3551d3334c Use zset for aggregation set to preserve score 2022-04-11 16:55:43 -07:00
Ken Hibino
8b16ede8bc Declare ReclaimStaleAggregationSets 2022-04-11 16:55:43 -07:00
Ken Hibino
c8658a53e6 Add aggregator test 2022-04-11 16:55:43 -07:00
Ken Hibino
562506c7ba Fix client to return error when nil task is passed 2022-04-11 16:55:43 -07:00
Ken Hibino
888b5590fb Make GroupMaxSize and GroupMaxDelay config optional 2022-04-11 16:55:43 -07:00
Ken Hibino
196db64d4d Run aggregator on the server 2022-04-11 16:55:43 -07:00
Ken Hibino
4b35eb0e1a Fix RDB.AggregationCheck when run against an empty group 2022-04-11 16:55:43 -07:00
Ken Hibino
b29fe58434 Implement RDB.ListGroups 2022-04-11 16:55:43 -07:00
Ken Hibino
7849b1114c Implement RDB.DeleteAggregationSet 2022-04-11 16:55:43 -07:00
Ken Hibino
99c00bffeb Implement RDB.AggregationCheck 2022-04-11 16:55:43 -07:00
Ken Hibino
4542b52da8 Check for aggregation at an interval <= gracePeriod 2022-04-11 16:55:43 -07:00
Ken Hibino
d841dc2f8d Add initial implementation of aggregator 2022-04-11 16:55:43 -07:00
Ken Hibino
ab28234767 Update client dependency to base.Broker 2022-04-11 16:55:43 -07:00
Ken Hibino
eb27b0fe1e Add TaskMessageBuilder type as a test helper 2022-04-11 16:55:43 -07:00
Ken Hibino
088be63ee4 Update forwarder to use time.Timer 2022-04-11 16:55:43 -07:00
Ken Hibino
ed69667e86 Update ForwardIfReady test with group 2022-04-11 16:55:43 -07:00
Ken Hibino
4e8885276c Update client to store groupKey under TaskMessage 2022-04-11 16:55:43 -07:00
Ken Hibino
401f7fb4fe Add GroupKey field to TaskMessage 2022-04-11 16:55:43 -07:00
Ken Hibino
61854ea1dc Update RDB.ForwardIfReady to forward to group if groupKey is specified 2022-04-11 16:55:43 -07:00
Ken Hibino
f17c157b0f Update Client to add task to group if Group option is specified 2022-04-11 16:55:43 -07:00
Ken Hibino
8b582899ad Add RDB.AddToGroup and RDB.AddToGroupUnique methods 2022-04-11 16:55:43 -07:00
Ken Hibino
e3d2939a4c Add helper functions to generate group key 2022-04-11 16:55:43 -07:00
Ken Hibino
2ce71e83b0 Add Group task option 2022-04-11 16:55:43 -07:00
Ken Hibino
1608366032 Add group related configuration options 2022-04-11 16:55:43 -07:00
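Taken together, the commits above add the group aggregation feature. A minimal sketch (names and values are arbitrary examples): tasks enqueued with the same Group option are buffered and combined into one task by the configured GroupAggregator.

package main

import (
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	redisOpt := asynq.RedisClientOpt{Addr: "localhost:6379"}

	client := asynq.NewClient(redisOpt)
	defer client.Close()
	// Tasks sharing a group key are buffered server-side until aggregated.
	client.Enqueue(asynq.NewTask("email:digest", nil), asynq.Group("user:123"))

	srv := asynq.NewServer(redisOpt, asynq.Config{
		GroupMaxDelay: 10 * time.Minute,
		GroupMaxSize:  20,
		GroupAggregator: asynq.GroupAggregatorFunc(func(group string, tasks []*asynq.Task) *asynq.Task {
			// Combine the buffered tasks into a single aggregated task.
			return asynq.NewTask("email:digest:aggregated", nil)
		}),
	})
	_ = srv
}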
ashang
3f4f0c1daa Use explicit types for limit constants 2022-03-29 06:30:10 -07:00
Ken Hibino
f94a65dc9f Add go1.18 to build workflow matrix 2022-03-22 06:51:28 -07:00
Erwan Leboucher
04d7c8c38c Add rediss url parsing support 2022-02-24 08:30:55 -08:00
Ken Hibino
c04fd41653 v0.22.1 2022-02-20 06:22:55 -08:00
Ken Hibino
7e5efb0e30 Drop GT option from RDB.ExtendLease
The GT option in ZAdd is supported on redis v6.2.0 or above.
This change fixes redis version compatibility (currently v4.0+).
2022-02-20 06:20:38 -08:00
Ken Hibino
cabf8d3627 Fix changelog 2022-02-19 06:21:56 -08:00
Ken Hibino
a19909f5f4 v0.22.0 2022-02-19 06:20:05 -08:00
Ken Hibino
cea5110d15 Add IsOrphaned field to TaskInfo 2022-02-19 06:15:44 -08:00
Ken Hibino
9b63e23274 Update log messages 2022-02-19 06:15:44 -08:00
Ken Hibino
de25201d9f Make timeutil.SimulatedClock concurrency safe 2022-02-19 06:15:44 -08:00
Ken Hibino
ec560afb01 Fix processor test 2022-02-19 06:15:44 -08:00
Ken Hibino
d4006894ad Remove base.DeadlinesKey 2022-02-19 06:15:44 -08:00
Ken Hibino
59927509d8 Remove timeout and deadline fields under task key 2022-02-19 06:15:44 -08:00
Ken Hibino
8211167de2 Update processor to create a lease and watch for expiration 2022-02-19 06:15:44 -08:00
Ken Hibino
d7169cd445 Update heartbeat to extend lease of active workers 2022-02-19 06:15:44 -08:00
Ken Hibino
dfae8638e1 Update RDB methods to work with lease 2022-02-19 06:15:44 -08:00
Ken Hibino
b9943de2ab Add Lease type to base package 2022-02-19 06:15:44 -08:00
Ken Hibino
871474f220 Update heartbeat goroutine to call ExtendLease on active tasks 2022-02-19 06:15:44 -08:00
Ken Hibino
87dc392c7f Add RDB.ExtendLease method 2022-02-19 06:15:44 -08:00
Ken Hibino
dabcb120d5 Update recoverer to use ListLeaseExpired 2022-02-19 06:15:44 -08:00
Ken Hibino
bc2f1986d7 Update ListDeadlineExceeded to ListLeaseExpired 2022-02-19 06:15:44 -08:00
Ken Hibino
b8cb579407 Update RDB methods to use lease instead of deadlines set 2022-02-19 06:15:44 -08:00
Ken Hibino
bca624792c Move task deadline compute logic to processor 2022-02-19 06:15:44 -08:00
Ken Hibino
d865d89900 Update RDB.Dequeue to insert task ID to lease set 2022-02-19 06:15:44 -08:00
Ken Hibino
852af7abd1 Add base.LeaseKey helper function 2022-02-19 06:15:44 -08:00
Ken Hibino
5490d2c625 Fix tests 2022-02-16 07:08:01 -08:00
Binaek Sarkar
ebd7a32c0f conventions 2022-02-16 06:43:08 -08:00
Binaek Sarkar
55d0610a03 test and changelog 2022-02-16 06:43:08 -08:00
Binaek Sarkar
ab8a4f5b1e review corrections 2022-02-16 06:43:08 -08:00
Binaek Sarkar
d7ceb0c090 first cut 2022-02-16 06:43:08 -08:00
Ken Hibino
8bd70c6f84 (ci): Run go (build|test) commands for each module 2022-02-01 07:00:00 -08:00
Ken Hibino
10ab4e3745 Remove replace directives in go.mod 2022-02-01 06:18:41 -08:00
Ken Hibino
349f4c50fb Add example for ResultWriter 2022-01-31 09:08:41 -08:00
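For reference, a minimal sketch of ResultWriter in a handler (task type and payload are arbitrary examples); the written bytes surface later as TaskInfo.Result:

package main

import (
	"context"

	"github.com/hibiken/asynq"
)

// handle records result data retrievable later via Inspector as TaskInfo.Result.
func handle(ctx context.Context, task *asynq.Task) error {
	if _, err := task.ResultWriter().Write([]byte(`{"status":"done"}`)); err != nil {
		return err
	}
	return nil
}

func main() {
	mux := asynq.NewServeMux()
	mux.HandleFunc("report:generate", handle)
	_ = mux
}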
Ken Hibino
dff2e3a336 v0.21.0 2022-01-22 06:15:29 -08:00
Ken Hibino
65040af7b5 Update changelog 2022-01-22 06:14:24 -08:00
Ken Hibino
053fe2d1ee Create PeriodicTaskManager 2022-01-22 05:59:33 -08:00
Ken Hibino
25832e5e95 Fix bug related to concurrently executing server state changes 2022-01-12 09:10:56 -08:00
Ken Hibino
aa26f3819e Fix flaky tests 2022-01-05 09:07:42 -08:00
Ken Hibino
d94614bb9b Add CODE_OF_CONDUCT.md 2022-01-04 06:17:48 -08:00
Mahdi Dibaiee
ce46b07652 Allow configuration of DelayedTaskCheckInterval 2022-01-03 14:44:00 -08:00
Mahdi Dibaiee
2d0170541c Add --json flag for asynq stats command 2022-01-02 07:24:29 -08:00
Andreas Thomas
c1f08106da fix: missing import statement in example code 2021-12-27 05:40:10 -08:00
Ken Hibino
74cf804197 Update readme 2021-12-20 05:51:51 -08:00
Ken Hibino
8dfabfccb3 Fix build 2021-12-19 07:06:37 -08:00
Ken Hibino
5f20edcbd1 v0.20.0 2021-12-19 07:00:21 -08:00
Ken Hibino
1ddb2f7bce Use math.MaxInt64 instead of custom const 2021-12-19 06:58:12 -08:00
Ken Hibino
82d18e3d91 Record total tasks processed/failed 2021-12-16 16:53:02 -08:00
Ken Hibino
43cb4ddf19 Add queue metrics exporter
Changes:
- Added `x/metrics` package
- Added `tools/metrics_exporter` binary
2021-12-16 06:01:01 -08:00
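A hedged sketch of wiring the new x/metrics package to Prometheus (NewQueueMetricsCollector is assumed from memory of that package; verify against its docs):

package main

import (
	"log"
	"net/http"

	"github.com/hibiken/asynq"
	"github.com/hibiken/asynq/x/metrics"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	reg := prometheus.NewRegistry()
	// Collect per-queue stats (sizes, counts, etc.) from the inspector.
	reg.MustRegister(metrics.NewQueueMetricsCollector(inspector))
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":9876", nil))
}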
Francisco Miamoto
ddfc6747a1 Fix typo in Server doc 2021-12-13 16:23:30 -08:00
91 changed files with 11882 additions and 2138 deletions

.github/FUNDING.yml vendored (12 changed lines)

@@ -1,12 +1,4 @@
 # These are supported funding model platforms
-github: [hibiken] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
-patreon: # Replace with a single Patreon username
-open_collective: # Replace with a single Open Collective username
-ko_fi: # Replace with a single Ko-fi username
-tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
-community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
-liberapay: # Replace with a single Liberapay username
-issuehunt: # Replace with a single IssueHunt username
-otechie: # Replace with a single Otechie username
-custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
+github: [hibiken]
+open_collective: ken-hibino

Bug report issue template (filename not captured)

@@ -3,13 +3,20 @@ name: Bug report
 about: Create a report to help us improve
 title: "[BUG] Description of the bug"
 labels: bug
-assignees: hibiken
+assignees:
+  - hibiken
+  - kamikazechaser
 
 ---
 
 **Describe the bug**
 A clear and concise description of what the bug is.
 
+**Environment (please complete the following information):**
+- OS: [e.g. MacOS, Linux]
+- `asynq` package version [e.g. v0.25.0]
+- Redis/Valkey version
+
 **To Reproduce**
 Steps to reproduce the behavior (Code snippets if applicable):
 1. Setup background processing ...
@@ -22,9 +29,5 @@ A clear and concise description of what you expected to happen.
 
 **Screenshots**
 If applicable, add screenshots to help explain your problem.
 
-**Environment (please complete the following information):**
-- OS: [e.g. MacOS, Linux]
-- Version of `asynq` package [e.g. v1.0.0]
-
 **Additional context**
 Add any other context about the problem here.

Feature request issue template (filename not captured)

@@ -3,7 +3,9 @@ name: Feature request
 about: Suggest an idea for this project
 title: "[FEATURE REQUEST] Description of the feature request"
 labels: enhancement
-assignees: hibiken
+assignees:
+  - hibiken
+  - kamikazechaser
 
 ---
.github/dependabot.yaml vendored (new file, 24 lines added)

@@ -0,0 +1,24 @@
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "gomod"
    directory: "/tools"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "gomod"
    directory: "/x"
    schedule:
      interval: "weekly"
    labels:
      - "pr-deps"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

Benchstat CI workflow (filename not captured)

@@ -11,20 +11,20 @@ jobs:
     runs-on: ubuntu-latest
     services:
       redis:
-        image: redis
+        image: redis:7
         ports:
           - 6379:6379
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       - name: Set up Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v5
         with:
-          go-version: 1.16.x
+          go-version: 1.23.x
       - name: Benchmark
         run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a new.txt
       - name: Upload Benchmark
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: bench-incoming
           path: new.txt
@@ -33,22 +33,22 @@ jobs:
     runs-on: ubuntu-latest
     services:
       redis:
-        image: redis
+        image: redis:7
         ports:
           - 6379:6379
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           ref: master
       - name: Set up Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v5
         with:
-          go-version: 1.15.x
+          go-version: 1.23.x
       - name: Benchmark
         run: go test -run=^$ -bench=. -count=5 -timeout=60m ./... | tee -a old.txt
       - name: Upload Benchmark
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: bench-current
           path: old.txt
@@ -58,25 +58,25 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       - name: Set up Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v5
        with:
-          go-version: 1.15.x
+          go-version: 1.23.x
       - name: Install benchstat
         run: go get -u golang.org/x/perf/cmd/benchstat
       - name: Download Incoming
-        uses: actions/download-artifact@v2
+        uses: actions/download-artifact@v4
         with:
           name: bench-incoming
       - name: Download Current
-        uses: actions/download-artifact@v2
+        uses: actions/download-artifact@v4
         with:
           name: bench-current
       - name: Benchstat Results
         run: benchstat old.txt new.txt | tee -a benchstat.txt
       - name: Upload benchstat results
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: benchstat
           path: benchstat.txt

Build CI workflow (filename not captured)

@@ -7,29 +7,77 @@ jobs:
     strategy:
       matrix:
         os: [ubuntu-latest]
-        go-version: [1.14.x, 1.15.x, 1.16.x, 1.17.x]
+        go-version: [1.22.x, 1.23.x]
     runs-on: ${{ matrix.os }}
     services:
       redis:
-        image: redis
+        image: redis:7
         ports:
           - 6379:6379
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Set up Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v5
         with:
           go-version: ${{ matrix.go-version }}
+          cache: false
-      - name: Build
+      - name: Build core module
         run: go build -v ./...
-      - name: Test
+      - name: Build x module
+        run: cd x && go build -v ./... && cd ..
+      - name: Test core module
         run: go test -race -v -coverprofile=coverage.txt -covermode=atomic ./...
+      - name: Test x module
+        run: cd x && go test -race -v ./... && cd ..
       - name: Benchmark Test
         run: go test -run=^$ -bench=. -loglevel=debug ./...
       - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v1
+        uses: codecov/codecov-action@v5
+  build-tool:
+    strategy:
+      matrix:
+        os: [ubuntu-latest]
+        go-version: [1.22.x, 1.23.x]
+    runs-on: ${{ matrix.os }}
+    services:
+      redis:
+        image: redis:7
+        ports:
+          - 6379:6379
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Go
+        uses: actions/setup-go@v5
+        with:
+          go-version: ${{ matrix.go-version }}
+          cache: false
+      - name: Build tools module
+        run: cd tools && go build -v ./... && cd ..
+      - name: Test tools module
+        run: cd tools && go test -race -v ./... && cd ..
+  golangci:
+    name: lint
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-go@v5
+        with:
+          go-version: stable
+      - name: golangci-lint
+        uses: golangci/golangci-lint-action@v6
+        with:
+          version: v1.61

.gitignore vendored (6 changed lines)

@@ -1,3 +1,4 @@
+vendor
 # Binaries for programs and plugins
 *.exe
 *.exe~
@@ -14,12 +15,13 @@
 # Ignore examples for now
 /examples
 
-# Ignore command binary
+# Ignore tool binaries
 /tools/asynq/asynq
+/tools/metrics_exporter/metrics_exporter
 
 # Ignore asynq config file
 .asynq.*
 
 # Ignore editor config files
 .vscode
 .idea

CHANGELOG.md

@@ -7,6 +7,98 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]
 
+## [0.25.0] - 2024-10-29
+
+### Upgrades
+- Minimum Go version is set to 1.22 (PR: https://github.com/hibiken/asynq/pull/925)
+- Internal protobuf package is upgraded to address security advisories (PR: https://github.com/hibiken/asynq/pull/925)
+- Most packages are upgraded
+- CI/CD spec upgraded
+
+### Added
+- `IsPanicError` function is introduced to support catching of panic errors when processing tasks (PR: https://github.com/hibiken/asynq/pull/491)
+- `JanitorInterval` and `JanitorBatchSize` are added as Server options (PR: https://github.com/hibiken/asynq/pull/715)
+- `NewClientFromRedisClient` is introduced to allow reusing an existing redis client (PR: https://github.com/hibiken/asynq/pull/742)
+- `TaskCheckInterval` config option is added to specify the interval between checks for new tasks to process when all queues are empty (PR: https://github.com/hibiken/asynq/pull/694)
+- `Ping` method is added to Client, Server and Scheduler (PR: https://github.com/hibiken/asynq/pull/585)
+- `RevokeTask` error type is introduced to prevent a task from being retried or archived (PR: https://github.com/hibiken/asynq/pull/882)
+- `SentinelUsername` is added as a redis config option (PR: https://github.com/hibiken/asynq/pull/924)
+- Some jitter is introduced to improve latency when fetching jobs in the processor (PR: https://github.com/hibiken/asynq/pull/868)
+- Add task enqueue command to the CLI (PR: https://github.com/hibiken/asynq/pull/918)
+- Add a concurrency-safe map cache to keep track of queues, which reduces redis load when enqueuing tasks (PR: https://github.com/hibiken/asynq/pull/946)
+
+### Fixes
+- Archived tasks that are trimmed should now be deleted (PR: https://github.com/hibiken/asynq/pull/743)
+- Fix lua script when listing task messages with an expired lease (PR: https://github.com/hibiken/asynq/pull/709)
+- Fix potential context leaks due to cancellation not being called (PR: https://github.com/hibiken/asynq/pull/926)
+- Misc documentation fixes
+- Misc test fixes
+
+## [0.24.1] - 2023-05-01
+
+### Changed
+- Updated package version dependency for go-redis
+
+## [0.24.0] - 2023-01-02
+
+### Added
+- `PreEnqueueFunc`, `PostEnqueueFunc` is added in `Scheduler` and deprecated `EnqueueErrorHandler` (PR: https://github.com/hibiken/asynq/pull/476)
+
+### Changed
+- Removed error log when `Scheduler` failed to enqueue a task. Use `PostEnqueueFunc` to check for errors and task actions if needed.
+- Changed log level from ERROR to WARNING when `Scheduler` failed to record `SchedulerEnqueueEvent`.
+
+## [0.23.0] - 2022-04-11
+
+### Added
+- `Group` option is introduced to enqueue task in a group.
+- `GroupAggregator` and related types are introduced for task aggregation feature.
+- `GroupGracePeriod`, `GroupMaxSize`, `GroupMaxDelay`, and `GroupAggregator` fields are added to `Config`.
+- `Inspector` has new methods related to "aggregating tasks".
+- `Group` field is added to `TaskInfo`.
+- (CLI): `group ls` command is added
+- (CLI): `task ls` supports listing aggregating tasks via `--state=aggregating --group=<GROUP>` flags
+- Enable rediss url parsing support
+
+### Fixed
+- Fixed overflow issue with 32-bit systems (For details, see https://github.com/hibiken/asynq/pull/426)
+
+## [0.22.1] - 2022-02-20
+
+### Fixed
+- Fixed Redis version compatibility: Keep support for redis v4.0+
+
+## [0.22.0] - 2022-02-19
+
+### Added
+- `BaseContext` is introduced in `Config` to specify callback hook to provide a base `context` from which `Handler` `context` is derived
+- `IsOrphaned` field is added to `TaskInfo` to describe a task left in active state with no worker processing it.
+
+### Changed
+- `Server` now recovers tasks with an expired lease. Recovered tasks are retried/archived with `ErrLeaseExpired` error.
+
+## [0.21.0] - 2022-01-22
+
+### Added
+- `PeriodicTaskManager` is added. Prefer using this over `Scheduler` as it has better support for dynamic periodic tasks.
+- The `asynq stats` command now supports a `--json` option, making its output a JSON object
+- Introduced new configuration for `DelayedTaskCheckInterval`. See [godoc](https://godoc.org/github.com/hibiken/asynq) for more details.
+
+## [0.20.0] - 2021-12-19
+
+### Added
+- Package `x/metrics` is added.
+- Tool `tools/metrics_exporter` binary is added.
+- `ProcessedTotal` and `FailedTotal` fields were added to `QueueInfo` struct.
+
 ## [0.19.1] - 2021-12-12
 
 ### Added
@@ -231,9 +323,9 @@
 #### `Inspector`
 
 All Inspector methods are scoped to a queue, and the methods take `qname (string)` as the first argument.
 `EnqueuedTask` is renamed to `PendingTask` and its corresponding methods.
 `InProgressTask` is renamed to `ActiveTask` and its corresponding methods.
 Command "Enqueue" is replaced by the verb "Run" (e.g. `EnqueueAllScheduledTasks` --> `RunAllScheduledTasks`)
 
 #### `CLI`

CODE_OF_CONDUCT.md (new file, 128 lines added)
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
ken.hibino7@gmail.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

CONTRIBUTING.md

@@ -38,7 +38,7 @@ Thank you! We'll try to respond as quickly as possible.
## Contributing Code
1. Fork this repo
2. Download your fork `git clone git@github.com:your-username/asynq.git && cd asynq`
3. Create your branch `git checkout -b your-branch-name`
4. Make and commit your changes
5. Push the branch `git push origin your-branch-name`

Makefile

@@ -4,4 +4,8 @@ proto: internal/proto/asynq.proto
protoc -I=$(ROOT_DIR)/internal/proto \
	--go_out=$(ROOT_DIR)/internal/proto \
	--go_opt=module=github.com/hibiken/asynq/internal/proto \
	$(ROOT_DIR)/internal/proto/asynq.proto
.PHONY: lint
lint:
golangci-lint run

README.md

@@ -33,23 +33,30 @@ Task queues are used as a mechanism to distribute work across multiple machines.
- Low latency to add a task since writes are fast in Redis
- De-duplication of tasks using [unique option](https://github.com/hibiken/asynq/wiki/Unique-Tasks)
- Allow [timeout and deadline per task](https://github.com/hibiken/asynq/wiki/Task-Timeout-and-Cancelation)
- Allow [aggregating group of tasks](https://github.com/hibiken/asynq/wiki/Task-aggregation) to batch multiple successive operations
- [Flexible handler interface with support for middlewares](https://github.com/hibiken/asynq/wiki/Handler-Deep-Dive)
- [Ability to pause queue](/tools/asynq/README.md#pause) to stop processing tasks from the queue
- [Periodic Tasks](https://github.com/hibiken/asynq/wiki/Periodic-Tasks)
- [Support Redis Cluster](https://github.com/hibiken/asynq/wiki/Redis-Cluster) for automatic sharding and high availability
- [Support Redis Sentinels](https://github.com/hibiken/asynq/wiki/Automatic-Failover) for high availability
- Integration with [Prometheus](https://prometheus.io/) to collect and visualize queue metrics
- [Web UI](#web-ui) to inspect and remote-control queues and tasks
- [CLI](#command-line-tool) to inspect and remote-control queues and tasks
## Stability and Compatibility
**Status**: The library is relatively stable and is currently undergoing **moderate development** with less frequent breaking API changes.
> ☝️ **Important Note**: Current major version is zero (`v0.x.x`) to accommodate rapid development and fast iteration while getting early feedback from users (_feedback on APIs is appreciated!_). The public API could change without a major version update before the `v1.0.0` release.
### Redis Cluster Compatibility
Some of the Lua scripts in this library may not be compatible with Redis Cluster.
## Sponsoring
If you are using this package in production, **please consider sponsoring the project to show your support!**
## Quickstart
Make sure you have Go installed ([download](https://golang.org/dl/)). The **last two** Go versions are supported (See https://go.dev/dl).
Initialize your project by creating a folder and then running `go mod init github.com/your/repo` ([learn more](https://blog.golang.org/using-go-modules)) inside the folder. Then install Asynq library with the [`go get`](https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them) command:
@@ -65,8 +72,11 @@ Next, write a package that encapsulates task creation and task handling.
package tasks
import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"time"
	"github.com/hibiken/asynq"
)
@@ -271,6 +281,9 @@ Here's a few screenshots of the Web UI:
![Web UI TasksView](https://user-images.githubusercontent.com/11155743/114697070-1f0a0300-9d26-11eb-855c-d3ec263865b7.png)
**Metrics view**
<img width="1532" alt="Screen Shot 2021-12-19 at 4 37 19 PM" src="https://user-images.githubusercontent.com/10953044/146777420-cae6c476-bac6-469c-acce-b2f6584e8707.png">
**Settings and adaptive dark mode**
![Web UI Settings and adaptive dark mode](https://user-images.githubusercontent.com/11155743/114697149-3517c380-9d26-11eb-9f7a-ae2dd00aad5b.png)
@@ -284,12 +297,12 @@ Asynq ships with a command line tool to inspect the state of queues and tasks.
To install the CLI tool, run the following command:
```sh
go install github.com/hibiken/asynq/tools/asynq@latest
```
Here's an example of running the `asynq dash` command:
![Gif](/docs/assets/dash.gif)
For details on how to use the tool, refer to the tool's [README](/tools/asynq/README.md).

aggregator.go (new file, 176 lines)

@@ -0,0 +1,176 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"context"
"sync"
"time"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log"
)
// An aggregator is responsible for checking groups and aggregating each into one task
// when any of the grouping conditions is met.
type aggregator struct {
logger *log.Logger
broker base.Broker
client *Client
// channel to communicate back to the long running "aggregator" goroutine.
done chan struct{}
// list of queue names to check and aggregate.
queues []string
// Group configurations
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
// User provided group aggregator.
ga GroupAggregator
// interval used to check for aggregation
interval time.Duration
// sema is a counting semaphore to ensure the number of active aggregating functions
// does not exceed the limit.
sema chan struct{}
}
type aggregatorParams struct {
logger *log.Logger
broker base.Broker
queues []string
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
groupAggregator GroupAggregator
}
const (
// Maximum number of aggregation checks in flight concurrently.
maxConcurrentAggregationChecks = 3
// Default interval used for aggregation checks. If the provided gracePeriod is less than
// the default, use the gracePeriod.
defaultAggregationCheckInterval = 7 * time.Second
)
func newAggregator(params aggregatorParams) *aggregator {
interval := defaultAggregationCheckInterval
if params.gracePeriod < interval {
interval = params.gracePeriod
}
return &aggregator{
logger: params.logger,
broker: params.broker,
client: &Client{broker: params.broker},
done: make(chan struct{}),
queues: params.queues,
gracePeriod: params.gracePeriod,
maxDelay: params.maxDelay,
maxSize: params.maxSize,
ga: params.groupAggregator,
sema: make(chan struct{}, maxConcurrentAggregationChecks),
interval: interval,
}
}
func (a *aggregator) shutdown() {
if a.ga == nil {
return
}
a.logger.Debug("Aggregator shutting down...")
// Signal the aggregator goroutine to stop.
a.done <- struct{}{}
}
func (a *aggregator) start(wg *sync.WaitGroup) {
if a.ga == nil {
return
}
wg.Add(1)
go func() {
defer wg.Done()
ticker := time.NewTicker(a.interval)
for {
select {
case <-a.done:
a.logger.Debug("Waiting for all aggregation checks to finish...")
// block until all aggregation checks have released their tokens
for i := 0; i < cap(a.sema); i++ {
a.sema <- struct{}{}
}
a.logger.Debug("Aggregator done")
ticker.Stop()
return
case t := <-ticker.C:
a.exec(t)
}
}
}()
}
func (a *aggregator) exec(t time.Time) {
select {
case a.sema <- struct{}{}: // acquire token
go a.aggregate(t)
default:
// If the semaphore blocks, then we are currently running max number of
// aggregation checks. Skip this round and log warning.
a.logger.Warnf("Max number of aggregation checks in flight. Skipping")
}
}
func (a *aggregator) aggregate(t time.Time) {
defer func() { <-a.sema /* release token */ }()
for _, qname := range a.queues {
groups, err := a.broker.ListGroups(qname)
if err != nil {
a.logger.Errorf("Failed to list groups in queue: %q", qname)
continue
}
for _, gname := range groups {
aggregationSetID, err := a.broker.AggregationCheck(
qname, gname, t, a.gracePeriod, a.maxDelay, a.maxSize)
if err != nil {
a.logger.Errorf("Failed to run aggregation check: queue=%q group=%q", qname, gname)
continue
}
if aggregationSetID == "" {
a.logger.Debugf("No aggregation needed at this time: queue=%q group=%q", qname, gname)
continue
}
// Aggregate and enqueue.
msgs, deadline, err := a.broker.ReadAggregationSet(qname, gname, aggregationSetID)
if err != nil {
a.logger.Errorf("Failed to read aggregation set: queue=%q, group=%q, setID=%q",
qname, gname, aggregationSetID)
continue
}
tasks := make([]*Task, len(msgs))
for i, m := range msgs {
tasks[i] = NewTask(m.Type, m.Payload)
}
aggregatedTask := a.ga.Aggregate(gname, tasks)
ctx, cancel := context.WithDeadline(context.Background(), deadline)
if _, err := a.client.EnqueueContext(ctx, aggregatedTask, Queue(qname)); err != nil {
a.logger.Errorf("Failed to enqueue aggregated task (queue=%q, group=%q, setID=%q): %v",
qname, gname, aggregationSetID, err)
cancel()
continue
}
if err := a.broker.DeleteAggregationSet(ctx, qname, gname, aggregationSetID); err != nil {
a.logger.Warnf("Failed to delete aggregation set: queue=%q, group=%q, setID=%q",
qname, gname, aggregationSetID)
}
cancel()
}
}
}

aggregator_test.go (new file, 165 lines)

@@ -0,0 +1,165 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
)
func TestAggregator(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
client := Client{broker: rdbClient}
tests := []struct {
desc string
gracePeriod time.Duration
maxDelay time.Duration
maxSize int
aggregateFunc func(gname string, tasks []*Task) *Task
tasks []*Task // tasks to enqueue
enqueueFrequency time.Duration // time between one enqueue event and the next
waitTime time.Duration // time to wait
wantGroups map[string]map[string][]base.Z
wantPending map[string][]*base.TaskMessage
}{
{
desc: "group older than the grace period should be aggregated",
gracePeriod: 1 * time.Second,
maxDelay: 0, // no maxdelay limit
maxSize: 0, // no maxsize limit
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")),
NewTask("task2", nil, Group("mygroup")),
NewTask("task3", nil, Group("mygroup")),
},
enqueueFrequency: 300 * time.Millisecond,
waitTime: 3 * time.Second,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(3).Build(),
},
},
},
{
desc: "group older than the max-delay should be aggregated",
gracePeriod: 2 * time.Second,
maxDelay: 4 * time.Second,
maxSize: 0, // no maxsize limit
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")), // time 0
NewTask("task2", nil, Group("mygroup")), // time 1s
NewTask("task3", nil, Group("mygroup")), // time 2s
NewTask("task4", nil, Group("mygroup")), // time 3s
},
enqueueFrequency: 1 * time.Second,
waitTime: 4 * time.Second,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(4).Build(),
},
},
},
{
desc: "group reached the max-size should be aggregated",
gracePeriod: 1 * time.Minute,
maxDelay: 0, // no maxdelay limit
maxSize: 5,
aggregateFunc: func(gname string, tasks []*Task) *Task {
return NewTask(gname, nil, MaxRetry(len(tasks))) // use max retry to see how many tasks were aggregated
},
tasks: []*Task{
NewTask("task1", nil, Group("mygroup")),
NewTask("task2", nil, Group("mygroup")),
NewTask("task3", nil, Group("mygroup")),
NewTask("task4", nil, Group("mygroup")),
NewTask("task5", nil, Group("mygroup")),
},
enqueueFrequency: 300 * time.Millisecond,
waitTime: defaultAggregationCheckInterval * 2,
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {},
},
},
wantPending: map[string][]*base.TaskMessage{
"default": {
h.NewTaskMessageBuilder().SetType("mygroup").SetRetry(5).Build(),
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
aggregator := newAggregator(aggregatorParams{
logger: testLogger,
broker: rdbClient,
queues: []string{"default"},
gracePeriod: tc.gracePeriod,
maxDelay: tc.maxDelay,
maxSize: tc.maxSize,
groupAggregator: GroupAggregatorFunc(tc.aggregateFunc),
})
var wg sync.WaitGroup
aggregator.start(&wg)
for _, task := range tc.tasks {
if _, err := client.Enqueue(task); err != nil {
t.Errorf("%s: Client Enqueue failed: %v", tc.desc, err)
aggregator.shutdown()
wg.Wait()
continue
}
time.Sleep(tc.enqueueFrequency)
}
time.Sleep(tc.waitTime)
for qname, groups := range tc.wantGroups {
for gname, want := range groups {
gotGroup := h.GetGroupEntries(t, r, qname, gname)
if diff := cmp.Diff(want, gotGroup, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q; (-want,+got)\n%s", tc.desc, base.GroupKey(qname, gname), diff)
}
}
}
for qname, want := range tc.wantPending {
gotPending := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, gotPending, h.SortMsgOpt, h.IgnoreIDOpt); diff != "" {
t.Errorf("%s: mismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
}
}
aggregator.shutdown()
wg.Wait()
}
}

asynq.go

@@ -8,12 +8,13 @@ import (
"context" "context"
"crypto/tls" "crypto/tls"
"fmt" "fmt"
"net"
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v8" "github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
) )
@@ -97,10 +98,26 @@ type TaskInfo struct {
	// Deadline is the deadline for the task, zero value if not specified.
	Deadline time.Time
// Group is the name of the group in which the task belongs.
//
// Tasks in the same queue can be grouped together by Group name and will be aggregated into one task
// by a Server processing the queue.
//
// Empty string (default) indicates the task does not belong to any group, and no aggregation will be applied to the task.
Group string
	// NextProcessAt is the time the task is scheduled to be processed,
	// zero if not applicable.
	NextProcessAt time.Time
// IsOrphaned describes whether the task is left in active state with no worker processing it.
// An orphaned task indicates that the worker has crashed or experienced network failures and was not able to
// extend its lease on the task.
//
// This task will be recovered by running a server against the queue the task is in.
// This field is only applicable to tasks with TaskStateActive.
IsOrphaned bool
	// Retention is the duration of the retention period after the task is successfully processed.
	Retention time.Duration
@@ -131,6 +148,7 @@ func newTaskInfo(msg *base.TaskMessage, state base.TaskState, nextProcessAt time
		MaxRetry:  msg.Retry,
		Retried:   msg.Retried,
		LastErr:   msg.ErrorMsg,
		Group:     msg.GroupKey,
		Timeout:   time.Duration(msg.Timeout) * time.Second,
		Deadline:  fromUnixTimeOrZero(msg.Deadline),
		Retention: time.Duration(msg.Retention) * time.Second,
@@ -153,6 +171,8 @@ func newTaskInfo(msg *base.TaskMessage, state base.TaskState, nextProcessAt time
		info.State = TaskStateArchived
	case base.TaskStateCompleted:
		info.State = TaskStateCompleted
	case base.TaskStateAggregating:
		info.State = TaskStateAggregating
	default:
		panic(fmt.Sprintf("internal error: unknown state: %d", state))
	}
@@ -180,6 +200,9 @@ const (
	// Indicates that the task is processed successfully and retained until the retention TTL expires.
	TaskStateCompleted
	// Indicates that the task is waiting in a group to be aggregated into one task.
	TaskStateAggregating
)
func (s TaskState) String() string {
@@ -196,6 +219,8 @@ func (s TaskState) String() string {
return "archived" return "archived"
case TaskStateCompleted: case TaskStateCompleted:
return "completed" return "completed"
case TaskStateAggregating:
return "aggregating"
} }
panic("asynq: unknown task state") panic("asynq: unknown task state")
} }
@@ -291,6 +316,9 @@ type RedisFailoverClientOpt struct {
	// https://redis.io/topics/sentinel.
	SentinelAddrs []string
	// Redis sentinel username.
	SentinelUsername string
	// Redis sentinel password.
	SentinelPassword string
@@ -339,6 +367,7 @@ func (opt RedisFailoverClientOpt) MakeRedisClient() interface{} {
	return redis.NewFailoverClient(&redis.FailoverOptions{
		MasterName:       opt.MasterName,
		SentinelAddrs:    opt.SentinelAddrs,
		SentinelUsername: opt.SentinelUsername,
		SentinelPassword: opt.SentinelPassword,
		Username:         opt.Username,
		Password:         opt.Password,
@@ -411,9 +440,10 @@ func (opt RedisClusterClientOpt) MakeRedisClient() interface{} {
// ParseRedisURI parses a redis uri string and returns RedisConnOpt if the uri is valid.
// It returns a non-nil error if the uri cannot be parsed.
//
// Four URI schemes are supported: redis:, rediss:, redis-socket:, and redis-sentinel:.
// Supported formats are:
//	redis://[:password@]host[:port][/dbnumber]
//	rediss://[:password@]host[:port][/dbnumber]
//	redis-socket://[:password@]path[?db=dbnumber]
//	redis-sentinel://[:password@]host1[:port][,host2:[:port]][,hostN:[:port]][?master=masterName]
func ParseRedisURI(uri string) (RedisConnOpt, error) {
@@ -422,7 +452,7 @@ func ParseRedisURI(uri string) (RedisConnOpt, error) {
		return nil, fmt.Errorf("asynq: could not parse redis uri: %v", err)
	}
	switch u.Scheme {
	case "redis", "rediss":
		return parseRedisURI(u)
	case "redis-socket":
		return parseRedisSocketURI(u)
@@ -436,6 +466,8 @@ func ParseRedisURI(uri string) (RedisConnOpt, error) {
func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
	var db int
	var err error
	var redisConnOpt RedisClientOpt
	if len(u.Path) > 0 {
		xs := strings.Split(strings.Trim(u.Path, "/"), "/")
		db, err = strconv.Atoi(xs[0])
@@ -447,7 +479,20 @@ func parseRedisURI(u *url.URL) (RedisConnOpt, error) {
	if v, ok := u.User.Password(); ok {
		password = v
	}
if u.Scheme == "rediss" {
h, _, err := net.SplitHostPort(u.Host)
if err != nil {
h = u.Host
}
redisConnOpt.TLSConfig = &tls.Config{ServerName: h}
}
redisConnOpt.Addr = u.Host
redisConnOpt.Password = password
redisConnOpt.DB = db
return redisConnOpt, nil
}
func parseRedisSocketURI(u *url.URL) (RedisConnOpt, error) {
@@ -478,7 +523,7 @@ func parseRedisSentinelURI(u *url.URL) (RedisConnOpt, error) {
	if v, ok := u.User.Password(); ok {
		password = v
	}
	return RedisFailoverClientOpt{MasterName: master, SentinelAddrs: addrs, SentinelPassword: password}, nil
}
// ResultWriter is a client interface to write result data for a task.
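A short usage sketch of the extended parser; the host, password, and db number are placeholders:
```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// rediss:// yields a RedisClientOpt whose TLSConfig.ServerName is pre-filled
	// from the host part of the URI.
	opt, err := asynq.ParseRedisURI("rediss://:mypassword@example.com:6379/2")
	if err != nil {
		log.Fatal(err)
	}
	client := asynq.NewClient(opt)
	defer client.Close()
}
```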

asynq_test.go

@@ -5,15 +5,17 @@
package asynq
import (
	"crypto/tls"
	"flag"
	"sort"
	"strings"
	"testing"
	"github.com/redis/go-redis/v9"
	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/hibiken/asynq/internal/log"
	h "github.com/hibiken/asynq/internal/testutil"
)
//============================================================================
@@ -99,6 +101,10 @@ func TestParseRedisURI(t *testing.T) {
"redis://localhost:6379", "redis://localhost:6379",
RedisClientOpt{Addr: "localhost:6379"}, RedisClientOpt{Addr: "localhost:6379"},
}, },
{
"rediss://localhost:6379",
RedisClientOpt{Addr: "localhost:6379", TLSConfig: &tls.Config{ServerName: "localhost"}},
},
		{
			"redis://localhost:6379/3",
			RedisClientOpt{Addr: "localhost:6379", DB: 3},
@@ -137,9 +143,9 @@ func TestParseRedisURI(t *testing.T) {
		{
			"redis-sentinel://:mypassword@localhost:5000,localhost:5001,localhost:5002?master=mymaster",
			RedisFailoverClientOpt{
				MasterName:       "mymaster",
				SentinelAddrs:    []string{"localhost:5000", "localhost:5001", "localhost:5002"},
				SentinelPassword: "mypassword",
			},
		},
	}
@@ -151,7 +157,7 @@ func TestParseRedisURI(t *testing.T) {
			continue
		}
		if diff := cmp.Diff(tc.want, got, cmpopts.IgnoreUnexported(tls.Config{})); diff != "" {
			t.Errorf("ParseRedisURI(%q) = %+v, want %+v\n(-want,+got)\n%s", tc.uri, got, tc.want, diff)
		}
	}

benchmark_test.go

@@ -12,7 +12,7 @@ import (
"testing" "testing"
"time" "time"
h "github.com/hibiken/asynq/internal/asynqtest" h "github.com/hibiken/asynq/internal/testutil"
) )
// Creates a new task of type "task<n>" with payload {"data": n}. // Creates a new task of type "task<n>" with payload {"data": n}.
@@ -55,7 +55,7 @@ func BenchmarkEndToEndSimple(b *testing.B) {
	}
	b.StartTimer() // end setup
	_ = srv.Start(HandlerFunc(handler))
	wg.Wait()
	b.StopTimer() // begin teardown
@@ -117,7 +117,7 @@ func BenchmarkEndToEnd(b *testing.B) {
	}
	b.StartTimer() // end setup
	_ = srv.Start(HandlerFunc(handler))
	wg.Wait()
	b.StopTimer() // begin teardown
@@ -174,7 +174,7 @@ func BenchmarkEndToEndMultipleQueues(b *testing.B) {
	}
	b.StartTimer() // end setup
	_ = srv.Start(HandlerFunc(handler))
	wg.Wait()
	b.StopTimer() // begin teardown
@@ -215,7 +215,7 @@ func BenchmarkClientWhileServerRunning(b *testing.B) {
	handler := func(ctx context.Context, t *Task) error {
		return nil
	}
	_ = srv.Start(HandlerFunc(handler))
	b.StartTimer() // end setup

client.go

@@ -10,11 +10,11 @@ import (
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors" "github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
) )
// A Client is responsible for scheduling tasks.
@@ -24,16 +24,27 @@ import (
//
// Clients are safe for concurrent use by multiple goroutines.
type Client struct {
	broker base.Broker
	// When a Client has been created with an existing Redis connection, we do
	// not want to close it.
	sharedConnection bool
}
// NewClient returns a new Client instance given a redis connection option.
func NewClient(r RedisConnOpt) *Client {
	redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
	if !ok {
		panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
	}
	client := NewClientFromRedisClient(redisClient)
	client.sharedConnection = false
	return client
}
// NewClientFromRedisClient returns a new instance of Client given a redis.UniversalClient.
// Warning: The underlying redis connection pool will not be closed by Asynq, you are responsible for closing it.
func NewClientFromRedisClient(c redis.UniversalClient) *Client {
	return &Client{broker: rdb.NewRDB(c), sharedConnection: true}
}
type OptionType int
@@ -48,6 +59,7 @@ const (
	ProcessInOpt
	TaskIDOpt
	RetentionOpt
	GroupOpt
)
// Option specifies the task processing behavior.
@@ -73,6 +85,7 @@ type (
	processAtOption time.Time
	processInOption time.Duration
	retentionOption time.Duration
	groupOption     string
)
// MaxRetry returns an option to specify the max number of times
@@ -91,13 +104,13 @@ func (n retryOption) Type() OptionType { return MaxRetryOpt }
func (n retryOption) Value() interface{} { return int(n) }
// Queue returns an option to specify the queue to enqueue the task into.
func Queue(name string) Option {
	return queueOption(name)
}
func (name queueOption) String() string { return fmt.Sprintf("Queue(%q)", string(name)) }
func (name queueOption) Type() OptionType { return QueueOpt }
func (name queueOption) Value() interface{} { return string(name) }
// TaskID returns an option to specify the task ID.
func TaskID(id string) Option {
@@ -142,14 +155,15 @@ func (t deadlineOption) Value() interface{} { return time.Time(t) }
// Unique returns an option to enqueue a task only if the given task is unique.
// Task enqueued with this option is guaranteed to be unique within the given ttl.
// Once the task gets processed successfully or once the TTL has expired,
// another task with the same uniqueness may be enqueued.
// ErrDuplicateTask error is returned when enqueueing a duplicate task.
// TTL duration must be greater than or equal to 1 second.
//
// Uniqueness of a task is based on the following properties:
// - Task Type
// - Task Payload
// - Queue Name
func Unique(ttl time.Duration) Option {
	return uniqueOption(ttl)
}
@@ -193,6 +207,16 @@ func (ttl retentionOption) String() string { return fmt.Sprintf("Retention(%
func (ttl retentionOption) Type() OptionType { return RetentionOpt }
func (ttl retentionOption) Value() interface{} { return time.Duration(ttl) }
// Group returns an option to specify the group used for the task.
// Tasks in a given queue with the same group will be aggregated into one task before being passed to the Handler.
func Group(name string) Option {
return groupOption(name)
}
func (name groupOption) String() string { return fmt.Sprintf("Group(%q)", string(name)) }
func (name groupOption) Type() OptionType { return GroupOpt }
func (name groupOption) Value() interface{} { return string(name) }
// ErrDuplicateTask indicates that the given task could not be enqueued since it's a duplicate of another task.
//
// ErrDuplicateTask error only applies to tasks enqueued with a Unique option.
@@ -212,6 +236,7 @@ type option struct {
	uniqueTTL time.Duration
	processAt time.Time
	retention time.Duration
	group     string
}
// composeOptions merges user provided options into the default options
@@ -223,7 +248,7 @@ func composeOptions(opts ...Option) (option, error) {
		retry:     defaultMaxRetry,
		queue:     base.DefaultQueueName,
		taskID:    uuid.NewString(),
		timeout:   0, // do not set to defaultTimeout here
		deadline:  time.Time{},
		processAt: time.Now(),
	}
@@ -239,8 +264,8 @@ func composeOptions(opts ...Option) (option, error) {
			res.queue = qname
		case taskIDOption:
			id := string(opt)
			if isBlank(id) {
				return option{}, errors.New("task ID cannot be empty")
			}
			res.taskID = id
		case timeoutOption:
@@ -259,6 +284,12 @@ func composeOptions(opts ...Option) (option, error) {
			res.processAt = time.Now().Add(time.Duration(opt))
		case retentionOption:
			res.retention = time.Duration(opt)
		case groupOption:
			key := string(opt)
			if isBlank(key) {
				return option{}, errors.New("group key cannot be empty")
			}
			res.group = key
		default:
			// ignore unexpected option
		}
@@ -266,12 +297,9 @@ func composeOptions(opts ...Option) (option, error) {
	return res, nil
}
// isBlank returns true if the given s is empty or consists of only whitespace.
func isBlank(s string) bool {
	return strings.TrimSpace(s) == ""
}
const (
@@ -290,7 +318,10 @@ var (
// Close closes the connection with redis.
func (c *Client) Close() error {
	if c.sharedConnection {
		return fmt.Errorf("redis connection is shared so the Client can't be closed through asynq")
	}
	return c.broker.Close()
}
// Enqueue enqueues the given task to a queue.
@@ -300,7 +331,7 @@ func (c *Client) Close() error {
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// Any options provided to NewTask can be overridden by options passed to Enqueue.
// By default, max retry is set to 25 and timeout is set to 30 minutes.
//
// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
//
@@ -316,12 +347,15 @@ func (c *Client) Enqueue(task *Task, opts ...Option) (*TaskInfo, error) {
// The argument opts specifies the behavior of task processing.
// If there are conflicting Option values the last one overrides others.
// Any options provided to NewTask can be overridden by options passed to Enqueue.
// By default, max retry is set to 25 and timeout is set to 30 minutes.
//
// If no ProcessAt or ProcessIn options are provided, the task will be pending immediately.
//
// The first argument context applies to the enqueue operation. To specify task timeout and deadline, use Timeout and Deadline option instead.
func (c *Client) EnqueueContext(ctx context.Context, task *Task, opts ...Option) (*TaskInfo, error) {
	if task == nil {
		return nil, fmt.Errorf("task cannot be nil")
	}
	if strings.TrimSpace(task.Type()) == "" {
		return nil, fmt.Errorf("task typename cannot be empty")
	}
@@ -356,17 +390,23 @@ func (c *Client) EnqueueContext(ctx context.Context, task *Task, opts ...Option)
		Deadline:  deadline.Unix(),
		Timeout:   int64(timeout.Seconds()),
		UniqueKey: uniqueKey,
		GroupKey:  opt.group,
		Retention: int64(opt.retention.Seconds()),
	}
	now := time.Now()
	var state base.TaskState
	if opt.processAt.After(now) {
		err = c.schedule(ctx, msg, opt.processAt, opt.uniqueTTL)
		state = base.TaskStateScheduled
	} else if opt.group != "" {
		// Use zero value for processAt since we don't know when the task will be aggregated and processed.
		opt.processAt = time.Time{}
		err = c.addToGroup(ctx, msg, opt.group, opt.uniqueTTL)
		state = base.TaskStateAggregating
	} else {
		opt.processAt = now
		err = c.enqueue(ctx, msg, opt.uniqueTTL)
		state = base.TaskStatePending
	}
	switch {
	case errors.Is(err, errors.ErrDuplicateTask):
@@ -379,17 +419,29 @@ func (c *Client) EnqueueContext(ctx context.Context, task *Task, opts ...Option)
	return newTaskInfo(msg, state, opt.processAt, nil), nil
}
// Ping performs a ping against the redis connection.
func (c *Client) Ping() error {
return c.broker.Ping()
}
func (c *Client) enqueue(ctx context.Context, msg *base.TaskMessage, uniqueTTL time.Duration) error {
	if uniqueTTL > 0 {
		return c.broker.EnqueueUnique(ctx, msg, uniqueTTL)
	}
	return c.broker.Enqueue(ctx, msg)
}
func (c *Client) schedule(ctx context.Context, msg *base.TaskMessage, t time.Time, uniqueTTL time.Duration) error {
	if uniqueTTL > 0 {
		ttl := time.Until(t.Add(uniqueTTL))
		return c.broker.ScheduleUnique(ctx, msg, t, ttl)
	}
	return c.broker.Schedule(ctx, msg, t)
}
func (c *Client) addToGroup(ctx context.Context, msg *base.TaskMessage, group string, uniqueTTL time.Duration) error {
if uniqueTTL > 0 {
return c.broker.AddToGroupUnique(ctx, msg, group, uniqueTTL)
}
return c.broker.AddToGroup(ctx, msg, group)
}
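The shared-connection behavior above can be exercised roughly like this (a sketch; the address and task name are placeholders):
```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
	"github.com/redis/go-redis/v9"
)

func main() {
	// The redis client is owned by the caller, not by asynq.
	redisClient := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	client := asynq.NewClientFromRedisClient(redisClient)

	if _, err := client.Enqueue(asynq.NewTask("email:send", nil)); err != nil {
		log.Fatal(err)
	}

	// Close is refused for shared connections; close the redis client yourself.
	if err := client.Close(); err != nil {
		log.Printf("expected error: %v", err)
	}
	_ = redisClient.Close()
}
```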

client_test.go

@@ -12,8 +12,9 @@ import (
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
h "github.com/hibiken/asynq/internal/testutil"
"github.com/redis/go-redis/v9"
) )
func TestClientEnqueueWithProcessAtOption(t *testing.T) {
@@ -143,11 +144,7 @@ func TestClientEnqueueWithProcessAtOption(t *testing.T) {
	}
}
func testClientEnqueue(t *testing.T, client *Client, r redis.UniversalClient) {
	task := NewTask("send_email", h.JSON(map[string]interface{}{"to": "customer@gmail.com", "from": "merchant@example.com"}))
	now := time.Now()
@@ -478,6 +475,176 @@ func TestClientEnqueue(t *testing.T) {
	}
}
func TestClientEnqueue(t *testing.T) {
r := setup(t)
client := NewClient(getRedisConnOpt(t))
defer client.Close()
testClientEnqueue(t, client, r)
}
func TestClientFromRedisClientEnqueue(t *testing.T) {
r := setup(t)
redisClient := getRedisConnOpt(t).MakeRedisClient().(redis.UniversalClient)
client := NewClientFromRedisClient(redisClient)
testClientEnqueue(t, client, r)
err := client.Close()
if err == nil {
t.Error("client.Close() should have failed because of a shared client but it didn't")
}
}
func TestClientEnqueueWithGroupOption(t *testing.T) {
r := setup(t)
client := NewClient(getRedisConnOpt(t))
defer client.Close()
task := NewTask("mytask", []byte("foo"))
now := time.Now()
tests := []struct {
desc string
task *Task
opts []Option
wantInfo *TaskInfo
wantPending map[string][]*base.TaskMessage
wantGroups map[string]map[string][]base.Z // map queue name to a set of groups
wantScheduled map[string][]base.Z
}{
{
desc: "With only Group option",
task: task,
opts: []Option{
Group("mygroup"),
},
wantInfo: &TaskInfo{
Queue: "default",
Group: "mygroup",
Type: task.Type(),
Payload: task.Payload(),
State: TaskStateAggregating,
MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: time.Time{},
},
wantPending: map[string][]*base.TaskMessage{
"default": {}, // should not be pending
},
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {
{
Message: &base.TaskMessage{
Type: task.Type(),
Payload: task.Payload(),
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
GroupKey: "mygroup",
},
Score: now.Unix(),
},
},
},
},
wantScheduled: map[string][]base.Z{
"default": {},
},
},
{
desc: "With Group and ProcessAt options",
task: task,
opts: []Option{
Group("mygroup"),
ProcessAt(now.Add(30 * time.Minute)),
},
wantInfo: &TaskInfo{
Queue: "default",
Group: "mygroup",
Type: task.Type(),
Payload: task.Payload(),
State: TaskStateScheduled,
MaxRetry: defaultMaxRetry,
Retried: 0,
LastErr: "",
LastFailedAt: time.Time{},
Timeout: defaultTimeout,
Deadline: time.Time{},
NextProcessAt: now.Add(30 * time.Minute),
},
wantPending: map[string][]*base.TaskMessage{
"default": {}, // should not be pending
},
wantGroups: map[string]map[string][]base.Z{
"default": {
"mygroup": {}, // should not be added to the group yet
},
},
wantScheduled: map[string][]base.Z{
"default": {
{
Message: &base.TaskMessage{
Type: task.Type(),
Payload: task.Payload(),
Retry: defaultMaxRetry,
Queue: "default",
Timeout: int64(defaultTimeout.Seconds()),
Deadline: noDeadline.Unix(),
GroupKey: "mygroup",
},
Score: now.Add(30 * time.Minute).Unix(),
},
},
},
},
}
for _, tc := range tests {
h.FlushDB(t, r) // clean up db before each test case.
gotInfo, err := client.Enqueue(tc.task, tc.opts...)
if err != nil {
t.Error(err)
continue
}
cmpOptions := []cmp.Option{
cmpopts.IgnoreFields(TaskInfo{}, "ID"),
cmpopts.EquateApproxTime(500 * time.Millisecond),
}
if diff := cmp.Diff(tc.wantInfo, gotInfo, cmpOptions...); diff != "" {
t.Errorf("%s;\nEnqueue(task) returned %v, want %v; (-want,+got)\n%s",
tc.desc, gotInfo, tc.wantInfo, diff)
}
for qname, want := range tc.wantPending {
got := h.GetPendingMessages(t, r, qname)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.PendingKey(qname), diff)
}
}
for qname, groups := range tc.wantGroups {
for groupKey, want := range groups {
got := h.GetGroupEntries(t, r, qname, groupKey)
if diff := cmp.Diff(want, got, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.GroupKey(qname, groupKey), diff)
}
}
}
for qname, want := range tc.wantScheduled {
gotScheduled := h.GetScheduledEntries(t, r, qname)
if diff := cmp.Diff(want, gotScheduled, h.IgnoreIDOpt, cmpopts.EquateEmpty()); diff != "" {
t.Errorf("%s;\nmismatch found in %q; (-want,+got)\n%s", tc.desc, base.ScheduledKey(qname), diff)
}
}
}
}
func TestClientEnqueueWithTaskIDOption(t *testing.T) {
	r := setup(t)
	client := NewClient(getRedisConnOpt(t))
@@ -707,6 +874,11 @@ func TestClientEnqueueError(t *testing.T) {
		task *Task
		opts []Option
	}{
{
desc: "With nil task",
task: nil,
opts: []Option{},
},
		{
			desc: "With empty queue name",
			task: task,
@@ -1001,7 +1173,7 @@ func TestClientEnqueueUniqueWithProcessAtOption(t *testing.T) {
		}
		gotTTL := r.TTL(context.Background(), base.UniqueKey(base.DefaultQueueName, tc.task.Type(), tc.task.Payload())).Val()
		wantTTL := time.Until(tc.at.Add(tc.ttl))
		if !cmp.Equal(wantTTL.Seconds(), gotTTL.Seconds(), cmpopts.EquateApprox(0, 1)) {
			t.Errorf("TTL = %v, want %v", gotTTL, wantTTL)
			continue


@@ -28,7 +28,7 @@ func GetRetryCount(ctx context.Context) (n int, ok bool) {
// GetMaxRetry extracts maximum retry from a context, if any.
//
// Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) {
	return asynqcontext.GetMaxRetry(ctx)
@@ -36,7 +36,7 @@ func GetMaxRetry(ctx context.Context) (n int, ok bool) {
// GetQueueName extracts queue name from a context, if any.
//
// Return value queue indicates which queue the task was pulled from.
func GetQueueName(ctx context.Context) (queue string, ok bool) {
	return asynqcontext.GetQueueName(ctx)
}
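These context helpers are meant to be called from inside a handler, roughly as in this sketch (the handler body is illustrative):
```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func handle(ctx context.Context, t *asynq.Task) error {
	retried, _ := asynq.GetRetryCount(ctx)
	maxRetry, _ := asynq.GetMaxRetry(ctx)
	if queue, ok := asynq.GetQueueName(ctx); ok && retried == maxRetry {
		log.Printf("final attempt for %q on queue %q", t.Type(), queue)
	}
	return nil
}

func main() {
	_ = asynq.HandlerFunc(handle) // registered with a ServeMux in a real server
}
```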

docs/assets/dash.gif (new binary file, 809 KiB)

example_test.go

@@ -5,6 +5,7 @@
package asynq_test
import (
	"context"
	"fmt"
	"log"
	"os"
@@ -113,3 +114,20 @@ func ExampleParseRedisURI() {
	// localhost:6379
	// 10
}
func ExampleResultWriter() {
// ResultWriter is only accessible in Handler.
h := func(ctx context.Context, task *asynq.Task) error {
// .. do task processing work
res := []byte("task result data")
n, err := task.ResultWriter().Write(res) // implements io.Writer
if err != nil {
return fmt.Errorf("failed to write task result: %v", err)
}
log.Printf(" %d bytes written", n)
return nil
}
_ = h
}
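Bytes written through `ResultWriter` can later be read back from `TaskInfo.Result`, e.g. via an `Inspector`, while the task is retained (a sketch under the assumption that the task was enqueued with a `Retention` option; the task ID is a placeholder):
```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer inspector.Close()

	// taskID would come from the TaskInfo returned at enqueue time (placeholder here).
	taskID := "hypothetical-task-id"
	info, err := inspector.GetTaskInfo("default", taskID)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("result bytes: %s", info.Result)
}
```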

forwarder.go

@@ -56,13 +56,15 @@ func (f *forwarder) start(wg *sync.WaitGroup) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		timer := time.NewTimer(f.avgInterval)
		for {
			select {
			case <-f.done:
				f.logger.Debug("Forwarder done")
				return
			case <-timer.C:
				f.exec()
				timer.Reset(f.avgInterval)
			}
		}
	}()
@@ -70,6 +72,6 @@ func (f *forwarder) start(wg *sync.WaitGroup) {
func (f *forwarder) exec() {
	if err := f.broker.ForwardIfReady(f.queues...); err != nil {
		f.logger.Errorf("Failed to forward scheduled tasks: %v", err)
	}
}
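The change above swaps a per-iteration `time.After` for a single reusable `time.Timer`, avoiding a new timer allocation on every loop. The general pattern, sketched independently of asynq:
```go
package main

import "time"

// tick shows the reusable-timer form of the loop: one Timer allocated once and
// Reset after each fire, instead of a fresh time.After channel per iteration.
func tick(done <-chan struct{}, interval time.Duration, work func()) {
	timer := time.NewTimer(interval)
	defer timer.Stop()
	for {
		select {
		case <-done:
			return
		case <-timer.C:
			work()
			timer.Reset(interval) // safe: the timer has already fired and drained
		}
	}
}

func main() {
	done := make(chan struct{})
	go tick(done, 100*time.Millisecond, func() {})
	time.Sleep(350 * time.Millisecond)
	close(done)
}
```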

forwarder_test.go

@@ -10,9 +10,9 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
) )
func TestForwarder(t *testing.T) {

go.mod

@@ -1,19 +1,20 @@
module github.com/hibiken/asynq
go 1.22
require (
	github.com/google/go-cmp v0.6.0
	github.com/google/uuid v1.6.0
	github.com/redis/go-redis/v9 v9.7.0
	github.com/robfig/cron/v3 v3.0.1
	github.com/spf13/cast v1.7.0
	go.uber.org/goleak v1.3.0
	golang.org/x/sys v0.27.0
	golang.org/x/time v0.8.0
	google.golang.org/protobuf v1.35.2
)
require (
	github.com/cespare/xxhash/v2 v2.2.0 // indirect
	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
)

go.sum

@@ -1,177 +1,42 @@
cloud.google.com/go v0.26.0 h1:e0WKqKTd5BnrG8aKH3J3h+QvEIQtSUcf2n5UZ5ZgLtQ= github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/census-instrumentation/opencensus-proto v0.2.1 h1:glEXhBS5PSLLv4IXzLA5yPRVX4bilULVyxxbrfOtDAk= github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4 h1:ta993UF76GwbvJcIo3Y68y/M3WxlpEHPWIGDkJYwzJI=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc= github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473 h1:4cmBvAEBNJaGARUEs3/suWRyfyBfhf7I60WBZq+bv2w= github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/envoyproxy/protoc-gen-validate v0.1.0 h1:EQciDnbrYxy13PgWoY8AqoxGiPrpgBZ1R8UNe3ddc+A= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/go-redis/redis/v8 v8.11.2 h1:WqlSpAwz8mxDSMCvbyz1Mkiqe0LE5OY4j3lgkvu1Ts0= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/go-redis/redis/v8 v8.11.2/go.mod h1:DLomh7y2e3ggQXQLd1YgmvIfecPJoFl7WU5SOQ/r06M= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1 h1:G5FRp8JnTd7RQH5kemVNlMeyXQAztQ3mOWV95KxsXH8=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1 h1:VkoXIwSboBpnk99O/KFauAEILuNHv5DVFKZMBN/gUgw=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.15.0 h1:1V1NfVQR87RtWAgp1lv9JZJ5Jap+XFGKPi00andXGi4=
github.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.5 h1:7n6FEkpFmfCoo2t+YYqXH0evK+a9ICQz0xcAy9dYcaQ=
github.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM= github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs= github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro= github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng= github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4= github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
github.com/yuin/goldmark v1.2.1 h1:ruQGxdhGHe7FWOJPT0mKs5+pD2Xs1Bm/kdGlHO04FmM= go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4= golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI= golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI= google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4 h1:c2HOrn5iMezYjSlGPncknSEr/8x5LELb/ilJbXi9DEA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3 h1:XQyxROzUlZH+WIQwySDgnISgOivlhjIEwaQaJEJrrN0=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.3.0 h1:RM4zey1++hCTbCVQfnWeKs9/IEsaBLA8vTkd0WVtmH4=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324VNlrF+0wMqRXT4St8ck=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091 h1:DMyOG0U+gKfu8JZzg2UQe9MeaC1X+xQWlAKcRnjxjCw=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e h1:4nW4NLDYnU28ojHaHO8OVxFHk/aQ33U01a9cjED+pzE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 h1:+kGHl1aib/qcwaRi1CbqBZ1rk19r85MNUf8HaBghugY=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0 h1:rRYRFMVgRv6E0D70Skyfsr28tDXIuuPZyWGMPdMcnXg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc h1:/hemPrYIhOhy8zYrNj+069zDB68us2sMGsfkFJO0iZs=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@@ -12,6 +12,7 @@ import (
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/timeutil"
) )
// heartbeater is responsible for writing process info to redis periodically to // heartbeater is responsible for writing process info to redis periodically to
@@ -19,6 +20,7 @@ import (
type heartbeater struct { type heartbeater struct {
logger *log.Logger logger *log.Logger
broker base.Broker broker base.Broker
clock timeutil.Clock
// channel to communicate back to the long running "heartbeater" goroutine. // channel to communicate back to the long running "heartbeater" goroutine.
done chan struct{} done chan struct{}
@@ -41,7 +43,7 @@ type heartbeater struct {
workers map[string]*workerInfo workers map[string]*workerInfo
// state is shared with other goroutine but is concurrency safe. // state is shared with other goroutine but is concurrency safe.
state *base.ServerState state *serverState
// channels to receive updates on active workers. // channels to receive updates on active workers.
starting <-chan *workerInfo starting <-chan *workerInfo
@@ -55,7 +57,7 @@ type heartbeaterParams struct {
concurrency int concurrency int
queues map[string]int queues map[string]int
strictPriority bool strictPriority bool
state *base.ServerState state *serverState
starting <-chan *workerInfo starting <-chan *workerInfo
finished <-chan *base.TaskMessage finished <-chan *base.TaskMessage
} }
@@ -69,6 +71,7 @@ func newHeartbeater(params heartbeaterParams) *heartbeater {
return &heartbeater{ return &heartbeater{
logger: params.logger, logger: params.logger,
broker: params.broker, broker: params.broker,
clock: timeutil.NewRealClock(),
done: make(chan struct{}), done: make(chan struct{}),
interval: params.interval, interval: params.interval,
@@ -100,6 +103,8 @@ type workerInfo struct {
started time.Time started time.Time
// deadline the worker has to finish processing the task by. // deadline the worker has to finish processing the task by.
deadline time.Time deadline time.Time
// lease the worker holds for the task.
lease *base.Lease
} }
func (h *heartbeater) start(wg *sync.WaitGroup) { func (h *heartbeater) start(wg *sync.WaitGroup) {
@@ -107,7 +112,7 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
go func() { go func() {
defer wg.Done() defer wg.Done()
h.started = time.Now() h.started = h.clock.Now()
h.beat() h.beat()
@@ -115,7 +120,9 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
for { for {
select { select {
case <-h.done: case <-h.done:
h.broker.ClearServerState(h.host, h.pid, h.serverID) if err := h.broker.ClearServerState(h.host, h.pid, h.serverID); err != nil {
h.logger.Errorf("Failed to clear server state: %v", err)
}
h.logger.Debug("Heartbeater done") h.logger.Debug("Heartbeater done")
timer.Stop() timer.Stop()
return return
@@ -134,7 +141,12 @@ func (h *heartbeater) start(wg *sync.WaitGroup) {
}() }()
} }
// beat extends lease for workers and writes server/worker info to redis.
func (h *heartbeater) beat() { func (h *heartbeater) beat() {
h.state.mu.Lock()
srvStatus := h.state.value.String()
h.state.mu.Unlock()
info := base.ServerInfo{ info := base.ServerInfo{
Host: h.host, Host: h.host,
PID: h.pid, PID: h.pid,
@@ -142,12 +154,13 @@ func (h *heartbeater) beat() {
Concurrency: h.concurrency, Concurrency: h.concurrency,
Queues: h.queues, Queues: h.queues,
StrictPriority: h.strictPriority, StrictPriority: h.strictPriority,
Status: h.state.String(), Status: srvStatus,
Started: h.started, Started: h.started,
ActiveWorkerCount: len(h.workers), ActiveWorkerCount: len(h.workers),
} }
var ws []*base.WorkerInfo var ws []*base.WorkerInfo
idsByQueue := make(map[string][]string)
for id, w := range h.workers { for id, w := range h.workers {
ws = append(ws, &base.WorkerInfo{ ws = append(ws, &base.WorkerInfo{
Host: h.host, Host: h.host,
@@ -160,11 +173,30 @@ func (h *heartbeater) beat() {
Started: w.started, Started: w.started,
Deadline: w.deadline, Deadline: w.deadline,
}) })
// Check lease before adding to the set to make sure not to extend the lease if the lease is already expired.
if w.lease.IsValid() {
idsByQueue[w.msg.Queue] = append(idsByQueue[w.msg.Queue], id)
} else {
w.lease.NotifyExpiration() // notify processor if the lease is expired
}
} }
// Note: Set TTL to be long enough so that it won't expire before we write again // Note: Set TTL to be long enough so that it won't expire before we write again
// and short enough to expire quickly once the process is shut down or killed. // and short enough to expire quickly once the process is shut down or killed.
if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil { if err := h.broker.WriteServerState(&info, ws, h.interval*2); err != nil {
h.logger.Errorf("could not write server state data: %v", err) h.logger.Errorf("Failed to write server state data: %v", err)
}
for qname, ids := range idsByQueue {
expirationTime, err := h.broker.ExtendLease(qname, ids...)
if err != nil {
h.logger.Errorf("Failed to extend lease for tasks %v: %v", ids, err)
continue
}
for _, id := range ids {
if l := h.workers[id].lease; !l.Reset(expirationTime) {
h.logger.Warnf("Lease reset failed for %s; lease deadline: %v", id, l.Deadline())
}
}
} }
} }
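A rough sketch of the lease behavior the heartbeater depends on. The real type is the internal base.Lease; the implementation below is an assumption for illustration only. The key property: extending succeeds only while the lease is still valid, and an expired lease notifies the waiting processor instead.

package main

import (
    "fmt"
    "sync"
    "time"
)

// lease is a simplified stand-in for asynq's internal base.Lease.
type lease struct {
    mu       sync.Mutex
    deadline time.Time
    once     sync.Once
    expired  chan struct{} // closed when the lease expires
}

func newLease(deadline time.Time) *lease {
    return &lease{deadline: deadline, expired: make(chan struct{})}
}

// IsValid reports whether the deadline is still in the future.
func (l *lease) IsValid() bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    return time.Now().Before(l.deadline)
}

// Reset extends the deadline; it fails once the lease has already expired.
func (l *lease) Reset(deadline time.Time) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    if !time.Now().Before(l.deadline) {
        return false
    }
    l.deadline = deadline
    return true
}

// NotifyExpiration closes the expiration channel exactly once.
func (l *lease) NotifyExpiration() { l.once.Do(func() { close(l.expired) }) }

func main() {
    l := newLease(time.Now().Add(100 * time.Millisecond))
    fmt.Println("extended:", l.Reset(time.Now().Add(time.Second))) // true while valid
    l.NotifyExpiration()
    <-l.expired // returns immediately once notified
    fmt.Println("expiration notification received")
}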


@@ -5,31 +5,154 @@
package asynq package asynq
import ( import (
"context"
"sync" "sync"
"testing" "testing"
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker" "github.com/hibiken/asynq/internal/testbroker"
h "github.com/hibiken/asynq/internal/testutil"
"github.com/hibiken/asynq/internal/timeutil"
) )
// The test goes through three phases.
//
// Phase 1: Simulate Server startup; simulate starting the tasks listed in startedWorkers
// Phase 2: Simulate finishing the tasks listed in finishedTasks
// Phase 3: Simulate Server shutdown
func TestHeartbeater(t *testing.T) { func TestHeartbeater(t *testing.T) {
r := setup(t) r := setup(t)
defer r.Close() defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
now := time.Now()
const elapsedTime = 10 * time.Second // simulated time elapsed between phase1 and phase2
clock := timeutil.NewSimulatedClock(time.Time{}) // time will be set in each test
t1 := h.NewTaskMessageWithQueue("task1", nil, "default")
t2 := h.NewTaskMessageWithQueue("task2", nil, "default")
t3 := h.NewTaskMessageWithQueue("task3", nil, "default")
t4 := h.NewTaskMessageWithQueue("task4", nil, "custom")
t5 := h.NewTaskMessageWithQueue("task5", nil, "custom")
t6 := h.NewTaskMessageWithQueue("task6", nil, "default")
// Note: intentionally set to a time earlier than now.Add(rdb.LeaseDuration) to verify that lease extension works.
lease1 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease2 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease3 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease4 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease5 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
lease6 := h.NewLeaseWithClock(now.Add(10*time.Second), clock)
tests := []struct { tests := []struct {
interval time.Duration desc string
// Interval between heartbeats.
interval time.Duration
// Server info.
host string host string
pid int pid int
queues map[string]int queues map[string]int
concurrency int concurrency int
active map[string][]*base.TaskMessage // initial active set state
lease map[string][]base.Z // initial lease set state
wantLease1 map[string][]base.Z // expected lease set state after starting all startedWorkers
wantLease2 map[string][]base.Z // expected lease set state after finishing all finishedTasks
startedWorkers []*workerInfo // workerInfo to send via the started channel
finishedTasks []*base.TaskMessage // tasks to send via the finished channel
startTime time.Time // simulated start time
elapsedTime time.Duration // simulated time elapsed between starting and finishing processing tasks
}{ }{
{2 * time.Second, "localhost", 45678, map[string]int{"default": 1}, 10}, {
desc: "With single queue",
interval: 2 * time.Second,
host: "localhost",
pid: 45678,
queues: map[string]int{"default": 1},
concurrency: 10,
active: map[string][]*base.TaskMessage{
"default": {t1, t2, t3},
},
lease: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(10 * time.Second).Unix()},
{Message: t2, Score: now.Add(10 * time.Second).Unix()},
{Message: t3, Score: now.Add(10 * time.Second).Unix()},
},
},
startedWorkers: []*workerInfo{
{msg: t1, started: now, deadline: now.Add(2 * time.Minute), lease: lease1},
{msg: t2, started: now, deadline: now.Add(2 * time.Minute), lease: lease2},
{msg: t3, started: now, deadline: now.Add(2 * time.Minute), lease: lease3},
},
finishedTasks: []*base.TaskMessage{t1, t2},
wantLease1: map[string][]base.Z{
"default": {
{Message: t1, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t2, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t3, Score: now.Add(rdb.LeaseDuration).Unix()},
},
},
wantLease2: map[string][]base.Z{
"default": {
{Message: t3, Score: now.Add(elapsedTime).Add(rdb.LeaseDuration).Unix()},
},
},
startTime: now,
elapsedTime: elapsedTime,
},
{
desc: "With multiple queues",
interval: 2 * time.Second,
host: "localhost",
pid: 45678,
queues: map[string]int{"default": 1, "custom": 2},
concurrency: 10,
active: map[string][]*base.TaskMessage{
"default": {t6},
"custom": {t4, t5},
},
lease: map[string][]base.Z{
"default": {
{Message: t6, Score: now.Add(10 * time.Second).Unix()},
},
"custom": {
{Message: t4, Score: now.Add(10 * time.Second).Unix()},
{Message: t5, Score: now.Add(10 * time.Second).Unix()},
},
},
startedWorkers: []*workerInfo{
{msg: t6, started: now, deadline: now.Add(2 * time.Minute), lease: lease6},
{msg: t4, started: now, deadline: now.Add(2 * time.Minute), lease: lease4},
{msg: t5, started: now, deadline: now.Add(2 * time.Minute), lease: lease5},
},
finishedTasks: []*base.TaskMessage{t6, t5},
wantLease1: map[string][]base.Z{
"default": {
{Message: t6, Score: now.Add(rdb.LeaseDuration).Unix()},
},
"custom": {
{Message: t4, Score: now.Add(rdb.LeaseDuration).Unix()},
{Message: t5, Score: now.Add(rdb.LeaseDuration).Unix()},
},
},
wantLease2: map[string][]base.Z{
"default": {},
"custom": {
{Message: t4, Score: now.Add(elapsedTime).Add(rdb.LeaseDuration).Unix()},
},
},
startTime: now,
elapsedTime: elapsedTime,
},
} }
timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond) timeCmpOpt := cmpopts.EquateApproxTime(10 * time.Millisecond)
@@ -37,8 +160,15 @@ func TestHeartbeater(t *testing.T) {
ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID") ignoreFieldOpt := cmpopts.IgnoreFields(base.ServerInfo{}, "ServerID")
for _, tc := range tests { for _, tc := range tests {
h.FlushDB(t, r) h.FlushDB(t, r)
h.SeedAllActiveQueues(t, r, tc.active)
h.SeedAllLease(t, r, tc.lease)
state := base.NewServerState() clock.SetTime(tc.startTime)
rdbClient.SetClock(clock)
srvState := &serverState{}
startingCh := make(chan *workerInfo)
finishedCh := make(chan *base.TaskMessage)
hb := newHeartbeater(heartbeaterParams{ hb := newHeartbeater(heartbeaterParams{
logger: testLogger, logger: testLogger,
broker: rdbClient, broker: rdbClient,
@@ -46,72 +176,134 @@ func TestHeartbeater(t *testing.T) {
concurrency: tc.concurrency, concurrency: tc.concurrency,
queues: tc.queues, queues: tc.queues,
strictPriority: false, strictPriority: false,
state: state, state: srvState,
starting: make(chan *workerInfo), starting: startingCh,
finished: make(chan *base.TaskMessage), finished: finishedCh,
}) })
hb.clock = clock
// Change host and pid fields for testing purpose. // Change host and pid fields for testing purpose.
hb.host = tc.host hb.host = tc.host
hb.pid = tc.pid hb.pid = tc.pid
state.Set(base.StateActive) //===================
// Start Phase1
//===================
srvState.mu.Lock()
srvState.value = srvStateActive // simulating Server.Start
srvState.mu.Unlock()
var wg sync.WaitGroup var wg sync.WaitGroup
hb.start(&wg) hb.start(&wg)
want := &base.ServerInfo{ // Simulate processor starting to work on tasks.
Host: tc.host, for _, w := range tc.startedWorkers {
PID: tc.pid, startingCh <- w
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: time.Now(),
Status: "active",
} }
// allow for heartbeater to write to redis // Wait for heartbeater to write to redis
time.Sleep(tc.interval) time.Sleep(tc.interval * 2)
ss, err := rdbClient.ListServers() ss, err := rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read server info from redis: %v", err) t.Errorf("%s: could not read server info from redis: %v", tc.desc, err)
hb.shutdown() hb.shutdown()
continue continue
} }
if len(ss) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListServers returned %d process info, want 1", len(ss)) t.Errorf("%s: (*RDB).ListServers returned %d server info, want 1", tc.desc, len(ss))
hb.shutdown() hb.shutdown()
continue continue
} }
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" { wantInfo := &base.ServerInfo{
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff) Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: now,
Status: "active",
ActiveWorkerCount: len(tc.startedWorkers),
}
if diff := cmp.Diff(wantInfo, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("%s: redis stored server status %+v, want %+v; (-want, +got)\n%s", tc.desc, ss[0], wantInfo, diff)
hb.shutdown() hb.shutdown()
continue continue
} }
// status change for qname, wantLease := range tc.wantLease1 {
state.Set(base.StateClosed) gotLease := h.GetLeaseEntries(t, r, qname)
if diff := cmp.Diff(wantLease, gotLease, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q: (-want,+got):\n%s", tc.desc, base.LeaseKey(qname), diff)
}
}
// allow for heartbeater to write to redis for _, w := range tc.startedWorkers {
if want := now.Add(rdb.LeaseDuration); w.lease.Deadline() != want {
t.Errorf("%s: lease deadline for %v is set to %v, want %v", tc.desc, w.msg, w.lease.Deadline(), want)
}
}
//===================
// Start Phase2
//===================
clock.AdvanceTime(tc.elapsedTime)
// Simulate processor finished processing tasks.
for _, msg := range tc.finishedTasks {
if err := rdbClient.Done(context.Background(), msg); err != nil {
t.Fatalf("RDB.Done failed: %v", err)
}
finishedCh <- msg
}
// Wait for heartbeater to write to redis
time.Sleep(tc.interval * 2) time.Sleep(tc.interval * 2)
want.Status = "closed" for qname, wantLease := range tc.wantLease2 {
gotLease := h.GetLeaseEntries(t, r, qname)
if diff := cmp.Diff(wantLease, gotLease, h.SortZSetEntryOpt); diff != "" {
t.Errorf("%s: mismatch found in %q: (-want,+got):\n%s", tc.desc, base.LeaseKey(qname), diff)
}
}
//===================
// Start Phase3
//===================
// Server state change; simulating Server.Shutdown
srvState.mu.Lock()
srvState.value = srvStateClosed
srvState.mu.Unlock()
// Wait for heartbeater to write to redis
time.Sleep(tc.interval * 2)
wantInfo = &base.ServerInfo{
Host: tc.host,
PID: tc.pid,
Queues: tc.queues,
Concurrency: tc.concurrency,
Started: now,
Status: "closed",
ActiveWorkerCount: len(tc.startedWorkers) - len(tc.finishedTasks),
}
ss, err = rdbClient.ListServers() ss, err = rdbClient.ListServers()
if err != nil { if err != nil {
t.Errorf("could not read process status from redis: %v", err) t.Errorf("%s: could not read server status from redis: %v", tc.desc, err)
hb.shutdown() hb.shutdown()
continue continue
} }
if len(ss) != 1 { if len(ss) != 1 {
t.Errorf("(*RDB).ListProcesses returned %d process info, want 1", len(ss)) t.Errorf("%s: (*RDB).ListServers returned %d server info, want 1", tc.desc, len(ss))
hb.shutdown() hb.shutdown()
continue continue
} }
if diff := cmp.Diff(want, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" { if diff := cmp.Diff(wantInfo, ss[0], timeCmpOpt, ignoreOpt, ignoreFieldOpt); diff != "" {
t.Errorf("redis stored process status %+v, want %+v; (-want, +got)\n%s", ss[0], want, diff) t.Errorf("%s: redis stored process status %+v, want %+v; (-want, +got)\n%s", tc.desc, ss[0], wantInfo, diff)
hb.shutdown() hb.shutdown()
continue continue
} }
@@ -131,8 +323,7 @@ func TestHeartbeaterWithRedisDown(t *testing.T) {
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
defer r.Close() defer r.Close()
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
state := base.NewServerState() state := &serverState{value: srvStateActive}
state.Set(base.StateActive)
hb := newHeartbeater(heartbeaterParams{ hb := newHeartbeater(heartbeaterParams{
logger: testLogger, logger: testLogger,
broker: testBroker, broker: testBroker,
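The test above drives time by hand through a simulated clock so that lease scores can be asserted deterministically. A minimal sketch of such a clock, modeled on how the test calls SetTime and AdvanceTime (an approximation, not the internal timeutil package verbatim):

package main

import (
    "fmt"
    "sync"
    "time"
)

// Clock abstracts time.Now so production code can run on fake time in tests.
type Clock interface {
    Now() time.Time
}

// SimulatedClock is a hand-advanced clock, in the spirit of
// asynq's internal timeutil.SimulatedClock.
type SimulatedClock struct {
    mu sync.Mutex
    t  time.Time
}

var _ Clock = (*SimulatedClock)(nil)

func NewSimulatedClock(t time.Time) *SimulatedClock { return &SimulatedClock{t: t} }

func (c *SimulatedClock) Now() time.Time {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.t
}

func (c *SimulatedClock) SetTime(t time.Time) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.t = t
}

func (c *SimulatedClock) AdvanceTime(d time.Duration) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.t = c.t.Add(d)
}

func main() {
    start := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
    clock := NewSimulatedClock(start)
    clock.AdvanceTime(10 * time.Second) // mirrors the elapsedTime phase in the test
    fmt.Println(clock.Now().Sub(start)) // 10s
}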


@@ -10,16 +10,19 @@ import (
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors" "github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
) )
// Inspector is a client interface to inspect and mutate the state of // Inspector is a client interface to inspect and mutate the state of
// queues and tasks. // queues and tasks.
type Inspector struct { type Inspector struct {
rdb *rdb.RDB rdb *rdb.RDB
// When an Inspector has been created with an existing Redis connection, we do
// not want to close it.
sharedConnection bool
} }
// New returns a new instance of Inspector. // New returns a new instance of Inspector.
@@ -28,13 +31,25 @@ func NewInspector(r RedisConnOpt) *Inspector {
if !ok { if !ok {
panic(fmt.Sprintf("inspeq: unsupported RedisConnOpt type %T", r)) panic(fmt.Sprintf("inspeq: unsupported RedisConnOpt type %T", r))
} }
inspector := NewInspectorFromRedisClient(c)
inspector.sharedConnection = false
return inspector
}
// NewInspectorFromRedisClient returns a new instance of Inspector given a redis.UniversalClient
// Warning: The underlying redis connection pool will not be closed by Asynq, you are responsible for closing it.
func NewInspectorFromRedisClient(c redis.UniversalClient) *Inspector {
return &Inspector{ return &Inspector{
rdb: rdb.NewRDB(c), rdb: rdb.NewRDB(c),
sharedConnection: true,
} }
} }
// Close closes the connection with redis. // Close closes the connection with redis.
func (i *Inspector) Close() error { func (i *Inspector) Close() error {
if i.sharedConnection {
return fmt.Errorf("redis connection is shared so the Inspector can't be closed through asynq")
}
return i.rdb.Close() return i.rdb.Close()
} }
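A usage sketch for the new constructor: when an Inspector wraps an existing go-redis client, the caller keeps ownership of the connection, and Close deliberately refuses to close it.

package main

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
    "github.com/redis/go-redis/v9"
)

func main() {
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    defer rdb.Close() // the caller owns the shared connection

    inspector := asynq.NewInspectorFromRedisClient(rdb)
    if err := inspector.Close(); err != nil {
        fmt.Println(err) // refused: the connection is shared
    }

    queues, err := inspector.Queues()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("queues:", queues)
}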
@@ -43,7 +58,32 @@ func (i *Inspector) Queues() ([]string, error) {
return i.rdb.AllQueues() return i.rdb.AllQueues()
} }
// QueueInfo represents a state of queues at a certain time. // Groups returns a list of all groups within the given queue.
func (i *Inspector) Groups(queue string) ([]*GroupInfo, error) {
stats, err := i.rdb.GroupStats(queue)
if err != nil {
return nil, err
}
var res []*GroupInfo
for _, s := range stats {
res = append(res, &GroupInfo{
Group: s.Group,
Size: s.Size,
})
}
return res, nil
}
// GroupInfo represents a state of a group at a certain time.
type GroupInfo struct {
// Name of the group.
Group string
// Size is the total number of tasks in the group.
Size int
}
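A short usage sketch for the new Groups accessor (the queue name "default" is an assumption):

package main

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
    defer inspector.Close()

    groups, err := inspector.Groups("default")
    if err != nil {
        log.Fatal(err)
    }
    for _, g := range groups {
        fmt.Printf("group=%s size=%d\n", g.Group, g.Size)
    }
}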
// QueueInfo represents a state of a queue at a certain time.
type QueueInfo struct { type QueueInfo struct {
// Name of the queue. // Name of the queue.
Queue string Queue string
@@ -56,9 +96,12 @@ type QueueInfo struct {
Latency time.Duration Latency time.Duration
// Size is the total number of tasks in the queue. // Size is the total number of tasks in the queue.
// The value is the sum of Pending, Active, Scheduled, Retry, and Archived. // The value is the sum of Pending, Active, Scheduled, Retry, Aggregating and Archived.
Size int Size int
// Groups is the total number of groups in the queue.
Groups int
// Number of pending tasks. // Number of pending tasks.
Pending int Pending int
// Number of active tasks. // Number of active tasks.
@@ -71,13 +114,20 @@ type QueueInfo struct {
Archived int Archived int
// Number of stored completed tasks. // Number of stored completed tasks.
Completed int Completed int
// Number of aggregating tasks.
Aggregating int
// Total number of tasks being processed during the given date. // Total number of tasks being processed within the given date (counter resets daily).
// The number includes both succeeded and failed tasks. // The number includes both succeeded and failed tasks.
Processed int Processed int
// Total number of tasks failed to be processed during the given date. // Total number of tasks failed to be processed within the given date (counter resets daily).
Failed int Failed int
// Total number of tasks processed (cumulative).
ProcessedTotal int
// Total number of tasks failed (cumulative).
FailedTotal int
// Paused indicates whether the queue is paused. // Paused indicates whether the queue is paused.
// If true, tasks in the queue will not be processed. // If true, tasks in the queue will not be processed.
Paused bool Paused bool
@@ -87,29 +137,33 @@ type QueueInfo struct {
} }
// GetQueueInfo returns current information of the given queue. // GetQueueInfo returns current information of the given queue.
func (i *Inspector) GetQueueInfo(qname string) (*QueueInfo, error) { func (i *Inspector) GetQueueInfo(queue string) (*QueueInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, err return nil, err
} }
stats, err := i.rdb.CurrentStats(qname) stats, err := i.rdb.CurrentStats(queue)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return &QueueInfo{ return &QueueInfo{
Queue: stats.Queue, Queue: stats.Queue,
MemoryUsage: stats.MemoryUsage, MemoryUsage: stats.MemoryUsage,
Latency: stats.Latency, Latency: stats.Latency,
Size: stats.Size, Size: stats.Size,
Pending: stats.Pending, Groups: stats.Groups,
Active: stats.Active, Pending: stats.Pending,
Scheduled: stats.Scheduled, Active: stats.Active,
Retry: stats.Retry, Scheduled: stats.Scheduled,
Archived: stats.Archived, Retry: stats.Retry,
Completed: stats.Completed, Archived: stats.Archived,
Processed: stats.Processed, Completed: stats.Completed,
Failed: stats.Failed, Aggregating: stats.Aggregating,
Paused: stats.Paused, Processed: stats.Processed,
Timestamp: stats.Timestamp, Failed: stats.Failed,
ProcessedTotal: stats.ProcessedTotal,
FailedTotal: stats.FailedTotal,
Paused: stats.Paused,
Timestamp: stats.Timestamp,
}, nil }, nil
} }
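The expanded QueueInfo separates the daily counters from the new cumulative totals; a sketch that prints both (the queue name is an assumption):

package main

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
    defer inspector.Close()

    info, err := inspector.GetQueueInfo("default")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("size=%d (groups=%d aggregating=%d)\n", info.Size, info.Groups, info.Aggregating)
    fmt.Printf("today:    processed=%d failed=%d\n", info.Processed, info.Failed)
    fmt.Printf("all time: processed=%d failed=%d\n", info.ProcessedTotal, info.FailedTotal)
}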
@@ -127,11 +181,11 @@ type DailyStats struct {
} }
// History returns a list of stats from the last n days. // History returns a list of stats from the last n days.
func (i *Inspector) History(qname string, n int) ([]*DailyStats, error) { func (i *Inspector) History(queue string, n int) ([]*DailyStats, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, err return nil, err
} }
stats, err := i.rdb.HistoricalStats(qname, n) stats, err := i.rdb.HistoricalStats(queue, n)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -168,13 +222,13 @@ var (
// If the specified queue does not exist, DeleteQueue returns ErrQueueNotFound. // If the specified queue does not exist, DeleteQueue returns ErrQueueNotFound.
// If force is set to false and the specified queue is not empty, DeleteQueue // If force is set to false and the specified queue is not empty, DeleteQueue
// returns ErrQueueNotEmpty. // returns ErrQueueNotEmpty.
func (i *Inspector) DeleteQueue(qname string, force bool) error { func (i *Inspector) DeleteQueue(queue string, force bool) error {
err := i.rdb.RemoveQueue(qname, force) err := i.rdb.RemoveQueue(queue, force)
if errors.IsQueueNotFound(err) { if errors.IsQueueNotFound(err) {
return fmt.Errorf("%w: queue=%q", ErrQueueNotFound, qname) return fmt.Errorf("%w: queue=%q", ErrQueueNotFound, queue)
} }
if errors.IsQueueNotEmpty(err) { if errors.IsQueueNotEmpty(err) {
return fmt.Errorf("%w: queue=%q", ErrQueueNotEmpty, qname) return fmt.Errorf("%w: queue=%q", ErrQueueNotEmpty, queue)
} }
return err return err
} }
@@ -183,8 +237,8 @@ func (i *Inspector) DeleteQueue(qname string, force bool) error {
// //
// Returns an error wrapping ErrQueueNotFound if a queue with the given name doesn't exist. // Returns an error wrapping ErrQueueNotFound if a queue with the given name doesn't exist.
// Returns an error wrapping ErrTaskNotFound if a task with the given id doesn't exist in the queue. // Returns an error wrapping ErrTaskNotFound if a task with the given id doesn't exist in the queue.
func (i *Inspector) GetTaskInfo(qname, id string) (*TaskInfo, error) { func (i *Inspector) GetTaskInfo(queue, id string) (*TaskInfo, error) {
info, err := i.rdb.GetTaskInfo(qname, id) info, err := i.rdb.GetTaskInfo(queue, id)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -260,13 +314,13 @@ func Page(n int) ListOption {
// ListPendingTasks retrieves pending tasks from the specified queue. // ListPendingTasks retrieves pending tasks from the specified queue.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListPendingTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListPending(qname, pgn) infos, err := i.rdb.ListPending(queue, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -288,13 +342,53 @@ func (i *Inspector) ListPendingTasks(qname string, opts ...ListOption) ([]*TaskI
// ListActiveTasks retrieves active tasks from the specified queue. // ListActiveTasks retrieves active tasks from the specified queue.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListActiveTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListActive(qname, pgn) infos, err := i.rdb.ListActive(queue, pgn)
switch {
case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
case err != nil:
return nil, fmt.Errorf("asynq: %v", err)
}
expired, err := i.rdb.ListLeaseExpired(time.Now(), queue)
if err != nil {
return nil, fmt.Errorf("asynq: %v", err)
}
expiredSet := make(map[string]struct{}) // set of expired message IDs
for _, msg := range expired {
expiredSet[msg.ID] = struct{}{}
}
var tasks []*TaskInfo
for _, i := range infos {
t := newTaskInfo(
i.Message,
i.State,
i.NextProcessAt,
i.Result,
)
if _, ok := expiredSet[i.Message.ID]; ok {
t.IsOrphaned = true
}
tasks = append(tasks, t)
}
return tasks, nil
}
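Orphaned tasks are entries in the active set whose lease has already expired, typically left behind by a crashed worker. A sketch that surfaces them via the new IsOrphaned flag (the queue name is an assumption):

package main

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
    defer inspector.Close()

    tasks, err := inspector.ListActiveTasks("default")
    if err != nil {
        log.Fatal(err)
    }
    for _, t := range tasks {
        if t.IsOrphaned {
            fmt.Printf("orphaned: id=%s type=%s\n", t.ID, t.Type)
        }
    }
}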
// ListAggregatingTasks retrieves aggregating tasks from the specified group.
//
// By default, it retrieves the first 30 tasks.
func (i *Inspector) ListAggregatingTasks(queue, group string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err)
}
opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListAggregating(queue, group, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -310,20 +404,20 @@ func (i *Inspector) ListActiveTasks(qname string, opts ...ListOption) ([]*TaskIn
i.Result, i.Result,
)) ))
} }
return tasks, err return tasks, nil
} }
// ListScheduledTasks retrieves scheduled tasks from the specified queue. // ListScheduledTasks retrieves scheduled tasks from the specified queue.
// Tasks are sorted by NextProcessAt in ascending order. // Tasks are sorted by NextProcessAt in ascending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListScheduledTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListScheduled(qname, pgn) infos, err := i.rdb.ListScheduled(queue, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -346,13 +440,13 @@ func (i *Inspector) ListScheduledTasks(qname string, opts ...ListOption) ([]*Tas
// Tasks are sorted by NextProcessAt in ascending order. // Tasks are sorted by NextProcessAt in ascending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListRetryTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListRetry(qname, pgn) infos, err := i.rdb.ListRetry(queue, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -375,13 +469,13 @@ func (i *Inspector) ListRetryTasks(qname string, opts ...ListOption) ([]*TaskInf
// Tasks are sorted by LastFailedAt in descending order. // Tasks are sorted by LastFailedAt in descending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListArchivedTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListArchived(qname, pgn) infos, err := i.rdb.ListArchived(queue, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -404,13 +498,13 @@ func (i *Inspector) ListArchivedTasks(qname string, opts ...ListOption) ([]*Task
// Tasks are sorted by expiration time (i.e. CompletedAt + Retention) in descending order. // Tasks are sorted by expiration time (i.e. CompletedAt + Retention) in descending order.
// //
// By default, it retrieves the first 30 tasks. // By default, it retrieves the first 30 tasks.
func (i *Inspector) ListCompletedTasks(qname string, opts ...ListOption) ([]*TaskInfo, error) { func (i *Inspector) ListCompletedTasks(queue string, opts ...ListOption) ([]*TaskInfo, error) {
if err := base.ValidateQueueName(qname); err != nil { if err := base.ValidateQueueName(queue); err != nil {
return nil, fmt.Errorf("asynq: %v", err) return nil, fmt.Errorf("asynq: %v", err)
} }
opt := composeListOptions(opts...) opt := composeListOptions(opts...)
pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1} pgn := rdb.Pagination{Size: opt.pageSize, Page: opt.pageNum - 1}
infos, err := i.rdb.ListCompleted(qname, pgn) infos, err := i.rdb.ListCompleted(queue, pgn)
switch { switch {
case errors.IsQueueNotFound(err): case errors.IsQueueNotFound(err):
return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound) return nil, fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -431,51 +525,61 @@ func (i *Inspector) ListCompletedTasks(qname string, opts ...ListOption) ([]*Tas
// DeleteAllPendingTasks deletes all pending tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllPendingTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllPendingTasks(queue)
    return int(n), err
}

// DeleteAllScheduledTasks deletes all scheduled tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllScheduledTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllScheduledTasks(queue)
    return int(n), err
}

// DeleteAllRetryTasks deletes all retry tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllRetryTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllRetryTasks(queue)
    return int(n), err
}

// DeleteAllArchivedTasks deletes all archived tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllArchivedTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllArchivedTasks(queue)
    return int(n), err
}

// DeleteAllCompletedTasks deletes all completed tasks from the specified queue,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllCompletedTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllCompletedTasks(queue)
    return int(n), err
}

// DeleteAllAggregatingTasks deletes all tasks from the specified group,
// and reports the number of tasks deleted.
func (i *Inspector) DeleteAllAggregatingTasks(queue, group string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.DeleteAllAggregatingTasks(queue, group)
    return int(n), err
}
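Every function in this Delete* family shares the same shape: validate the queue name, delegate to the rdb layer, and report a count. A minimal usage sketch (the Redis address, queue name, and group name below are illustrative placeholders, not values from this change):

package main

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
)

func main() {
    // Assumed local Redis; adjust the address for your deployment.
    inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
    defer inspector.Close()

    n, err := inspector.DeleteAllArchivedTasks("default")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("deleted %d archived tasks\n", n)

    // New in this change: wipe every task waiting in one aggregation group.
    n, err = inspector.DeleteAllAggregatingTasks("default", "notifications")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("deleted %d aggregating tasks\n", n)
}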
@@ -486,11 +590,11 @@ func (i *Inspector) DeleteAllCompletedTasks(qname string) (int, error) {
// If a queue with the given name doesn't exist, it returns an error wrapping ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns an error wrapping ErrTaskNotFound.
// If the task is in active state, it returns a non-nil error.
func (i *Inspector) DeleteTask(queue, id string) error {
    if err := base.ValidateQueueName(queue); err != nil {
        return fmt.Errorf("asynq: %v", err)
    }
    err := i.rdb.DeleteTask(queue, id)
    switch {
    case errors.IsQueueNotFound(err):
        return fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -503,33 +607,43 @@ func (i *Inspector) DeleteTask(qname, id string) error {
}

// RunAllScheduledTasks schedules all scheduled tasks from the given queue to run,
// and reports the number of tasks scheduled to run.
func (i *Inspector) RunAllScheduledTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.RunAllScheduledTasks(queue)
    return int(n), err
}

// RunAllRetryTasks schedules all retry tasks from the given queue to run,
// and reports the number of tasks scheduled to run.
func (i *Inspector) RunAllRetryTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.RunAllRetryTasks(queue)
    return int(n), err
}

// RunAllArchivedTasks schedules all archived tasks from the given queue to run,
// and reports the number of tasks scheduled to run.
func (i *Inspector) RunAllArchivedTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.RunAllArchivedTasks(queue)
    return int(n), err
}

// RunAllAggregatingTasks schedules all tasks from the given group to run,
// and reports the number of tasks scheduled to run.
func (i *Inspector) RunAllAggregatingTasks(queue, group string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.RunAllAggregatingTasks(queue, group)
    return int(n), err
}
@@ -540,11 +654,11 @@ func (i *Inspector) RunAllArchivedTasks(qname string) (int, error) {
// If a queue with the given name doesn't exist, it returns an error wrapping ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns an error wrapping ErrTaskNotFound.
// If the task is in pending or active state, it returns a non-nil error.
func (i *Inspector) RunTask(queue, id string) error {
    if err := base.ValidateQueueName(queue); err != nil {
        return fmt.Errorf("asynq: %v", err)
    }
    err := i.rdb.RunTask(queue, id)
    switch {
    case errors.IsQueueNotFound(err):
        return fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -558,31 +672,41 @@ func (i *Inspector) RunTask(qname, id string) error {
// ArchiveAllPendingTasks archives all pending tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllPendingTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.ArchiveAllPendingTasks(queue)
    return int(n), err
}

// ArchiveAllScheduledTasks archives all scheduled tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllScheduledTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.ArchiveAllScheduledTasks(queue)
    return int(n), err
}

// ArchiveAllRetryTasks archives all retry tasks from the given queue,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllRetryTasks(queue string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.ArchiveAllRetryTasks(queue)
    return int(n), err
}

// ArchiveAllAggregatingTasks archives all tasks from the given group,
// and reports the number of tasks archived.
func (i *Inspector) ArchiveAllAggregatingTasks(queue, group string) (int, error) {
    if err := base.ValidateQueueName(queue); err != nil {
        return 0, err
    }
    n, err := i.rdb.ArchiveAllAggregatingTasks(queue, group)
    return int(n), err
}
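The aggregating variants make groups first-class in bulk operations. Continuing the sketch above, archiving a group and later requeueing it might look like this (the group name is again an illustrative placeholder):

archived, err := inspector.ArchiveAllAggregatingTasks("default", "reports")
if err != nil {
    log.Fatal(err)
}
log.Printf("parked %d grouped tasks in the archived set", archived)

// Later: move everything archived in this queue back to pending.
requeued, err := inspector.RunAllArchivedTasks("default")
if err != nil {
    log.Fatal(err)
}
log.Printf("requeued %d tasks", requeued)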
@@ -593,11 +717,11 @@ func (i *Inspector) ArchiveAllRetryTasks(qname string) (int, error) {
// If a queue with the given name doesn't exist, it returns an error wrapping ErrQueueNotFound.
// If a task with the given id doesn't exist in the queue, it returns an error wrapping ErrTaskNotFound.
// If the task is already archived, it returns a non-nil error.
func (i *Inspector) ArchiveTask(queue, id string) error {
    if err := base.ValidateQueueName(queue); err != nil {
        return fmt.Errorf("asynq: %v", err)
    }
    err := i.rdb.ArchiveTask(queue, id)
    switch {
    case errors.IsQueueNotFound(err):
        return fmt.Errorf("asynq: %w", ErrQueueNotFound)
@@ -619,20 +743,20 @@ func (i *Inspector) CancelProcessing(id string) error {
// PauseQueue pauses task processing on the specified queue.
// If the queue is already paused, it will return a non-nil error.
func (i *Inspector) PauseQueue(queue string) error {
    if err := base.ValidateQueueName(queue); err != nil {
        return err
    }
    return i.rdb.Pause(queue)
}

// UnpauseQueue resumes task processing on the specified queue.
// If the queue is not paused, it will return a non-nil error.
func (i *Inspector) UnpauseQueue(queue string) error {
    if err := base.ValidateQueueName(queue); err != nil {
        return err
    }
    return i.rdb.Unpause(queue)
}
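One common use of this pair is draining a queue around a deploy; a hedged sketch reusing the inspector from the earlier example:

if err := inspector.PauseQueue("default"); err != nil {
    log.Fatal(err)
}
// ... wait for in-flight tasks to finish and roll out the new workers ...
if err := inspector.UnpauseQueue("default"); err != nil {
    log.Fatal(err)
}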
// Servers returns a list of running servers' information.
@@ -722,8 +846,8 @@ type WorkerInfo struct {
}

// ClusterKeySlot returns an integer identifying the hash slot the given queue hashes to.
func (i *Inspector) ClusterKeySlot(queue string) (int64, error) {
    return i.rdb.ClusterKeySlot(queue)
}

// ClusterNode describes a node in redis cluster.
@@ -738,8 +862,8 @@ type ClusterNode struct {
// ClusterNodes returns a list of nodes the given queue belongs to.
//
// Only relevant if task queues are stored in redis cluster.
func (i *Inspector) ClusterNodes(queue string) ([]*ClusterNode, error) {
    nodes, err := i.rdb.ClusterNodes(queue)
    if err != nil {
        return nil, err
    }
@@ -807,11 +931,11 @@ func parseOption(s string) (Option, error) {
    fn, arg := parseOptionFunc(s), parseOptionArg(s)
    switch fn {
    case "Queue":
        queue, err := strconv.Unquote(arg)
        if err != nil {
            return nil, err
        }
        return Queue(queue), nil
    case "MaxRetry":
        n, err := strconv.Atoi(arg)
        if err != nil {


@@ -8,7 +8,6 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"math"
"sort" "sort"
"testing" "testing"
"time" "time"
@@ -16,17 +15,14 @@ import (
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
"github.com/google/uuid" "github.com/google/uuid"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
"github.com/hibiken/asynq/internal/timeutil" "github.com/hibiken/asynq/internal/timeutil"
"github.com/redis/go-redis/v9"
) )
func testInspectorQueues(t *testing.T, inspector *Inspector, r redis.UniversalClient) {
    tests := []struct {
        queues []string
    }{
@@ -52,7 +48,21 @@ func TestInspectorQueues(t *testing.T) {
t.Errorf("Queues() = %v, want %v; (-want, +got)\n%s", got, tc.queues, diff) t.Errorf("Queues() = %v, want %v; (-want, +got)\n%s", got, tc.queues, diff)
} }
} }
}
func TestInspectorQueues(t *testing.T) {
r := setup(t)
defer r.Close()
inspector := NewInspector(getRedisConnOpt(t))
testInspectorQueues(t, inspector, r)
}
func TestInspectorFromRedisClientQueues(t *testing.T) {
r := setup(t)
defer r.Close()
redisClient := getRedisConnOpt(t).MakeRedisClient().(redis.UniversalClient)
inspector := NewInspectorFromRedisClient(redisClient)
testInspectorQueues(t, inspector, r)
} }
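The new test exercises NewInspectorFromRedisClient, which lets an application hand asynq an already-configured client instead of connection options. A sketch of the application-side call (address illustrative; assumes imports of github.com/redis/go-redis/v9 and github.com/hibiken/asynq):

redisClient := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
inspector := asynq.NewInspectorFromRedisClient(redisClient)

queues, err := inspector.Queues()
if err != nil {
    log.Fatal(err)
}
log.Printf("queues: %v", queues)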
func TestInspectorDeleteQueue(t *testing.T) {
@@ -281,6 +291,8 @@ func TestInspectorGetQueueInfo(t *testing.T) {
        completed                       map[string][]base.Z
        processed                       map[string]int
        failed                          map[string]int
        processedTotal                  map[string]int
        failedTotal                     map[string]int
        oldestPendingMessageEnqueueTime map[string]time.Time
        qname                           string
        want                            *QueueInfo
@@ -329,6 +341,16 @@ func TestInspectorGetQueueInfo(t *testing.T) {
"critical": 0, "critical": 0,
"low": 5, "low": 5,
}, },
processedTotal: map[string]int{
"default": 11111,
"critical": 22222,
"low": 33333,
},
failedTotal: map[string]int{
"default": 111,
"critical": 222,
"low": 333,
},
oldestPendingMessageEnqueueTime: map[string]time.Time{ oldestPendingMessageEnqueueTime: map[string]time.Time{
"default": now.Add(-15 * time.Second), "default": now.Add(-15 * time.Second),
"critical": now.Add(-200 * time.Millisecond), "critical": now.Add(-200 * time.Millisecond),
@@ -336,19 +358,21 @@ func TestInspectorGetQueueInfo(t *testing.T) {
            },
            qname: "default",
            want: &QueueInfo{
                Queue:          "default",
                Latency:        15 * time.Second,
                Size:           4,
                Pending:        1,
                Active:         1,
                Scheduled:      2,
                Retry:          0,
                Archived:       0,
                Completed:      0,
                Processed:      120,
                Failed:         2,
                ProcessedTotal: 11111,
                FailedTotal:    111,
                Paused:         false,
                Timestamp:      now,
            },
        },
    }
@@ -363,12 +387,16 @@ func TestInspectorGetQueueInfo(t *testing.T) {
        h.SeedAllCompletedQueues(t, r, tc.completed)

        ctx := context.Background()
        for qname, n := range tc.processed {
            r.Set(ctx, base.ProcessedKey(qname, now), n, 0)
        }
        for qname, n := range tc.failed {
            r.Set(ctx, base.FailedKey(qname, now), n, 0)
        }
        for qname, n := range tc.processedTotal {
            r.Set(ctx, base.ProcessedTotalKey(qname), n, 0)
        }
        for qname, n := range tc.failedTotal {
            r.Set(ctx, base.FailedTotalKey(qname), n, 0)
        }
        for qname, enqueueTime := range tc.oldestPendingMessageEnqueueTime {
            if enqueueTime.IsZero() {
@@ -728,6 +756,12 @@ func TestInspectorListPendingTasks(t *testing.T) {
    }
}
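// newOrphanedTaskInfo mirrors newTaskInfo for an orphaned task: one that is
// still listed in the active set but whose lease has already expired.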
func newOrphanedTaskInfo(msg *base.TaskMessage) *TaskInfo {
info := newTaskInfo(msg, base.TaskStateActive, time.Time{}, nil)
info.IsOrphaned = true
return info
}
func TestInspectorListActiveTasks(t *testing.T) {
    r := setup(t)
    defer r.Close()
@@ -737,10 +771,12 @@ func TestInspectorListActiveTasks(t *testing.T) {
    m4 := h.NewTaskMessageWithQueue("task4", nil, "custom")
    inspector := NewInspector(getRedisConnOpt(t))
    now := time.Now()

    tests := []struct {
        desc   string
        active map[string][]*base.TaskMessage
        lease  map[string][]base.Z
        qname  string
        want   []*TaskInfo
    }{
@@ -750,10 +786,42 @@ func TestInspectorListActiveTasks(t *testing.T) {
"default": {m1, m2}, "default": {m1, m2},
"custom": {m3, m4}, "custom": {m3, m4},
}, },
lease: map[string][]base.Z{
"default": {
{Message: m1, Score: now.Add(20 * time.Second).Unix()},
{Message: m2, Score: now.Add(20 * time.Second).Unix()},
},
"custom": {
{Message: m3, Score: now.Add(20 * time.Second).Unix()},
{Message: m4, Score: now.Add(20 * time.Second).Unix()},
},
},
qname: "custom",
want: []*TaskInfo{
newTaskInfo(m3, base.TaskStateActive, time.Time{}, nil),
newTaskInfo(m4, base.TaskStateActive, time.Time{}, nil),
},
},
{
desc: "with an orphaned task",
active: map[string][]*base.TaskMessage{
"default": {m1, m2},
"custom": {m3, m4},
},
lease: map[string][]base.Z{
"default": {
{Message: m1, Score: now.Add(20 * time.Second).Unix()},
{Message: m2, Score: now.Add(-10 * time.Second).Unix()}, // orphaned task
},
"custom": {
{Message: m3, Score: now.Add(20 * time.Second).Unix()},
{Message: m4, Score: now.Add(20 * time.Second).Unix()},
},
},
qname: "default", qname: "default",
want: []*TaskInfo{ want: []*TaskInfo{
newTaskInfo(m1, base.TaskStateActive, time.Time{}, nil), newTaskInfo(m1, base.TaskStateActive, time.Time{}, nil),
newTaskInfo(m2, base.TaskStateActive, time.Time{}, nil), newOrphanedTaskInfo(m2),
}, },
}, },
} }
@@ -761,6 +829,7 @@ func TestInspectorListActiveTasks(t *testing.T) {
    for _, tc := range tests {
        h.FlushDB(t, r)
        h.SeedAllActiveQueues(t, r, tc.active)
        h.SeedAllLease(t, r, tc.lease)

        got, err := inspector.ListActiveTasks(tc.qname)
        if err != nil {
@@ -1062,6 +1131,106 @@ func TestInspectorListCompletedTasks(t *testing.T) {
    }
}
func TestInspectorListAggregatingTasks(t *testing.T) {
r := setup(t)
defer r.Close()
now := time.Now()
m1 := h.NewTaskMessageBuilder().SetType("task1").SetQueue("default").SetGroup("group1").Build()
m2 := h.NewTaskMessageBuilder().SetType("task2").SetQueue("default").SetGroup("group1").Build()
m3 := h.NewTaskMessageBuilder().SetType("task3").SetQueue("default").SetGroup("group1").Build()
m4 := h.NewTaskMessageBuilder().SetType("task4").SetQueue("default").SetGroup("group2").Build()
m5 := h.NewTaskMessageBuilder().SetType("task5").SetQueue("custom").SetGroup("group1").Build()
inspector := NewInspector(getRedisConnOpt(t))
fxt := struct {
tasks []*h.TaskSeedData
allQueues []string
allGroups map[string][]string
groups map[string][]redis.Z
}{
tasks: []*h.TaskSeedData{
{Msg: m1, State: base.TaskStateAggregating},
{Msg: m2, State: base.TaskStateAggregating},
{Msg: m3, State: base.TaskStateAggregating},
{Msg: m4, State: base.TaskStateAggregating},
{Msg: m5, State: base.TaskStateAggregating},
},
allQueues: []string{"default", "custom"},
allGroups: map[string][]string{
base.AllGroups("default"): {"group1", "group2"},
base.AllGroups("custom"): {"group1"},
},
groups: map[string][]redis.Z{
base.GroupKey("default", "group1"): {
{Member: m1.ID, Score: float64(now.Add(-30 * time.Second).Unix())},
{Member: m2.ID, Score: float64(now.Add(-20 * time.Second).Unix())},
{Member: m3.ID, Score: float64(now.Add(-10 * time.Second).Unix())},
},
base.GroupKey("default", "group2"): {
{Member: m4.ID, Score: float64(now.Add(-30 * time.Second).Unix())},
},
base.GroupKey("custom", "group1"): {
{Member: m5.ID, Score: float64(now.Add(-30 * time.Second).Unix())},
},
},
}
tests := []struct {
desc string
qname string
gname string
want []*TaskInfo
}{
{
desc: "default queue group1",
qname: "default",
gname: "group1",
want: []*TaskInfo{
createAggregatingTaskInfo(m1),
createAggregatingTaskInfo(m2),
createAggregatingTaskInfo(m3),
},
},
{
desc: "custom queue group1",
qname: "custom",
gname: "group1",
want: []*TaskInfo{
createAggregatingTaskInfo(m5),
},
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedTasks(t, r, fxt.tasks)
h.SeedRedisSet(t, r, base.AllQueues, fxt.allQueues)
h.SeedRedisSets(t, r, fxt.allGroups)
h.SeedRedisZSets(t, r, fxt.groups)
t.Run(tc.desc, func(t *testing.T) {
got, err := inspector.ListAggregatingTasks(tc.qname, tc.gname)
if err != nil {
t.Fatalf("ListAggregatingTasks returned error: %v", err)
}
cmpOpts := []cmp.Option{
cmpopts.EquateApproxTime(2 * time.Second),
cmp.AllowUnexported(TaskInfo{}),
}
if diff := cmp.Diff(tc.want, got, cmpOpts...); diff != "" {
t.Errorf("ListAggregatingTasks = %v, want = %v; (-want,+got)\n%s", got, tc.want, diff)
}
})
}
}
func createAggregatingTaskInfo(msg *base.TaskMessage) *TaskInfo {
return newTaskInfo(msg, base.TaskStateAggregating, time.Time{}, nil)
}
func TestInspectorListPagination(t *testing.T) {
    // Create 100 tasks.
    var msgs []*base.TaskMessage
@@ -1163,6 +1332,9 @@ func TestInspectorListTasksQueueNotFoundError(t *testing.T) {
        if _, err := inspector.ListCompletedTasks(tc.qname); !errors.Is(err, tc.wantErr) {
            t.Errorf("ListCompletedTasks(%q) returned error %v, want %v", tc.qname, err, tc.wantErr)
        }
if _, err := inspector.ListAggregatingTasks(tc.qname, "mygroup"); !errors.Is(err, tc.wantErr) {
t.Errorf("ListAggregatingTasks(%q, \"mygroup\") returned error %v, want %v", tc.qname, err, tc.wantErr)
}
    }
}
@@ -1505,6 +1677,7 @@ func TestInspectorArchiveAllPendingTasks(t *testing.T) {
    z1 := base.Z{Message: m1, Score: now.Add(5 * time.Minute).Unix()}
    z2 := base.Z{Message: m2, Score: now.Add(15 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        pending map[string][]*base.TaskMessage
@@ -1596,12 +1769,8 @@ func TestInspectorArchiveAllPendingTasks(t *testing.T) {
            }
        }
        for qname, want := range tc.wantArchived {
            gotArchived := h.GetArchivedEntries(t, r, qname)
            if diff := cmp.Diff(want, gotArchived, h.SortZSetEntryOpt); diff != "" {
                t.Errorf("unexpected archived tasks in queue %q: (-want, +got)\n%s", qname, diff)
            }
        }
@@ -1622,6 +1791,7 @@ func TestInspectorArchiveAllScheduledTasks(t *testing.T) {
    z4 := base.Z{Message: m4, Score: now.Add(2 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        scheduled map[string][]base.Z
@@ -1729,12 +1899,8 @@ func TestInspectorArchiveAllScheduledTasks(t *testing.T) {
            }
        }
        for qname, want := range tc.wantArchived {
            gotArchived := h.GetArchivedEntries(t, r, qname)
            if diff := cmp.Diff(want, gotArchived, h.SortZSetEntryOpt); diff != "" {
                t.Errorf("unexpected archived tasks in queue %q: (-want, +got)\n%s", qname, diff)
            }
        }
@@ -1755,6 +1921,7 @@ func TestInspectorArchiveAllRetryTasks(t *testing.T) {
    z4 := base.Z{Message: m4, Score: now.Add(2 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        retry map[string][]base.Z
@@ -1845,10 +2012,9 @@ func TestInspectorArchiveAllRetryTasks(t *testing.T) {
t.Errorf("unexpected retry tasks in queue %q: (-want, +got)\n%s", qname, diff) t.Errorf("unexpected retry tasks in queue %q: (-want, +got)\n%s", qname, diff)
} }
} }
cmpOpt := h.EquateInt64Approx(2) // allow for 2 seconds difference in Z.Score
for qname, want := range tc.wantArchived { for qname, want := range tc.wantArchived {
wantArchived := h.GetArchivedEntries(t, r, qname) wantArchived := h.GetArchivedEntries(t, r, qname)
if diff := cmp.Diff(want, wantArchived, h.SortZSetEntryOpt, cmpOpt); diff != "" { if diff := cmp.Diff(want, wantArchived, h.SortZSetEntryOpt); diff != "" {
t.Errorf("unexpected archived tasks in queue %q: (-want, +got)\n%s", qname, diff) t.Errorf("unexpected archived tasks in queue %q: (-want, +got)\n%s", qname, diff)
} }
} }
@@ -2796,8 +2962,9 @@ func TestInspectorArchiveTaskArchivesPendingTask(t *testing.T) {
m1 := h.NewTaskMessage("task1", nil) m1 := h.NewTaskMessage("task1", nil)
m2 := h.NewTaskMessageWithQueue("task2", nil, "custom") m2 := h.NewTaskMessageWithQueue("task2", nil, "custom")
m3 := h.NewTaskMessageWithQueue("task3", nil, "custom") m3 := h.NewTaskMessageWithQueue("task3", nil, "custom")
inspector := NewInspector(getRedisConnOpt(t))
now := time.Now() now := time.Now()
inspector := NewInspector(getRedisConnOpt(t))
inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))
tests := []struct { tests := []struct {
pending map[string][]*base.TaskMessage pending map[string][]*base.TaskMessage
@@ -2892,6 +3059,7 @@ func TestInspectorArchiveTaskArchivesScheduledTask(t *testing.T) {
    z3 := base.Z{Message: m3, Score: now.Add(2 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        scheduled map[string][]base.Z
@@ -2968,6 +3136,7 @@ func TestInspectorArchiveTaskArchivesRetryTask(t *testing.T) {
    z3 := base.Z{Message: m3, Score: now.Add(2 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        retry map[string][]base.Z
@@ -3042,6 +3211,7 @@ func TestInspectorArchiveTaskError(t *testing.T) {
    z3 := base.Z{Message: m3, Score: now.Add(2 * time.Minute).Unix()}

    inspector := NewInspector(getRedisConnOpt(t))
    inspector.rdb.SetClock(timeutil.NewSimulatedClock(now))

    tests := []struct {
        retry map[string][]base.Z
@@ -3267,3 +3437,99 @@ func TestParseOption(t *testing.T) {
        })
    }
}
func TestInspectorGroups(t *testing.T) {
r := setup(t)
defer r.Close()
inspector := NewInspector(getRedisConnOpt(t))
m1 := h.NewTaskMessageBuilder().SetGroup("group1").Build()
m2 := h.NewTaskMessageBuilder().SetGroup("group1").Build()
m3 := h.NewTaskMessageBuilder().SetGroup("group1").Build()
m4 := h.NewTaskMessageBuilder().SetGroup("group2").Build()
m5 := h.NewTaskMessageBuilder().SetQueue("custom").SetGroup("group1").Build()
m6 := h.NewTaskMessageBuilder().SetQueue("custom").SetGroup("group1").Build()
now := time.Now()
fixtures := struct {
tasks []*h.TaskSeedData
allGroups map[string][]string
groups map[string][]redis.Z
}{
tasks: []*h.TaskSeedData{
{Msg: m1, State: base.TaskStateAggregating},
{Msg: m2, State: base.TaskStateAggregating},
{Msg: m3, State: base.TaskStateAggregating},
{Msg: m4, State: base.TaskStateAggregating},
{Msg: m5, State: base.TaskStateAggregating},
},
allGroups: map[string][]string{
base.AllGroups("default"): {"group1", "group2"},
base.AllGroups("custom"): {"group1"},
},
groups: map[string][]redis.Z{
base.GroupKey("default", "group1"): {
{Member: m1.ID, Score: float64(now.Add(-10 * time.Second).Unix())},
{Member: m2.ID, Score: float64(now.Add(-20 * time.Second).Unix())},
{Member: m3.ID, Score: float64(now.Add(-30 * time.Second).Unix())},
},
base.GroupKey("default", "group2"): {
{Member: m4.ID, Score: float64(now.Add(-20 * time.Second).Unix())},
},
base.GroupKey("custom", "group1"): {
{Member: m5.ID, Score: float64(now.Add(-10 * time.Second).Unix())},
{Member: m6.ID, Score: float64(now.Add(-20 * time.Second).Unix())},
},
},
}
tests := []struct {
desc string
qname string
want []*GroupInfo
}{
{
desc: "default queue groups",
qname: "default",
want: []*GroupInfo{
{Group: "group1", Size: 3},
{Group: "group2", Size: 1},
},
},
{
desc: "custom queue groups",
qname: "custom",
want: []*GroupInfo{
{Group: "group1", Size: 2},
},
},
}
var sortGroupInfosOpt = cmp.Transformer(
"SortGroupInfos",
func(in []*GroupInfo) []*GroupInfo {
out := append([]*GroupInfo(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].Group < out[j].Group
})
return out
})
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedTasks(t, r, fixtures.tasks)
h.SeedRedisSets(t, r, fixtures.allGroups)
h.SeedRedisZSets(t, r, fixtures.groups)
t.Run(tc.desc, func(t *testing.T) {
got, err := inspector.Groups(tc.qname)
if err != nil {
t.Fatalf("Groups returned error: %v", err)
}
if diff := cmp.Diff(tc.want, got, sortGroupInfosOpt); diff != "" {
t.Errorf("Groups = %v, want %v; (-want,+got)\n%s", got, tc.want, diff)
}
})
}
}


@@ -14,15 +14,16 @@ import (
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/golang/protobuf/ptypes"
"github.com/hibiken/asynq/internal/errors" "github.com/hibiken/asynq/internal/errors"
pb "github.com/hibiken/asynq/internal/proto" pb "github.com/hibiken/asynq/internal/proto"
"github.com/hibiken/asynq/internal/timeutil"
"github.com/redis/go-redis/v9"
"google.golang.org/protobuf/proto" "google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
) )
// Version of asynq library and CLI.
const Version = "0.25.0"
// DefaultQueueName is the queue name used if none are specified by user.
const DefaultQueueName = "default"
@@ -49,6 +50,7 @@ const (
    TaskStateRetry
    TaskStateArchived
    TaskStateCompleted
    TaskStateAggregating // describes a state where task is waiting in a group to be aggregated
)
func (s TaskState) String() string {
@@ -65,6 +67,8 @@ func (s TaskState) String() string {
return "archived" return "archived"
case TaskStateCompleted: case TaskStateCompleted:
return "completed" return "completed"
case TaskStateAggregating:
return "aggregating"
} }
panic(fmt.Sprintf("internal error: unknown task state %d", s)) panic(fmt.Sprintf("internal error: unknown task state %d", s))
} }
@@ -83,6 +87,8 @@ func TaskStateFromString(s string) (TaskState, error) {
        return TaskStateArchived, nil
    case "completed":
        return TaskStateCompleted, nil
    case "aggregating":
        return TaskStateAggregating, nil
    }
    return 0, errors.E(errors.FailedPrecondition, fmt.Sprintf("%q is not supported task state", s))
}
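The new state round-trips through both helpers; a tiny sketch (base is an internal package, so this is purely illustrative):

s := base.TaskStateAggregating
fmt.Println(s.String()) // "aggregating"

parsed, err := base.TaskStateFromString("aggregating")
// parsed == base.TaskStateAggregating, err == nil
_, _ = parsed, err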
@@ -98,66 +104,76 @@ func ValidateQueueName(qname string) error {
// QueueKeyPrefix returns a prefix for all keys in the given queue.
func QueueKeyPrefix(qname string) string {
    return "asynq:{" + qname + "}:"
}

// TaskKeyPrefix returns a prefix for task key.
func TaskKeyPrefix(qname string) string {
    return QueueKeyPrefix(qname) + "t:"
}

// TaskKey returns a redis key for the given task message.
func TaskKey(qname, id string) string {
    return TaskKeyPrefix(qname) + id
}

// PendingKey returns a redis key for the given queue name.
func PendingKey(qname string) string {
    return QueueKeyPrefix(qname) + "pending"
}

// ActiveKey returns a redis key for the active tasks.
func ActiveKey(qname string) string {
    return QueueKeyPrefix(qname) + "active"
}

// ScheduledKey returns a redis key for the scheduled tasks.
func ScheduledKey(qname string) string {
    return QueueKeyPrefix(qname) + "scheduled"
}

// RetryKey returns a redis key for the retry tasks.
func RetryKey(qname string) string {
    return QueueKeyPrefix(qname) + "retry"
}

// ArchivedKey returns a redis key for the archived tasks.
func ArchivedKey(qname string) string {
    return QueueKeyPrefix(qname) + "archived"
}

// LeaseKey returns a redis key for the lease.
func LeaseKey(qname string) string {
    return QueueKeyPrefix(qname) + "lease"
}

// CompletedKey returns a redis key for the completed tasks.
func CompletedKey(qname string) string {
    return QueueKeyPrefix(qname) + "completed"
}

// PausedKey returns a redis key to indicate that the given queue is paused.
func PausedKey(qname string) string {
    return QueueKeyPrefix(qname) + "paused"
}

// ProcessedTotalKey returns a redis key for total processed count for the given queue.
func ProcessedTotalKey(qname string) string {
    return QueueKeyPrefix(qname) + "processed"
}

// FailedTotalKey returns a redis key for total failure count for the given queue.
func FailedTotalKey(qname string) string {
    return QueueKeyPrefix(qname) + "failed"
}

// ProcessedKey returns a redis key for processed count for the given day for the queue.
func ProcessedKey(qname string, t time.Time) string {
    return QueueKeyPrefix(qname) + "processed:" + t.UTC().Format("2006-01-02")
}

// FailedKey returns a redis key for failure count for the given day for the queue.
func FailedKey(qname string, t time.Time) string {
    return QueueKeyPrefix(qname) + "failed:" + t.UTC().Format("2006-01-02")
}

// ServerInfoKey returns a redis key for process info.
@@ -172,21 +188,47 @@ func WorkersKey(hostname string, pid int, serverID string) string {
// SchedulerEntriesKey returns a redis key for the scheduler entries given scheduler ID.
func SchedulerEntriesKey(schedulerID string) string {
    return "asynq:schedulers:{" + schedulerID + "}"
}

// SchedulerHistoryKey returns a redis key for the scheduler's history for the given entry.
func SchedulerHistoryKey(entryID string) string {
    return "asynq:scheduler_history:" + entryID
}

// UniqueKey returns a redis key with the given type, payload, and queue name.
func UniqueKey(qname, tasktype string, payload []byte) string {
    if payload == nil {
        return QueueKeyPrefix(qname) + "unique:" + tasktype + ":"
    }
    checksum := md5.Sum(payload)
    return QueueKeyPrefix(qname) + "unique:" + tasktype + ":" + hex.EncodeToString(checksum[:])
}
// GroupKeyPrefix returns a prefix for group key.
func GroupKeyPrefix(qname string) string {
return QueueKeyPrefix(qname) + "g:"
}
// GroupKey returns a redis key used to group tasks that belong to the same group.
func GroupKey(qname, gkey string) string {
return GroupKeyPrefix(qname) + gkey
}
// AggregationSetKey returns a redis key used for an aggregation set.
func AggregationSetKey(qname, gname, setID string) string {
return GroupKey(qname, gname) + ":" + setID
}
// AllGroups returns a redis key used to store all group keys used in a given queue.
func AllGroups(qname string) string {
return QueueKeyPrefix(qname) + "groups"
}
// AllAggregationSets returns a redis key used to store all aggregation sets (set of tasks staged to be aggregated)
// in a given queue.
func AllAggregationSets(qname string) string {
return QueueKeyPrefix(qname) + "aggregation_sets"
}
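All group-related keys hang off the per-queue "asynq:{<qname>}:" hash-tag prefix, so they land in the same redis cluster slot as the rest of the queue's keys. Concretely (the group name and set ID below are illustrative):

GroupKey("default", "emails")                // "asynq:{default}:g:emails"
AggregationSetKey("default", "emails", "s1") // "asynq:{default}:g:emails:s1"
AllGroups("default")                         // "asynq:{default}:groups"
AllAggregationSets("default")                // "asynq:{default}:aggregation_sets"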
// TaskMessage is the internal representation of a task with additional metadata fields.
@@ -239,6 +281,11 @@ type TaskMessage struct {
    // Empty string indicates that no uniqueness lock was used.
    UniqueKey string

    // GroupKey holds the group key used for task aggregation.
    //
    // Empty string indicates no aggregation is used for this task.
    GroupKey string

    // Retention specifies the number of seconds the task should be retained after completion.
    Retention int64
@@ -266,6 +313,7 @@ func EncodeMessage(msg *TaskMessage) ([]byte, error) {
        Timeout:     msg.Timeout,
        Deadline:    msg.Deadline,
        UniqueKey:   msg.UniqueKey,
        GroupKey:    msg.GroupKey,
        Retention:   msg.Retention,
        CompletedAt: msg.CompletedAt,
    })
@@ -289,6 +337,7 @@ func DecodeMessage(data []byte) (*TaskMessage, error) {
        Timeout:     pbmsg.GetTimeout(),
        Deadline:    pbmsg.GetDeadline(),
        UniqueKey:   pbmsg.GetUniqueKey(),
        GroupKey:    pbmsg.GetGroupKey(),
        Retention:   pbmsg.GetRetention(),
        CompletedAt: pbmsg.GetCompletedAt(),
    }, nil
@@ -308,68 +357,6 @@ type Z struct {
    Score int64
}
// ServerState represents state of a server.
// ServerState methods are concurrency safe.
type ServerState struct {
mu sync.Mutex
val ServerStateValue
}
// NewServerState returns a new state instance.
// Initial state is set to StateNew.
func NewServerState() *ServerState {
return &ServerState{val: StateNew}
}
type ServerStateValue int
const (
// StateNew represents a new server. Server begins in
// this state and then transitions to StateActive when
// Start or Run is called.
StateNew ServerStateValue = iota
// StateActive indicates the server is up and active.
StateActive
// StateStopped indicates the server is up but no longer processing new tasks.
StateStopped
// StateClosed indicates the server has been shutdown.
StateClosed
)
var serverStates = []string{
"new",
"active",
"stopped",
"closed",
}
func (s *ServerState) String() string {
s.mu.Lock()
defer s.mu.Unlock()
if StateNew <= s.val && s.val <= StateClosed {
return serverStates[s.val]
}
return "unknown status"
}
// Get returns the status value.
func (s *ServerState) Get() ServerStateValue {
s.mu.Lock()
v := s.val
s.mu.Unlock()
return v
}
// Set sets the status value.
func (s *ServerState) Set(v ServerStateValue) {
s.mu.Lock()
s.val = v
s.mu.Unlock()
}
// ServerInfo holds information about a running server.
type ServerInfo struct {
    Host string
@@ -388,14 +375,12 @@ func EncodeServerInfo(info *ServerInfo) ([]byte, error) {
    if info == nil {
        return nil, fmt.Errorf("cannot encode nil server info")
    }
    queues := make(map[string]int32, len(info.Queues))
    for q, p := range info.Queues {
        queues[q] = int32(p)
    }
    started := timestamppb.New(info.Started)
    return proto.Marshal(&pb.ServerInfo{
        Host: info.Host,
        Pid:  int32(info.PID),
@@ -415,14 +400,12 @@ func DecodeServerInfo(b []byte) (*ServerInfo, error) {
if err := proto.Unmarshal(b, &pbmsg); err != nil { if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err return nil, err
} }
queues := make(map[string]int) queues := make(map[string]int, len(pbmsg.GetQueues()))
for q, p := range pbmsg.GetQueues() { for q, p := range pbmsg.GetQueues() {
queues[q] = int(p) queues[q] = int(p)
} }
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime()) startTime := pbmsg.GetStartTime()
if err != nil {
return nil, err
}
return &ServerInfo{ return &ServerInfo{
Host: pbmsg.GetHost(), Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()), PID: int(pbmsg.GetPid()),
@@ -431,7 +414,7 @@ func DecodeServerInfo(b []byte) (*ServerInfo, error) {
        Queues:            queues,
        StrictPriority:    pbmsg.GetStrictPriority(),
        Status:            pbmsg.GetStatus(),
        Started:           startTime.AsTime(),
        ActiveWorkerCount: int(pbmsg.GetActiveWorkerCount()),
    }, nil
}
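The switch from the deprecated ptypes helpers to timestamppb removes a whole error path: timestamppb.New and Timestamp.AsTime cannot fail, so the (value, error) plumbing above disappears. The round trip is simply:

ts := timestamppb.New(time.Now()) // *timestamppb.Timestamp, no error return
t := ts.AsTime()                  // back to time.Time (in UTC)
_ = t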
@@ -454,14 +437,9 @@ func EncodeWorkerInfo(info *WorkerInfo) ([]byte, error) {
    if info == nil {
        return nil, fmt.Errorf("cannot encode nil worker info")
    }
    startTime := timestamppb.New(info.Started)
    deadline := timestamppb.New(info.Deadline)
    return proto.Marshal(&pb.WorkerInfo{
        Host: info.Host,
        Pid:  int32(info.PID),
@@ -481,14 +459,9 @@ func DecodeWorkerInfo(b []byte) (*WorkerInfo, error) {
if err := proto.Unmarshal(b, &pbmsg); err != nil { if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err return nil, err
} }
startTime, err := ptypes.Timestamp(pbmsg.GetStartTime()) startTime := pbmsg.GetStartTime()
if err != nil { deadline := pbmsg.GetDeadline()
return nil, err
}
deadline, err := ptypes.Timestamp(pbmsg.GetDeadline())
if err != nil {
return nil, err
}
return &WorkerInfo{ return &WorkerInfo{
Host: pbmsg.GetHost(), Host: pbmsg.GetHost(),
PID: int(pbmsg.GetPid()), PID: int(pbmsg.GetPid()),
@@ -497,8 +470,8 @@ func DecodeWorkerInfo(b []byte) (*WorkerInfo, error) {
Type: pbmsg.GetTaskType(), Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(), Payload: pbmsg.GetTaskPayload(),
Queue: pbmsg.GetQueue(), Queue: pbmsg.GetQueue(),
Started: startTime, Started: startTime.AsTime(),
Deadline: deadline, Deadline: deadline.AsTime(),
}, nil }, nil
} }
@@ -532,14 +505,9 @@ func EncodeSchedulerEntry(entry *SchedulerEntry) ([]byte, error) {
    if entry == nil {
        return nil, fmt.Errorf("cannot encode nil scheduler entry")
    }
    next := timestamppb.New(entry.Next)
    prev := timestamppb.New(entry.Prev)
    return proto.Marshal(&pb.SchedulerEntry{
        Id:   entry.ID,
        Spec: entry.Spec,
@@ -557,22 +525,17 @@ func DecodeSchedulerEntry(b []byte) (*SchedulerEntry, error) {
if err := proto.Unmarshal(b, &pbmsg); err != nil { if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err return nil, err
} }
next, err := ptypes.Timestamp(pbmsg.GetNextEnqueueTime()) next := pbmsg.GetNextEnqueueTime()
if err != nil { prev := pbmsg.GetPrevEnqueueTime()
return nil, err
}
prev, err := ptypes.Timestamp(pbmsg.GetPrevEnqueueTime())
if err != nil {
return nil, err
}
return &SchedulerEntry{ return &SchedulerEntry{
ID: pbmsg.GetId(), ID: pbmsg.GetId(),
Spec: pbmsg.GetSpec(), Spec: pbmsg.GetSpec(),
Type: pbmsg.GetTaskType(), Type: pbmsg.GetTaskType(),
Payload: pbmsg.GetTaskPayload(), Payload: pbmsg.GetTaskPayload(),
Opts: pbmsg.GetEnqueueOptions(), Opts: pbmsg.GetEnqueueOptions(),
Next: next, Next: next.AsTime(),
Prev: prev, Prev: prev.AsTime(),
}, nil }, nil
} }
@@ -591,10 +554,7 @@ func EncodeSchedulerEnqueueEvent(event *SchedulerEnqueueEvent) ([]byte, error) {
    if event == nil {
        return nil, fmt.Errorf("cannot encode nil enqueue event")
    }
    enqueuedAt := timestamppb.New(event.EnqueuedAt)
    return proto.Marshal(&pb.SchedulerEnqueueEvent{
        TaskId:      event.TaskID,
        EnqueueTime: enqueuedAt,
@@ -608,19 +568,16 @@ func DecodeSchedulerEnqueueEvent(b []byte) (*SchedulerEnqueueEvent, error) {
if err := proto.Unmarshal(b, &pbmsg); err != nil { if err := proto.Unmarshal(b, &pbmsg); err != nil {
return nil, err return nil, err
} }
enqueuedAt, err := ptypes.Timestamp(pbmsg.GetEnqueueTime()) enqueuedAt := pbmsg.GetEnqueueTime()
if err != nil {
return nil, err
}
return &SchedulerEnqueueEvent{ return &SchedulerEnqueueEvent{
TaskID: pbmsg.GetTaskId(), TaskID: pbmsg.GetTaskId(),
EnqueuedAt: enqueuedAt, EnqueuedAt: enqueuedAt.AsTime(),
}, nil }, nil
} }
// Cancelations is a collection that holds cancel functions for all active tasks. // Cancelations is a collection that holds cancel functions for all active tasks.
// //
// Cancelations are safe for concurrent use by multipel goroutines. // Cancelations are safe for concurrent use by multiple goroutines.
type Cancelations struct { type Cancelations struct {
mu sync.Mutex mu sync.Mutex
cancelFuncs map[string]context.CancelFunc cancelFuncs map[string]context.CancelFunc
@@ -655,28 +612,114 @@ func (c *Cancelations) Get(id string) (fn context.CancelFunc, ok bool) {
    return fn, ok
}
// Lease is a time-bound lease for a worker to process a task.
// It provides a communication channel between lessor and lessee about lease expiration.
type Lease struct {
once sync.Once
ch chan struct{}
Clock timeutil.Clock
mu sync.Mutex
expireAt time.Time // guarded by mu
}
func NewLease(expirationTime time.Time) *Lease {
return &Lease{
ch: make(chan struct{}),
expireAt: expirationTime,
Clock: timeutil.NewRealClock(),
}
}
// Reset changes the lease to expire at the given time.
// It returns true if the lease is still valid and the reset succeeded, or false if the lease had already expired.
func (l *Lease) Reset(expirationTime time.Time) bool {
if !l.IsValid() {
return false
}
l.mu.Lock()
defer l.mu.Unlock()
l.expireAt = expirationTime
return true
}
// NotifyExpiration sends a notification to the lessee about the expired lease.
// It returns true if the notification was sent, or false if the lease is still valid and no notification was sent.
func (l *Lease) NotifyExpiration() bool {
if l.IsValid() {
return false
}
l.once.Do(l.closeCh)
return true
}
func (l *Lease) closeCh() {
close(l.ch)
}
// Done returns a channel that the lessee can read from to get notified when the lessor declares the lease expired.
func (l *Lease) Done() <-chan struct{} {
return l.ch
}
// Deadline returns the expiration time of the lease.
func (l *Lease) Deadline() time.Time {
l.mu.Lock()
defer l.mu.Unlock()
return l.expireAt
}
// IsValid returns true if the lease's expiration time is in the future or equal to the current time,
// and false otherwise.
func (l *Lease) IsValid() bool {
now := l.Clock.Now()
l.mu.Lock()
defer l.mu.Unlock()
return l.expireAt.After(now) || l.expireAt.Equal(now)
}
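A hedged sketch of the intended lessor/lessee handshake (durations illustrative):

l := NewLease(time.Now().Add(30 * time.Second))

// Lessee side: stop working once the lessor declares the lease expired.
go func() {
    <-l.Done()
    // abandon the task; the lease is gone
}()

// Lessor side: keep extending while the worker heartbeats...
l.Reset(time.Now().Add(time.Minute))

// ...and notify the lessee once the deadline has truly passed.
if !l.IsValid() {
    l.NotifyExpiration() // closes the Done channel exactly once
}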
// Broker is a message broker that supports operations to manage task queues.
//
// See rdb.RDB as a reference implementation.
type Broker interface {
    Ping() error
    Close() error
    Enqueue(ctx context.Context, msg *TaskMessage) error
    EnqueueUnique(ctx context.Context, msg *TaskMessage, ttl time.Duration) error
    Dequeue(qnames ...string) (*TaskMessage, time.Time, error)
    Done(ctx context.Context, msg *TaskMessage) error
    MarkAsComplete(ctx context.Context, msg *TaskMessage) error
    Requeue(ctx context.Context, msg *TaskMessage) error
    Schedule(ctx context.Context, msg *TaskMessage, processAt time.Time) error
    ScheduleUnique(ctx context.Context, msg *TaskMessage, processAt time.Time, ttl time.Duration) error
    Retry(ctx context.Context, msg *TaskMessage, processAt time.Time, errMsg string, isFailure bool) error
    Archive(ctx context.Context, msg *TaskMessage, errMsg string) error
    ForwardIfReady(qnames ...string) error

    // Group aggregation related methods
    AddToGroup(ctx context.Context, msg *TaskMessage, gname string) error
    AddToGroupUnique(ctx context.Context, msg *TaskMessage, groupKey string, ttl time.Duration) error
    ListGroups(qname string) ([]string, error)
    AggregationCheck(qname, gname string, t time.Time, gracePeriod, maxDelay time.Duration, maxSize int) (aggregationSetID string, err error)
    ReadAggregationSet(qname, gname, aggregationSetID string) ([]*TaskMessage, time.Time, error)
    DeleteAggregationSet(ctx context.Context, qname, gname, aggregationSetID string) error
    ReclaimStaleAggregationSets(qname string) error

    // Task retention related method
    DeleteExpiredCompletedTasks(qname string, batchSize int) error

    // Lease related methods
    ListLeaseExpired(cutoff time.Time, qnames ...string) ([]*TaskMessage, error)
    ExtendLease(qname string, ids ...string) (time.Time, error)

    // State snapshot related methods
    WriteServerState(info *ServerInfo, workers []*WorkerInfo, ttl time.Duration) error
    ClearServerState(host string, pid int, serverID string) error

    // Cancelation related methods
    CancelationPubSub() (*redis.PubSub, error) // TODO: Need to decouple from redis to support other brokers
    PublishCancelation(id string) error

    WriteResult(qname, id string, data []byte) (n int, err error)
}
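Two things changed in this interface: state-mutating calls now take a context, and the methods are grouped by concern. For a hypothetical caller holding any Broker implementation, the call-site shape becomes:

// Before this change: err := broker.Done(msg)
if err := broker.Done(ctx, msg); err != nil {
    // log and move on; presumably lease-based recovery will
    // eventually reclaim the task if it stays in the active set
}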


@@ -16,6 +16,7 @@ import (
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/timeutil"
) )
func TestTaskKey(t *testing.T) {
@@ -71,19 +72,19 @@ func TestActiveKey(t *testing.T) {
    }
}
func TestLeaseKey(t *testing.T) {
    tests := []struct {
        qname string
        want  string
    }{
        {"default", "asynq:{default}:lease"},
        {"custom", "asynq:{custom}:lease"},
    }

    for _, tc := range tests {
        got := LeaseKey(tc.qname)
        if got != tc.want {
            t.Errorf("LeaseKey(%q) = %q, want %q", tc.qname, got, tc.want)
        }
    }
}
@@ -173,6 +174,40 @@ func TestPausedKey(t *testing.T) {
    }
}
func TestProcessedTotalKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:processed"},
{"custom", "asynq:{custom}:processed"},
}
for _, tc := range tests {
got := ProcessedTotalKey(tc.qname)
if got != tc.want {
t.Errorf("ProcessedTotalKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestFailedTotalKey(t *testing.T) {
tests := []struct {
qname string
want string
}{
{"default", "asynq:{default}:failed"},
{"custom", "asynq:{custom}:failed"},
}
for _, tc := range tests {
got := FailedTotalKey(tc.qname)
if got != tc.want {
t.Errorf("FailedTotalKey(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestProcessedKey(t *testing.T) { func TestProcessedKey(t *testing.T) {
tests := []struct { tests := []struct {
qname string qname string
@@ -360,6 +395,107 @@ func TestUniqueKey(t *testing.T) {
} }
} }
func TestGroupKey(t *testing.T) {
tests := []struct {
qname string
gkey string
want string
}{
{
qname: "default",
gkey: "mygroup",
want: "asynq:{default}:g:mygroup",
},
{
qname: "custom",
gkey: "foo",
want: "asynq:{custom}:g:foo",
},
}
for _, tc := range tests {
got := GroupKey(tc.qname, tc.gkey)
if got != tc.want {
t.Errorf("GroupKey(%q, %q) = %q, want %q", tc.qname, tc.gkey, got, tc.want)
}
}
}
func TestAggregationSetKey(t *testing.T) {
tests := []struct {
qname string
gname string
setID string
want string
}{
{
qname: "default",
gname: "mygroup",
setID: "12345",
want: "asynq:{default}:g:mygroup:12345",
},
{
qname: "custom",
gname: "foo",
setID: "98765",
want: "asynq:{custom}:g:foo:98765",
},
}
for _, tc := range tests {
got := AggregationSetKey(tc.qname, tc.gname, tc.setID)
if got != tc.want {
t.Errorf("AggregationSetKey(%q, %q, %q) = %q, want %q", tc.qname, tc.gname, tc.setID, got, tc.want)
}
}
}
func TestAllGroups(t *testing.T) {
tests := []struct {
qname string
want string
}{
{
qname: "default",
want: "asynq:{default}:groups",
},
{
qname: "custom",
want: "asynq:{custom}:groups",
},
}
for _, tc := range tests {
got := AllGroups(tc.qname)
if got != tc.want {
t.Errorf("AllGroups(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestAllAggregationSets(t *testing.T) {
tests := []struct {
qname string
want string
}{
{
qname: "default",
want: "asynq:{default}:aggregation_sets",
},
{
qname: "custom",
want: "asynq:{custom}:aggregation_sets",
},
}
for _, tc := range tests {
got := AllAggregationSets(tc.qname)
if got != tc.want {
t.Errorf("AllAggregationSets(%q) = %q, want %q", tc.qname, got, tc.want)
}
}
}
func TestMessageEncoding(t *testing.T) { func TestMessageEncoding(t *testing.T) {
id := uuid.NewString() id := uuid.NewString()
tests := []struct { tests := []struct {
@@ -372,6 +508,7 @@ func TestMessageEncoding(t *testing.T) {
Payload: toBytes(map[string]interface{}{"a": 1, "b": "hello!", "c": true}), Payload: toBytes(map[string]interface{}{"a": 1, "b": "hello!", "c": true}),
ID: id, ID: id,
Queue: "default", Queue: "default",
GroupKey: "mygroup",
Retry: 10, Retry: 10,
Retried: 0, Retried: 0,
Timeout: 1800, Timeout: 1800,
@@ -383,6 +520,7 @@ func TestMessageEncoding(t *testing.T) {
Payload: toBytes(map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true}), Payload: toBytes(map[string]interface{}{"a": json.Number("1"), "b": "hello!", "c": true}),
ID: id, ID: id,
Queue: "default", Queue: "default",
GroupKey: "mygroup",
Retry: 10, Retry: 10,
Retried: 0, Retried: 0,
Timeout: 1800, Timeout: 1800,
@@ -549,30 +687,6 @@ func TestSchedulerEnqueueEventEncoding(t *testing.T) {
} }
} }
// Test for status being accessed by multiple goroutines.
// Run with -race flag to check for data race.
func TestStatusConcurrentAccess(t *testing.T) {
status := NewServerState()
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
status.Get()
_ = status.String()
}()
wg.Add(1)
go func() {
defer wg.Done()
status.Set(StateClosed)
_ = status.String()
}()
wg.Wait()
}
// Test for cancelations being accessed by multiple goroutines. // Test for cancelations being accessed by multiple goroutines.
// Run with -race flag to check for data race. // Run with -race flag to check for data race.
func TestCancelationsConcurrentAccess(t *testing.T) { func TestCancelationsConcurrentAccess(t *testing.T) {
@@ -617,3 +731,75 @@ func TestCancelationsConcurrentAccess(t *testing.T) {
t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2) t.Errorf("(*Cancelations).Get(%q) = _, true, want <nil>, false", key2)
} }
} }
func TestLeaseReset(t *testing.T) {
now := time.Now()
clock := timeutil.NewSimulatedClock(now)
l := NewLease(now.Add(30 * time.Second))
l.Clock = clock
// Check initial state
if !l.IsValid() {
t.Errorf("lease should be valid when expiration is set to a future time")
}
if want := now.Add(30 * time.Second); l.Deadline() != want {
t.Errorf("Lease.Deadline() = %v, want %v", l.Deadline(), want)
}
// Test Reset
if !l.Reset(now.Add(45 * time.Second)) {
t.Fatalf("Lease.Reset returned false when extending")
}
if want := now.Add(45 * time.Second); l.Deadline() != want {
t.Errorf("After Reset: Lease.Deadline() = %v, want %v", l.Deadline(), want)
}
clock.AdvanceTime(1 * time.Minute) // simulate lease expiration
if l.IsValid() {
t.Errorf("lease should be invalid after expiration")
}
// Reset should return false if lease is expired.
if l.Reset(time.Now().Add(20 * time.Second)) {
t.Errorf("Lease.Reset should return false after expiration")
}
}
func TestLeaseNotifyExpiration(t *testing.T) {
now := time.Now()
clock := timeutil.NewSimulatedClock(now)
l := NewLease(now.Add(30 * time.Second))
l.Clock = clock
select {
case <-l.Done():
t.Fatalf("Lease.Done() did not block")
default:
}
if l.NotifyExpiration() {
t.Fatalf("Lease.NotifyExpiration() should return false when lease is still valid")
}
clock.AdvanceTime(1 * time.Minute) // simulate lease expiration
if l.IsValid() {
t.Errorf("Lease should be invalid after expiration")
}
if !l.NotifyExpiration() {
t.Errorf("Lease.NotifyExpiration() return return true after expiration")
}
if !l.NotifyExpiration() {
t.Errorf("It should be leagal to call Lease.NotifyExpiration multiple times")
}
select {
case <-l.Done():
// expected
default:
t.Errorf("Lease.Done() blocked after call to Lease.NotifyExpiration()")
}
}
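
Taken together, these tests pin down the Lease contract: Reset extends the deadline only while the lease is still valid, and NotifyExpiration and Done fire only after expiration. A worker-side usage sketch under those semantics follows; the extendLoop helper and the 10-second cadence are illustrative, not from the source.

	// extendLoop periodically extends the lease while a task runs and gives up
	// once the lease can no longer be extended.
	func extendLoop(l *Lease, extend func() (time.Time, error)) {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-l.Done(): // lease expired; abandon the task
				return
			case <-ticker.C:
				deadline, err := extend() // e.g. backed by Broker.ExtendLease
				if err != nil || !l.Reset(deadline) {
					return // Reset reports false once the lease has expired
				}
			}
		}
	}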

View File

@@ -28,14 +28,14 @@ type ctxKey int
const metadataCtxKey ctxKey = 0 const metadataCtxKey ctxKey = 0
// New returns a context and cancel function for a given task message. // New returns a context and cancel function for a given task message.
func New(msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) { func New(base context.Context, msg *base.TaskMessage, deadline time.Time) (context.Context, context.CancelFunc) {
metadata := taskMetadata{ metadata := taskMetadata{
id: msg.ID, id: msg.ID,
maxRetry: msg.Retry, maxRetry: msg.Retry,
retryCount: msg.Retried, retryCount: msg.Retried,
qname: msg.Queue, qname: msg.Queue,
} }
ctx := context.WithValue(context.Background(), metadataCtxKey, metadata) ctx := context.WithValue(base, metadataCtxKey, metadata)
return context.WithDeadline(ctx, deadline) return context.WithDeadline(ctx, deadline)
} }
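
With the extra base parameter, values attached to the caller-supplied context now flow into the per-task context, as the test in the next file exercises. A minimal caller-side sketch; the traceIDKey type and value are illustrative.

	func exampleBaseContext(msg *base.TaskMessage) {
		type ctxKey string
		const traceIDKey ctxKey = "trace_id"

		// Metadata on the parent context survives into the per-task context.
		parent := context.WithValue(context.Background(), traceIDKey, "abc123")
		ctx, cancel := New(parent, msg, time.Now().Add(30*time.Second))
		defer cancel()
		_ = ctx.Value(traceIDKey) // yields "abc123"
	}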
@@ -65,7 +65,7 @@ func GetRetryCount(ctx context.Context) (n int, ok bool) {
// GetMaxRetry extracts maximum retry from a context, if any. // GetMaxRetry extracts maximum retry from a context, if any.
// //
// Return value n indicates the maximum number of times the assoicated task // Return value n indicates the maximum number of times the associated task
// can be retried if ProcessTask returns a non-nil error. // can be retried if ProcessTask returns a non-nil error.
func GetMaxRetry(ctx context.Context) (n int, ok bool) { func GetMaxRetry(ctx context.Context) (n int, ok bool) {
metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata) metadata, ok := ctx.Value(metadataCtxKey).(taskMetadata)

View File

@@ -6,6 +6,7 @@ package context
import ( import (
"context" "context"
"fmt"
"testing" "testing"
"time" "time"
@@ -28,7 +29,7 @@ func TestCreateContextWithFutureDeadline(t *testing.T) {
Payload: nil, Payload: nil,
} }
ctx, cancel := New(msg, tc.deadline) ctx, cancel := New(context.Background(), msg, tc.deadline)
select { select {
case x := <-ctx.Done(): case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x) t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
@@ -53,6 +54,53 @@ func TestCreateContextWithFutureDeadline(t *testing.T) {
} }
} }
func TestCreateContextWithBaseContext(t *testing.T) {
type ctxKey string
type ctxValue string
var key ctxKey = "key"
var value ctxValue = "value"
tests := []struct {
baseCtx context.Context
validate func(ctx context.Context, t *testing.T) error
}{
{
baseCtx: context.WithValue(context.Background(), key, value),
validate: func(ctx context.Context, t *testing.T) error {
got, ok := ctx.Value(key).(ctxValue)
if !ok {
return fmt.Errorf("ctx.Value().(ctxValue) returned false, expected to be true")
}
if want := value; got != want {
return fmt.Errorf("ctx.Value().(ctxValue) returned unknown value (%v), expected to be %s", got, value)
}
return nil
},
},
}
for _, tc := range tests {
msg := &base.TaskMessage{
Type: "something",
ID: uuid.NewString(),
Payload: nil,
}
ctx, cancel := New(tc.baseCtx, msg, time.Now().Add(30*time.Minute))
defer cancel()
select {
case x := <-ctx.Done():
t.Errorf("<-ctx.Done() == %v, want nothing (it should block)", x)
default:
}
if err := tc.validate(ctx, t); err != nil {
t.Errorf("%v", err)
}
}
}
func TestCreateContextWithPastDeadline(t *testing.T) { func TestCreateContextWithPastDeadline(t *testing.T) {
tests := []struct { tests := []struct {
deadline time.Time deadline time.Time
@@ -67,7 +115,7 @@ func TestCreateContextWithPastDeadline(t *testing.T) {
Payload: nil, Payload: nil,
} }
ctx, cancel := New(msg, tc.deadline) ctx, cancel := New(context.Background(), msg, tc.deadline)
defer cancel() defer cancel()
select { select {
@@ -97,7 +145,7 @@ func TestGetTaskMetadataFromContext(t *testing.T) {
} }
for _, tc := range tests { for _, tc := range tests {
ctx, cancel := New(tc.msg, time.Now().Add(30*time.Minute)) ctx, cancel := New(context.Background(), tc.msg, time.Now().Add(30*time.Minute))
defer cancel() defer cancel()
id, ok := GetTaskID(ctx) id, ok := GetTaskID(ctx)

View File

@@ -161,7 +161,7 @@ func CanonicalCode(err error) Code {
} }
/****************************************** /******************************************
Domin Specific Error Types & Values Domain Specific Error Types & Values
*******************************************/ *******************************************/
var ( var (
@@ -256,6 +256,21 @@ func IsRedisCommandError(err error) bool {
return As(err, &target) return As(err, &target)
} }
// PanicError defines an error that is returned when a panic occurs.
type PanicError struct {
ErrMsg string
}
func (e *PanicError) Error() string {
return fmt.Sprintf("panic error cause by: %s", e.ErrMsg)
}
// IsPanicError reports whether any error in err's chain is of type PanicError.
func IsPanicError(err error) bool {
var target *PanicError
return As(err, &target)
}
/************************************************* /*************************************************
Standard Library errors package functions Standard Library errors package functions
*************************************************/ *************************************************/
@@ -263,26 +278,26 @@ func IsRedisCommandError(err error) bool {
// New returns an error that formats as the given text. // New returns an error that formats as the given text.
// Each call to New returns a distinct error value even if the text is identical. // Each call to New returns a distinct error value even if the text is identical.
// //
// This function is the errors.New function from the standard libarary (https://golang.org/pkg/errors/#New). // This function is the errors.New function from the standard library (https://golang.org/pkg/errors/#New).
// It is exported from this package for import convinience. // It is exported from this package for import convenience.
func New(text string) error { return errors.New(text) } func New(text string) error { return errors.New(text) }
// Is reports whether any error in err's chain matches target. // Is reports whether any error in err's chain matches target.
// //
// This function is the errors.Is function from the standard libarary (https://golang.org/pkg/errors/#Is). // This function is the errors.Is function from the standard library (https://golang.org/pkg/errors/#Is).
// It is exported from this package for import convinience. // It is exported from this package for import convenience.
func Is(err, target error) bool { return errors.Is(err, target) } func Is(err, target error) bool { return errors.Is(err, target) }
// As finds the first error in err's chain that matches target, and if so, sets target to that error value and returns true. // As finds the first error in err's chain that matches target, and if so, sets target to that error value and returns true.
// Otherwise, it returns false. // Otherwise, it returns false.
// //
// This function is the errors.As function from the standard libarary (https://golang.org/pkg/errors/#As). // This function is the errors.As function from the standard library (https://golang.org/pkg/errors/#As).
// It is exported from this package for import convinience. // It is exported from this package for import convenience.
func As(err error, target interface{}) bool { return errors.As(err, target) } func As(err error, target interface{}) bool { return errors.As(err, target) }
// Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error. // Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error.
// Otherwise, Unwrap returns nil. // Otherwise, Unwrap returns nil.
// //
// This function is the errors.Unwrap function from the standard libarary (https://golang.org/pkg/errors/#Unwrap). // This function is the errors.Unwrap function from the standard library (https://golang.org/pkg/errors/#Unwrap).
// It is exported from this package for import convinience. // It is exported from this package for import convenience.
func Unwrap(err error) error { return errors.Unwrap(err) } func Unwrap(err error) error { return errors.Unwrap(err) }

View File

@@ -131,6 +131,12 @@ func TestErrorPredicates(t *testing.T) {
err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}), err: E(Op("rdb.ArchiveTask"), NotFound, &QueueNotFoundError{Queue: "default"}),
want: true, want: true,
}, },
{
desc: "IsPanicError should detect presence of PanicError in err's chain",
fn: IsPanicError,
err: E(Op("unknown"), Unknown, &PanicError{ErrMsg: "Something went wrong"}),
want: true,
},
} }
for _, tc := range tests { for _, tc := range tests {

View File

@@ -4,14 +4,13 @@
// Code generated by protoc-gen-go. DO NOT EDIT. // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: // versions:
// protoc-gen-go v1.25.0 // protoc-gen-go v1.34.2
// protoc v3.17.3 // protoc v3.19.6
// source: asynq.proto // source: asynq.proto
package proto package proto
import ( import (
proto "github.com/golang/protobuf/proto"
protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl" protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb" timestamppb "google.golang.org/protobuf/types/known/timestamppb"
@@ -26,12 +25,9 @@ const (
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
) )
// This is a compile-time assertion that a sufficiently up-to-date version
// of the legacy proto package is being used.
const _ = proto.ProtoPackageIsVersion4
// TaskMessage is the internal representation of a task with additional // TaskMessage is the internal representation of a task with additional
// metadata fields. // metadata fields.
// Next ID: 15
type TaskMessage struct { type TaskMessage struct {
state protoimpl.MessageState state protoimpl.MessageState
sizeCache protoimpl.SizeCache sizeCache protoimpl.SizeCache
@@ -65,6 +61,9 @@ type TaskMessage struct {
// UniqueKey holds the redis key used for uniqueness lock for this task. // UniqueKey holds the redis key used for uniqueness lock for this task.
// Empty string indicates that no uniqueness lock was used. // Empty string indicates that no uniqueness lock was used.
UniqueKey string `protobuf:"bytes,10,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"` UniqueKey string `protobuf:"bytes,10,opt,name=unique_key,json=uniqueKey,proto3" json:"unique_key,omitempty"`
// GroupKey is a name of the group used for task aggregation.
// This field is optional and empty value means no aggregation for the task.
GroupKey string `protobuf:"bytes,14,opt,name=group_key,json=groupKey,proto3" json:"group_key,omitempty"`
// Retention period specified in a number of seconds. // Retention period specified in a number of seconds.
// The task will be stored in redis as a completed task until the TTL // The task will be stored in redis as a completed task until the TTL
// expires. // expires.
@@ -184,6 +183,13 @@ func (x *TaskMessage) GetUniqueKey() string {
return "" return ""
} }
func (x *TaskMessage) GetGroupKey() string {
if x != nil {
return x.GroupKey
}
return ""
}
func (x *TaskMessage) GetRetention() int64 { func (x *TaskMessage) GetRetention() int64 {
if x != nil { if x != nil {
return x.Retention return x.Retention
@@ -614,7 +620,7 @@ var file_asynq_proto_rawDesc = []byte{
0x0a, 0x0b, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61, 0x0a, 0x0b, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61,
0x73, 0x79, 0x6e, 0x71, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x73, 0x79, 0x6e, 0x71, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xea, 0x02, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x4d, 0x65, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x87, 0x03, 0x0a, 0x0b, 0x54, 0x61, 0x73, 0x6b, 0x4d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x73, 0x73, 0x61, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c,
@@ -633,84 +639,86 @@ var file_asynq_proto_rawDesc = []byte{
0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69,
0x6e, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x5f, 0x6b, 0x65, 0x79, 0x6e, 0x65, 0x12, 0x1d, 0x0a, 0x0a, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x5f, 0x6b, 0x65, 0x79,
0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x4b, 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x4b, 0x65,
0x79, 0x12, 0x1c, 0x0a, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x0c, 0x79, 0x12, 0x1b, 0x0a, 0x09, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x0e,
0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x4b, 0x65, 0x79, 0x12, 0x1c,
0x21, 0x0a, 0x0c, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0a, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28,
0x0d, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0b, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x03, 0x52, 0x09, 0x72, 0x65, 0x74, 0x65, 0x6e, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x21, 0x0a, 0x0c,
0x41, 0x74, 0x22, 0x8f, 0x03, 0x0a, 0x0a, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x0d, 0x20, 0x01,
0x6f, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x28, 0x03, 0x52, 0x0b, 0x63, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x41, 0x74, 0x22,
0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x8f, 0x03, 0x0a, 0x0a, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12,
0x28, 0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, 0x6f,
0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52,
0x65, 0x72, 0x49, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x5f, 0x69,
0x6e, 0x63, 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x49,
0x72, 0x72, 0x65, 0x6e, 0x63, 0x79, 0x12, 0x35, 0x0a, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x64, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x63, 0x79,
0x18, 0x05, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x53, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x63, 0x75, 0x72, 0x72, 0x65,
0x65, 0x72, 0x76, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73, 0x6e, 0x63, 0x79, 0x12, 0x35, 0x0a, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x18, 0x05, 0x20,
0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x61, 0x73, 0x79, 0x6e, 0x71, 0x2e, 0x53, 0x65, 0x72, 0x76,
0x0f, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x5f, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73, 0x45, 0x6e, 0x74,
0x18, 0x06, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0e, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x50, 0x72, 0x72, 0x79, 0x52, 0x06, 0x71, 0x75, 0x65, 0x75, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x0f, 0x73, 0x74,
0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x72, 0x69, 0x63, 0x74, 0x5f, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x18, 0x06, 0x20,
0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x39, 0x01, 0x28, 0x08, 0x52, 0x0e, 0x73, 0x74, 0x72, 0x69, 0x63, 0x74, 0x50, 0x72, 0x69, 0x6f, 0x72,
0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x69, 0x74, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x07, 0x20,
0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x39, 0x0a, 0x0a, 0x73,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x13, 0x61, 0x63, 0x74, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x69, 0x76, 0x65, 0x5f, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61,
0x18, 0x09, 0x20, 0x01, 0x28, 0x05, 0x52, 0x11, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x57, 0x6f, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x2e, 0x0a, 0x13, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65,
0x72, 0x6b, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x1a, 0x39, 0x0a, 0x0b, 0x51, 0x75, 0x65, 0x5f, 0x77, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x09, 0x20,
0x75, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x28, 0x05, 0x52, 0x11, 0x61, 0x63, 0x74, 0x69, 0x76, 0x65, 0x57, 0x6f, 0x72, 0x6b, 0x65,
0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x1a, 0x39, 0x0a, 0x0b, 0x51, 0x75, 0x65, 0x75, 0x65, 0x73,
0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
0x3a, 0x02, 0x38, 0x01, 0x22, 0xb1, 0x02, 0x0a, 0x0a, 0x57, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x49, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38,
0x09, 0x52, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x01, 0x22, 0xb1, 0x02, 0x0a, 0x0a, 0x57, 0x6f, 0x72, 0x6b, 0x65, 0x72, 0x49, 0x6e, 0x66, 0x6f,
0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x12, 0x12, 0x0a, 0x04, 0x68, 0x6f, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x76, 0x65, 0x72, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x10, 0x0a, 0x03, 0x70, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28,
0x72, 0x76, 0x65, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x05, 0x52, 0x03, 0x70, 0x69, 0x64, 0x12, 0x1b, 0x0a, 0x09, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72,
0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x73, 0x65, 0x72, 0x76, 0x65,
0x1b, 0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x72, 0x49, 0x64, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x04,
0x28, 0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x1b, 0x0a, 0x09,
0x74, 0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x06, 0x20, 0x01, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52,
0x28, 0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x61, 0x73,
0x14, 0x0a, 0x05, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x71, 0x75, 0x65, 0x75, 0x65, 0x12, 0x39, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14, 0x0a, 0x05,
0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x71, 0x75, 0x65, 0x75, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x71, 0x75, 0x65,
0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x75, 0x65, 0x12, 0x39, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x74, 0x69, 0x6d, 0x65,
0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
0x12, 0x36, 0x0a, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6d, 0x70, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x36, 0x0a,
0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x08, 0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x64, 0x65, 0x61, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x22, 0xad, 0x02, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x65, 0x64, 0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x64, 0x65, 0x61,
0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x64, 0x6c, 0x69, 0x6e, 0x65, 0x22, 0xad, 0x02, 0x0a, 0x0e, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75,
0x70, 0x65, 0x63, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01,
0x1b, 0x0a, 0x09, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x70, 0x65, 0x63,
0x28, 0x09, 0x52, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x73, 0x70, 0x65, 0x63, 0x12, 0x1b, 0x0a, 0x09,
0x74, 0x61, 0x73, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x04, 0x20, 0x01, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52,
0x28, 0x0c, 0x52, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x08, 0x74, 0x61, 0x73, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x61, 0x73,
0x27, 0x0a, 0x0f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6b, 0x5f, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x6e, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0e, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x0b, 0x74, 0x61, 0x73, 0x6b, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x27, 0x0a, 0x0f,
0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x46, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18,
0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0e, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x4f, 0x70,
0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x12, 0x46, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x5f, 0x65, 0x6e,
0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b,
0x0f, 0x6e, 0x65, 0x78, 0x74, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65,
0x12, 0x46, 0x0a, 0x11, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65,
0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f,
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69,
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x70, 0x72, 0x65, 0x76, 0x45, 0x6e, 0x71,
0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x6f, 0x0a, 0x15, 0x53, 0x63, 0x68, 0x65,
0x64, 0x75, 0x6c, 0x65, 0x72, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x45, 0x76, 0x65, 0x6e,
0x74, 0x12, 0x17, 0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x3d, 0x0a, 0x0c, 0x65, 0x6e,
0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0b, 0x65, 0x6e, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x6e, 0x65,
0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x78, 0x74, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x46, 0x0a,
0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x69, 0x62, 0x69, 0x6b, 0x65, 0x6e, 0x2f, 0x11, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x65, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x74, 0x69,
0x61, 0x73, 0x79, 0x6e, 0x71, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6d, 0x65, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x52, 0x0f, 0x70, 0x72, 0x65, 0x76, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75,
0x65, 0x54, 0x69, 0x6d, 0x65, 0x22, 0x6f, 0x0a, 0x15, 0x53, 0x63, 0x68, 0x65, 0x64, 0x75, 0x6c,
0x65, 0x72, 0x45, 0x6e, 0x71, 0x75, 0x65, 0x75, 0x65, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x17,
0x0a, 0x07, 0x74, 0x61, 0x73, 0x6b, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x06, 0x74, 0x61, 0x73, 0x6b, 0x49, 0x64, 0x12, 0x3d, 0x0a, 0x0c, 0x65, 0x6e, 0x71, 0x75, 0x65,
0x75, 0x65, 0x5f, 0x74, 0x69, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e,
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0b, 0x65, 0x6e, 0x71, 0x75, 0x65,
0x75, 0x65, 0x54, 0x69, 0x6d, 0x65, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x69, 0x62, 0x69, 0x6b, 0x65, 0x6e, 0x2f, 0x61, 0x73, 0x79,
0x6e, 0x71, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
} }
var ( var (
@@ -726,7 +734,7 @@ func file_asynq_proto_rawDescGZIP() []byte {
} }
var file_asynq_proto_msgTypes = make([]protoimpl.MessageInfo, 6) var file_asynq_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_asynq_proto_goTypes = []interface{}{ var file_asynq_proto_goTypes = []any{
(*TaskMessage)(nil), // 0: asynq.TaskMessage (*TaskMessage)(nil), // 0: asynq.TaskMessage
(*ServerInfo)(nil), // 1: asynq.ServerInfo (*ServerInfo)(nil), // 1: asynq.ServerInfo
(*WorkerInfo)(nil), // 2: asynq.WorkerInfo (*WorkerInfo)(nil), // 2: asynq.WorkerInfo
@@ -756,7 +764,7 @@ func file_asynq_proto_init() {
return return
} }
if !protoimpl.UnsafeEnabled { if !protoimpl.UnsafeEnabled {
file_asynq_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { file_asynq_proto_msgTypes[0].Exporter = func(v any, i int) any {
switch v := v.(*TaskMessage); i { switch v := v.(*TaskMessage); i {
case 0: case 0:
return &v.state return &v.state
@@ -768,7 +776,7 @@ func file_asynq_proto_init() {
return nil return nil
} }
} }
file_asynq_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { file_asynq_proto_msgTypes[1].Exporter = func(v any, i int) any {
switch v := v.(*ServerInfo); i { switch v := v.(*ServerInfo); i {
case 0: case 0:
return &v.state return &v.state
@@ -780,7 +788,7 @@ func file_asynq_proto_init() {
return nil return nil
} }
} }
file_asynq_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { file_asynq_proto_msgTypes[2].Exporter = func(v any, i int) any {
switch v := v.(*WorkerInfo); i { switch v := v.(*WorkerInfo); i {
case 0: case 0:
return &v.state return &v.state
@@ -792,7 +800,7 @@ func file_asynq_proto_init() {
return nil return nil
} }
} }
file_asynq_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { file_asynq_proto_msgTypes[3].Exporter = func(v any, i int) any {
switch v := v.(*SchedulerEntry); i { switch v := v.(*SchedulerEntry); i {
case 0: case 0:
return &v.state return &v.state
@@ -804,7 +812,7 @@ func file_asynq_proto_init() {
return nil return nil
} }
} }
file_asynq_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { file_asynq_proto_msgTypes[4].Exporter = func(v any, i int) any {
switch v := v.(*SchedulerEnqueueEvent); i { switch v := v.(*SchedulerEnqueueEvent); i {
case 0: case 0:
return &v.state return &v.state

View File

@@ -11,6 +11,7 @@ option go_package = "github.com/hibiken/asynq/internal/proto";
// TaskMessage is the internal representation of a task with additional // TaskMessage is the internal representation of a task with additional
// metadata fields. // metadata fields.
// Next ID: 15
message TaskMessage { message TaskMessage {
// Type indicates the kind of the task to be performed. // Type indicates the kind of the task to be performed.
string type = 1; string type = 1;
@@ -51,6 +52,10 @@ message TaskMessage {
// Empty string indicates that no uniqueness lock was used. // Empty string indicates that no uniqueness lock was used.
string unique_key = 10; string unique_key = 10;
// GroupKey is a name of the group used for task aggregation.
// This field is optional and empty value means no aggregation for the task.
string group_key = 14;
// Retention period specified in a number of seconds. // Retention period specified in a number of seconds.
// The task will be stored in redis as a completed task until the TTL // The task will be stored in redis as a completed task until the TTL
// expires. // expires.

View File

@@ -10,19 +10,19 @@ import (
"testing" "testing"
"time" "time"
"github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/testutil"
) )
func BenchmarkEnqueue(b *testing.B) { func BenchmarkEnqueue(b *testing.B) {
r := setup(b) r := setup(b)
ctx := context.Background() ctx := context.Background()
msg := asynqtest.NewTaskMessage("task1", nil) msg := testutil.NewTaskMessage("task1", nil)
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
b.StartTimer() b.StartTimer()
if err := r.Enqueue(ctx, msg); err != nil { if err := r.Enqueue(ctx, msg); err != nil {
@@ -45,7 +45,7 @@ func BenchmarkEnqueueUnique(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
b.StartTimer() b.StartTimer()
if err := r.EnqueueUnique(ctx, msg, uniqueTTL); err != nil { if err := r.EnqueueUnique(ctx, msg, uniqueTTL); err != nil {
@@ -57,13 +57,13 @@ func BenchmarkEnqueueUnique(b *testing.B) {
func BenchmarkSchedule(b *testing.B) { func BenchmarkSchedule(b *testing.B) {
r := setup(b) r := setup(b)
ctx := context.Background() ctx := context.Background()
msg := asynqtest.NewTaskMessage("task1", nil) msg := testutil.NewTaskMessage("task1", nil)
processAt := time.Now().Add(3 * time.Minute) processAt := time.Now().Add(3 * time.Minute)
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
b.StartTimer() b.StartTimer()
if err := r.Schedule(ctx, msg, processAt); err != nil { if err := r.Schedule(ctx, msg, processAt); err != nil {
@@ -87,7 +87,7 @@ func BenchmarkScheduleUnique(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
b.StartTimer() b.StartTimer()
if err := r.ScheduleUnique(ctx, msg, processAt, uniqueTTL); err != nil { if err := r.ScheduleUnique(ctx, msg, processAt, uniqueTTL); err != nil {
@@ -103,9 +103,9 @@ func BenchmarkDequeueSingleQueue(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := asynqtest.NewTaskMessageWithQueue( m := testutil.NewTaskMessageWithQueue(
fmt.Sprintf("task%d", i), nil, base.DefaultQueueName) fmt.Sprintf("task%d", i), nil, base.DefaultQueueName)
if err := r.Enqueue(ctx, m); err != nil { if err := r.Enqueue(ctx, m); err != nil {
b.Fatalf("Enqueue failed: %v", err) b.Fatalf("Enqueue failed: %v", err)
@@ -127,10 +127,10 @@ func BenchmarkDequeueMultipleQueues(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
for _, qname := range qnames { for _, qname := range qnames {
m := asynqtest.NewTaskMessageWithQueue( m := testutil.NewTaskMessageWithQueue(
fmt.Sprintf("%s_task%d", qname, i), nil, qname) fmt.Sprintf("%s_task%d", qname, i), nil, qname)
if err := r.Enqueue(ctx, m); err != nil { if err := r.Enqueue(ctx, m); err != nil {
b.Fatalf("Enqueue failed: %v", err) b.Fatalf("Enqueue failed: %v", err)
@@ -147,25 +147,26 @@ func BenchmarkDequeueMultipleQueues(b *testing.B) {
func BenchmarkDone(b *testing.B) { func BenchmarkDone(b *testing.B) {
r := setup(b) r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil) m1 := testutil.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil) m2 := testutil.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil) m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3} msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{ zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()}, {Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()}, {Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()}, {Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
} }
ctx := context.Background()
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName) testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName) testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer() b.StartTimer()
if err := r.Done(msgs[0]); err != nil { if err := r.Done(ctx, msgs[0]); err != nil {
b.Fatalf("Done failed: %v", err) b.Fatalf("Done failed: %v", err)
} }
} }
@@ -173,25 +174,26 @@ func BenchmarkDone(b *testing.B) {
func BenchmarkRetry(b *testing.B) { func BenchmarkRetry(b *testing.B) {
r := setup(b) r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil) m1 := testutil.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil) m2 := testutil.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil) m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3} msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{ zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()}, {Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()}, {Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()}, {Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
} }
ctx := context.Background()
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName) testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName) testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer() b.StartTimer()
if err := r.Retry(msgs[0], time.Now().Add(1*time.Minute), "error", true /*isFailure*/); err != nil { if err := r.Retry(ctx, msgs[0], time.Now().Add(1*time.Minute), "error", true /*isFailure*/); err != nil {
b.Fatalf("Retry failed: %v", err) b.Fatalf("Retry failed: %v", err)
} }
} }
@@ -199,25 +201,26 @@ func BenchmarkRetry(b *testing.B) {
func BenchmarkArchive(b *testing.B) { func BenchmarkArchive(b *testing.B) {
r := setup(b) r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil) m1 := testutil.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil) m2 := testutil.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil) m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3} msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{ zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()}, {Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()}, {Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()}, {Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
} }
ctx := context.Background()
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName) testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName) testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer() b.StartTimer()
if err := r.Archive(msgs[0], "error"); err != nil { if err := r.Archive(ctx, msgs[0], "error"); err != nil {
b.Fatalf("Archive failed: %v", err) b.Fatalf("Archive failed: %v", err)
} }
} }
@@ -225,25 +228,26 @@ func BenchmarkArchive(b *testing.B) {
func BenchmarkRequeue(b *testing.B) { func BenchmarkRequeue(b *testing.B) {
r := setup(b) r := setup(b)
m1 := asynqtest.NewTaskMessage("task1", nil) m1 := testutil.NewTaskMessage("task1", nil)
m2 := asynqtest.NewTaskMessage("task2", nil) m2 := testutil.NewTaskMessage("task2", nil)
m3 := asynqtest.NewTaskMessage("task3", nil) m3 := testutil.NewTaskMessage("task3", nil)
msgs := []*base.TaskMessage{m1, m2, m3} msgs := []*base.TaskMessage{m1, m2, m3}
zs := []base.Z{ zs := []base.Z{
{Message: m1, Score: time.Now().Add(10 * time.Second).Unix()}, {Message: m1, Score: time.Now().Add(10 * time.Second).Unix()},
{Message: m2, Score: time.Now().Add(20 * time.Second).Unix()}, {Message: m2, Score: time.Now().Add(20 * time.Second).Unix()},
{Message: m3, Score: time.Now().Add(30 * time.Second).Unix()}, {Message: m3, Score: time.Now().Add(30 * time.Second).Unix()},
} }
ctx := context.Background()
b.ResetTimer() b.ResetTimer()
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
asynqtest.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName) testutil.SeedActiveQueue(b, r.client, msgs, base.DefaultQueueName)
asynqtest.SeedDeadlines(b, r.client, zs, base.DefaultQueueName) testutil.SeedLease(b, r.client, zs, base.DefaultQueueName)
b.StartTimer() b.StartTimer()
if err := r.Requeue(msgs[0]); err != nil { if err := r.Requeue(ctx, msgs[0]); err != nil {
b.Fatalf("Requeue failed: %v", err) b.Fatalf("Requeue failed: %v", err)
} }
} }
@@ -254,7 +258,7 @@ func BenchmarkCheckAndEnqueue(b *testing.B) {
now := time.Now() now := time.Now()
var zs []base.Z var zs []base.Z
for i := -100; i < 100; i++ { for i := -100; i < 100; i++ {
msg := asynqtest.NewTaskMessage(fmt.Sprintf("task%d", i), nil) msg := testutil.NewTaskMessage(fmt.Sprintf("task%d", i), nil)
score := now.Add(time.Duration(i) * time.Second).Unix() score := now.Add(time.Duration(i) * time.Second).Unix()
zs = append(zs, base.Z{Message: msg, Score: score}) zs = append(zs, base.Z{Message: msg, Score: score})
} }
@@ -262,8 +266,8 @@ func BenchmarkCheckAndEnqueue(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
b.StopTimer() b.StopTimer()
asynqtest.FlushDB(b, r.client) testutil.FlushDB(b, r.client)
asynqtest.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName) testutil.SeedScheduledQueue(b, r.client, zs, base.DefaultQueueName)
b.StartTimer() b.StartTimer()
if err := r.ForwardIfReady(base.DefaultQueueName); err != nil { if err := r.ForwardIfReady(base.DefaultQueueName); err != nil {

View File

@@ -10,9 +10,9 @@ import (
"strings" "strings"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors" "github.com/hibiken/asynq/internal/errors"
"github.com/redis/go-redis/v9"
"github.com/spf13/cast" "github.com/spf13/cast"
) )
@@ -34,18 +34,30 @@ type Stats struct {
Paused bool Paused bool
// Size is the total number of tasks in the queue. // Size is the total number of tasks in the queue.
Size int Size int
// Groups is the total number of groups in the queue.
Groups int
// Number of tasks in each state. // Number of tasks in each state.
Pending int Pending int
Active int Active int
Scheduled int Scheduled int
Retry int Retry int
Archived int Archived int
Completed int Completed int
// Total number of tasks processed during the current date. Aggregating int
// Number of tasks processed within the current date.
// The number includes both succeeded and failed tasks. // The number includes both succeeded and failed tasks.
Processed int Processed int
// Total number of tasks failed during the current date. // Number of tasks failed within the current date.
Failed int Failed int
// Total number of tasks processed (both succeeded and failed) from this queue.
ProcessedTotal int
// Total number of tasks failed.
FailedTotal int
// Latency of the queue, measured by the oldest pending task in the queue. // Latency of the queue, measured by the oldest pending task in the queue.
Latency time.Duration Latency time.Duration
// Time this stats was taken. // Time this stats was taken.
@@ -65,17 +77,21 @@ type DailyStats struct {
Time time.Time Time time.Time
} }
// KEYS[1] -> asynq:<qname>:pending // KEYS[1] -> asynq:<qname>:pending
// KEYS[2] -> asynq:<qname>:active // KEYS[2] -> asynq:<qname>:active
// KEYS[3] -> asynq:<qname>:scheduled // KEYS[3] -> asynq:<qname>:scheduled
// KEYS[4] -> asynq:<qname>:retry // KEYS[4] -> asynq:<qname>:retry
// KEYS[5] -> asynq:<qname>:archived // KEYS[5] -> asynq:<qname>:archived
// KEYS[6] -> asynq:<qname>:completed // KEYS[6] -> asynq:<qname>:completed
// KEYS[7] -> asynq:<qname>:processed:<yyyy-mm-dd> // KEYS[7] -> asynq:<qname>:processed:<yyyy-mm-dd>
// KEYS[8] -> asynq:<qname>:failed:<yyyy-mm-dd> // KEYS[8] -> asynq:<qname>:failed:<yyyy-mm-dd>
// KEYS[9] -> asynq:<qname>:paused // KEYS[9] -> asynq:<qname>:processed
// // KEYS[10] -> asynq:<qname>:failed
// KEYS[11] -> asynq:<qname>:paused
// KEYS[12] -> asynq:<qname>:groups
// --------
// ARGV[1] -> task key prefix // ARGV[1] -> task key prefix
// ARGV[2] -> group key prefix
var currentStatsCmd = redis.NewScript(` var currentStatsCmd = redis.NewScript(`
local res = {} local res = {}
local pendingTaskCount = redis.call("LLEN", KEYS[1]) local pendingTaskCount = redis.call("LLEN", KEYS[1])
@@ -91,22 +107,17 @@ table.insert(res, KEYS[5])
table.insert(res, redis.call("ZCARD", KEYS[5])) table.insert(res, redis.call("ZCARD", KEYS[5]))
table.insert(res, KEYS[6]) table.insert(res, KEYS[6])
table.insert(res, redis.call("ZCARD", KEYS[6])) table.insert(res, redis.call("ZCARD", KEYS[6]))
local pcount = 0 for i=7,10 do
local p = redis.call("GET", KEYS[7]) local count = 0
if p then local n = redis.call("GET", KEYS[i])
pcount = tonumber(p) if n then
count = tonumber(n)
end
table.insert(res, KEYS[i])
table.insert(res, count)
end end
table.insert(res, KEYS[7]) table.insert(res, KEYS[11])
table.insert(res, pcount) table.insert(res, redis.call("EXISTS", KEYS[11]))
local fcount = 0
local f = redis.call("GET", KEYS[8])
if f then
fcount = tonumber(f)
end
table.insert(res, KEYS[8])
table.insert(res, fcount)
table.insert(res, KEYS[9])
table.insert(res, redis.call("EXISTS", KEYS[9]))
table.insert(res, "oldest_pending_since") table.insert(res, "oldest_pending_since")
if pendingTaskCount > 0 then if pendingTaskCount > 0 then
local id = redis.call("LRANGE", KEYS[1], -1, -1)[1] local id = redis.call("LRANGE", KEYS[1], -1, -1)[1]
@@ -114,6 +125,15 @@ if pendingTaskCount > 0 then
else else
table.insert(res, 0) table.insert(res, 0)
end end
local group_names = redis.call("SMEMBERS", KEYS[12])
table.insert(res, "group_size")
table.insert(res, table.getn(group_names))
local aggregating_count = 0
for _, gname in ipairs(group_names) do
aggregating_count = aggregating_count + redis.call("ZCARD", ARGV[2] .. gname)
end
table.insert(res, "aggregating_count")
table.insert(res, aggregating_count)
return res`) return res`)
// CurrentStats returns a current state of the queues. // CurrentStats returns a current state of the queues.
@@ -126,8 +146,8 @@ func (r *RDB) CurrentStats(qname string) (*Stats, error) {
if !exists { if !exists {
return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname}) return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
} }
now := time.Now() now := r.clock.Now()
res, err := currentStatsCmd.Run(context.Background(), r.client, []string{ keys := []string{
base.PendingKey(qname), base.PendingKey(qname),
base.ActiveKey(qname), base.ActiveKey(qname),
base.ScheduledKey(qname), base.ScheduledKey(qname),
@@ -136,8 +156,16 @@ func (r *RDB) CurrentStats(qname string) (*Stats, error) {
base.CompletedKey(qname), base.CompletedKey(qname),
base.ProcessedKey(qname, now), base.ProcessedKey(qname, now),
base.FailedKey(qname, now), base.FailedKey(qname, now),
base.ProcessedTotalKey(qname),
base.FailedTotalKey(qname),
base.PausedKey(qname), base.PausedKey(qname),
}, base.TaskKeyPrefix(qname)).Result() base.AllGroups(qname),
}
argv := []interface{}{
base.TaskKeyPrefix(qname),
base.GroupKeyPrefix(qname),
}
res, err := currentStatsCmd.Run(context.Background(), r.client, keys, argv...).Result()
if err != nil { if err != nil {
return nil, errors.E(op, errors.Unknown, err) return nil, errors.E(op, errors.Unknown, err)
} }
@@ -176,6 +204,10 @@ func (r *RDB) CurrentStats(qname string) (*Stats, error) {
stats.Processed = val stats.Processed = val
case base.FailedKey(qname, now): case base.FailedKey(qname, now):
stats.Failed = val stats.Failed = val
case base.ProcessedTotalKey(qname):
stats.ProcessedTotal = val
case base.FailedTotalKey(qname):
stats.FailedTotal = val
case base.PausedKey(qname): case base.PausedKey(qname):
if val == 0 { if val == 0 {
stats.Paused = false stats.Paused = false
@@ -188,6 +220,11 @@ func (r *RDB) CurrentStats(qname string) (*Stats, error) {
} else { } else {
stats.Latency = r.clock.Now().Sub(time.Unix(0, int64(val))) stats.Latency = r.clock.Now().Sub(time.Unix(0, int64(val)))
} }
case "group_size":
stats.Groups = val
case "aggregating_count":
stats.Aggregating = val
size += val
} }
} }
stats.Size = size stats.Size = size
@@ -209,9 +246,12 @@ func (r *RDB) CurrentStats(qname string) (*Stats, error) {
// KEYS[4] -> asynq:{qname}:retry
// KEYS[5] -> asynq:{qname}:archived
// KEYS[6] -> asynq:{qname}:completed
-//
-// ARGV[1] -> asynq:{qname}:t:
-// ARGV[2] -> sample_size (e.g 20)
+// KEYS[7] -> asynq:{qname}:groups
+// -------
+// ARGV[1] -> asynq:{qname}:t: (task key prefix)
+// ARGV[2] -> task sample size per redis list/zset (e.g 20)
+// ARGV[3] -> group sample size
+// ARGV[4] -> asynq:{qname}:g: (group key prefix)
var memoryUsageCmd = redis.NewScript(`
local sample_size = tonumber(ARGV[2])
if sample_size <= 0 then
@@ -252,12 +292,36 @@ for i=3,6 do
		memusg = memusg + m
	end
end
+local groups = redis.call("SMEMBERS", KEYS[7])
+if table.getn(groups) > 0 then
+	local agg_task_count = 0
+	local agg_task_sample_total = 0
+	local agg_task_sample_size = 0
+	for i, gname in ipairs(groups) do
+		local group_key = ARGV[4] .. gname
+		agg_task_count = agg_task_count + redis.call("ZCARD", group_key)
+		if i <= tonumber(ARGV[3]) then
+			local ids = redis.call("ZRANGE", group_key, 0, sample_size - 1)
+			for _, id in ipairs(ids) do
+				local bytes = redis.call("MEMORY", "USAGE", ARGV[1] .. id)
+				agg_task_sample_total = agg_task_sample_total + bytes
+				agg_task_sample_size = agg_task_sample_size + 1
+			end
+		end
+	end
+	local avg = agg_task_sample_total / agg_task_sample_size
+	memusg = memusg + (avg * agg_task_count)
+end
return memusg
`)
func (r *RDB) memoryUsage(qname string) (int64, error) {
	var op errors.Op = "rdb.memoryUsage"
-	const sampleSize = 20
+	const (
+		taskSampleSize  = 20
+		groupSampleSize = 5
+	)
	keys := []string{
		base.ActiveKey(qname),
		base.PendingKey(qname),
@@ -265,10 +329,13 @@ func (r *RDB) memoryUsage(qname string) (int64, error) {
		base.RetryKey(qname),
		base.ArchivedKey(qname),
		base.CompletedKey(qname),
+		base.AllGroups(qname),
	}
	argv := []interface{}{
		base.TaskKeyPrefix(qname),
-		sampleSize,
+		taskSampleSize,
+		groupSampleSize,
+		base.GroupKeyPrefix(qname),
	}
	res, err := memoryUsageCmd.Run(context.Background(), r.client, keys, argv...).Result()
	if err != nil {
@@ -276,7 +343,7 @@ func (r *RDB) memoryUsage(qname string) (int64, error) {
	}
	usg, err := cast.ToInt64E(res)
	if err != nil {
-		return 0, errors.E(op, errors.Internal, fmt.Sprintf("could not cast script return value to int64"))
+		return 0, errors.E(op, errors.Internal, "could not cast script return value to int64")
	}
	return usg, nil
}
@@ -306,7 +373,7 @@ func (r *RDB) HistoricalStats(qname string, n int) ([]*DailyStats, error) {
		return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	}
	const day = 24 * time.Hour
-	now := time.Now().UTC()
+	now := r.clock.Now().UTC()
	var days []time.Time
	var keys []string
	for i := 0; i < n; i++ {
@@ -423,7 +490,7 @@ func (r *RDB) GetTaskInfo(qname, id string) (*base.TaskInfo, error) {
	keys := []string{base.TaskKey(qname, id)}
	argv := []interface{}{
		id,
-		time.Now().Unix(),
+		r.clock.Now().Unix(),
		base.QueueKeyPrefix(qname),
	}
	res, err := getTaskInfoCmd.Run(context.Background(), r.client, keys, argv...).Result()
@@ -480,6 +547,56 @@ func (r *RDB) GetTaskInfo(qname, id string) (*base.TaskInfo, error) {
	}, nil
}
type GroupStat struct {
// Name of the group.
Group string
// Size of the group.
Size int
}
// KEYS[1] -> asynq:{<qname>}:groups
// -------
// ARGV[1] -> group key prefix
//
// Output:
// list of group name and size (e.g. group1 size1 group2 size2 ...)
//
// Time Complexity:
// O(N) where N being the number of groups in the given queue.
var groupStatsCmd = redis.NewScript(`
local res = {}
local group_names = redis.call("SMEMBERS", KEYS[1])
for _, gname in ipairs(group_names) do
local size = redis.call("ZCARD", ARGV[1] .. gname)
table.insert(res, gname)
table.insert(res, size)
end
return res
`)
func (r *RDB) GroupStats(qname string) ([]*GroupStat, error) {
var op errors.Op = "RDB.GroupStats"
keys := []string{base.AllGroups(qname)}
argv := []interface{}{base.GroupKeyPrefix(qname)}
res, err := groupStatsCmd.Run(context.Background(), r.client, keys, argv...).Result()
if err != nil {
return nil, errors.E(op, errors.Unknown, err)
}
data, err := cast.ToSliceE(res)
if err != nil {
return nil, errors.E(op, errors.Internal, "cast error: unexpected return value from Lua script")
}
var stats []*GroupStat
for i := 0; i < len(data); i += 2 {
stats = append(stats, &GroupStat{
Group: data[i].(string),
Size: int(data[i+1].(int64)),
})
}
return stats, nil
}
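As a quick illustration of the new API, here is a minimal sketch (not part of this change) of how GroupStats might be consumed from code inside the asynq module; the helper name and the printing are assumptions, and internal/rdb is not importable from outside the module:

	// printGroupStats is a hypothetical helper showing GroupStats usage.
	func printGroupStats(r *RDB, qname string) error {
		stats, err := r.GroupStats(qname) // e.g. qname = "default"
		if err != nil {
			return err
		}
		for _, s := range stats {
			fmt.Printf("group=%q size=%d\n", s.Group, s.Size)
		}
		return nil
	}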
// Pagination specifies the page size and page number
// for the list operation.
type Pagination struct {
@@ -584,7 +701,7 @@ func (r *RDB) listMessages(qname string, state base.TaskState, pgn Pagination) (
	}
	var nextProcessAt time.Time
	if state == base.TaskStatePending {
-		nextProcessAt = time.Now()
+		nextProcessAt = r.clock.Now()
	}
	infos = append(infos, &base.TaskInfo{
		Message: m,
@@ -609,7 +726,7 @@ func (r *RDB) ListScheduled(qname string, pgn Pagination) ([]*base.TaskInfo, err
	if !exists {
		return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	}
-	res, err := r.listZSetEntries(qname, base.TaskStateScheduled, pgn)
+	res, err := r.listZSetEntries(qname, base.TaskStateScheduled, base.ScheduledKey(qname), pgn)
	if err != nil {
		return nil, errors.E(op, errors.CanonicalCode(err), err)
	}
@@ -627,7 +744,7 @@ func (r *RDB) ListRetry(qname string, pgn Pagination) ([]*base.TaskInfo, error)
	if !exists {
		return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	}
-	res, err := r.listZSetEntries(qname, base.TaskStateRetry, pgn)
+	res, err := r.listZSetEntries(qname, base.TaskStateRetry, base.RetryKey(qname), pgn)
	if err != nil {
		return nil, errors.E(op, errors.CanonicalCode(err), err)
	}
@@ -644,7 +761,7 @@ func (r *RDB) ListArchived(qname string, pgn Pagination) ([]*base.TaskInfo, erro
	if !exists {
		return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	}
-	zs, err := r.listZSetEntries(qname, base.TaskStateArchived, pgn)
+	zs, err := r.listZSetEntries(qname, base.TaskStateArchived, base.ArchivedKey(qname), pgn)
	if err != nil {
		return nil, errors.E(op, errors.CanonicalCode(err), err)
	}
@@ -661,7 +778,24 @@ func (r *RDB) ListCompleted(qname string, pgn Pagination) ([]*base.TaskInfo, err
	if !exists {
		return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	}
-	zs, err := r.listZSetEntries(qname, base.TaskStateCompleted, pgn)
+	zs, err := r.listZSetEntries(qname, base.TaskStateCompleted, base.CompletedKey(qname), pgn)
if err != nil {
return nil, errors.E(op, errors.CanonicalCode(err), err)
}
return zs, nil
}
// ListAggregating returns all tasks from the given group.
func (r *RDB) ListAggregating(qname, gname string, pgn Pagination) ([]*base.TaskInfo, error) {
var op errors.Op = "rdb.ListAggregating"
exists, err := r.queueExists(qname)
if err != nil {
return nil, errors.E(op, errors.Unknown, &errors.RedisCommandError{Command: "sismember", Err: err})
}
if !exists {
return nil, errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
}
zs, err := r.listZSetEntries(qname, base.TaskStateAggregating, base.GroupKey(qname, gname), pgn)
	if err != nil {
		return nil, errors.E(op, errors.CanonicalCode(err), err)
	}
@@ -697,20 +831,7 @@ return data
// listZSetEntries returns a list of message and score pairs in Redis sorted-set
// with the given key.
-func (r *RDB) listZSetEntries(qname string, state base.TaskState, pgn Pagination) ([]*base.TaskInfo, error) {
-	var key string
-	switch state {
-	case base.TaskStateScheduled:
-		key = base.ScheduledKey(qname)
-	case base.TaskStateRetry:
-		key = base.RetryKey(qname)
-	case base.TaskStateArchived:
-		key = base.ArchivedKey(qname)
-	case base.TaskStateCompleted:
-		key = base.CompletedKey(qname)
-	default:
-		panic(fmt.Sprintf("unsupported task state: %v", state))
-	}
+func (r *RDB) listZSetEntries(qname string, state base.TaskState, key string, pgn Pagination) ([]*base.TaskInfo, error) {
	res, err := listZSetEntriesCmd.Run(context.Background(), r.client, []string{key},
		pgn.start(), pgn.stop(), base.TaskKeyPrefix(qname)).Result()
	if err != nil {
@@ -801,14 +922,67 @@ func (r *RDB) RunAllArchivedTasks(qname string) (int64, error) {
	return n, nil
}
// runAllAggregatingCmd schedules all tasks in the group to run individually.
//
// Input:
// KEYS[1] -> asynq:{<qname>}:g:<gname>
// KEYS[2] -> asynq:{<qname>}:pending
// KEYS[3] -> asynq:{<qname>}:groups
// -------
// ARGV[1] -> task key prefix
// ARGV[2] -> group name
//
// Output:
// integer: number of tasks scheduled to run
var runAllAggregatingCmd = redis.NewScript(`
local ids = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, id in ipairs(ids) do
redis.call("LPUSH", KEYS[2], id)
redis.call("HSET", ARGV[1] .. id, "state", "pending")
end
redis.call("DEL", KEYS[1])
redis.call("SREM", KEYS[3], ARGV[2])
return table.getn(ids)
`)
// RunAllAggregatingTasks schedules all tasks from the given queue to run
// and returns the number of tasks scheduled to run.
// If a queue with the given name doesn't exist, it returns QueueNotFoundError.
func (r *RDB) RunAllAggregatingTasks(qname, gname string) (int64, error) {
var op errors.Op = "rdb.RunAllAggregatingTasks"
if err := r.checkQueueExists(qname); err != nil {
return 0, errors.E(op, errors.CanonicalCode(err), err)
}
keys := []string{
base.GroupKey(qname, gname),
base.PendingKey(qname),
base.AllGroups(qname),
}
argv := []interface{}{
base.TaskKeyPrefix(qname),
gname,
}
res, err := runAllAggregatingCmd.Run(context.Background(), r.client, keys, argv...).Result()
if err != nil {
return 0, errors.E(op, errors.Internal, err)
}
n, ok := res.(int64)
if !ok {
return 0, errors.E(op, errors.Internal, fmt.Sprintf("unexpected return value from script %v", res))
}
return n, nil
}
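A minimal usage sketch, assuming r is an *RDB connected to a test Redis instance; the queue and group names are made up:

	// Hypothetical: force every task in group "emails" of queue "default"
	// out of the aggregation set and back into the pending list.
	n, err := r.RunAllAggregatingTasks("default", "emails")
	if err != nil {
		log.Fatalf("RunAllAggregatingTasks failed: %v", err)
	}
	log.Printf("scheduled %d aggregating task(s) to run", n)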
// runTaskCmd is a Lua script that updates the given task to pending state.
//
// Input:
// KEYS[1] -> asynq:{<qname>}:t:<task_id>
// KEYS[2] -> asynq:{<qname>}:pending
+// KEYS[3] -> asynq:{<qname>}:groups
// --
// ARGV[1] -> task ID
// ARGV[2] -> queue key prefix; asynq:{<qname>}:
+// ARGV[3] -> group key prefix
//
// Output:
// Numeric code indicating the status:
@@ -821,15 +995,24 @@ var runTaskCmd = redis.NewScript(`
if redis.call("EXISTS", KEYS[1]) == 0 then
	return 0
end
-local state = redis.call("HGET", KEYS[1], "state")
+local state, group = unpack(redis.call("HMGET", KEYS[1], "state", "group"))
if state == "active" then
	return -1
elseif state == "pending" then
	return -2
-end
-local n = redis.call("ZREM", ARGV[2] .. state, ARGV[1])
-if n == 0 then
-	return redis.error_reply("internal error: task id not found in zset " .. tostring(state))
+elseif state == "aggregating" then
+	local n = redis.call("ZREM", ARGV[3] .. group, ARGV[1])
+	if n == 0 then
+		return redis.error_reply("internal error: task id not found in zset " .. tostring(ARGV[3] .. group))
+	end
+	if redis.call("ZCARD", ARGV[3] .. group) == 0 then
+		redis.call("SREM", KEYS[3], group)
+	end
+else
+	local n = redis.call("ZREM", ARGV[2] .. state, ARGV[1])
+	if n == 0 then
+		return redis.error_reply("internal error: task id not found in zset " .. tostring(ARGV[2] .. state))
+	end
end
redis.call("LPUSH", KEYS[2], ARGV[1])
redis.call("HSET", KEYS[1], "state", "pending")
@@ -850,10 +1033,12 @@ func (r *RDB) RunTask(qname, id string) error {
	keys := []string{
		base.TaskKey(qname, id),
		base.PendingKey(qname),
+		base.AllGroups(qname),
	}
	argv := []interface{}{
		id,
		base.QueueKeyPrefix(qname),
+		base.GroupKeyPrefix(qname),
	}
	res, err := runTaskCmd.Run(context.Background(), r.client, keys, argv...).Result()
	if err != nil {
@@ -952,6 +1137,66 @@ func (r *RDB) ArchiveAllScheduledTasks(qname string) (int64, error) {
	return n, nil
}
// archiveAllAggregatingCmd archives all tasks in the given group.
//
// Input:
// KEYS[1] -> asynq:{<qname>}:g:<gname>
// KEYS[2] -> asynq:{<qname>}:archived
// KEYS[3] -> asynq:{<qname>}:groups
// -------
// ARGV[1] -> current timestamp
// ARGV[2] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[3] -> max number of tasks in archive (e.g., 100)
// ARGV[4] -> task key prefix (asynq:{<qname>}:t:)
// ARGV[5] -> group name
//
// Output:
// integer: Number of tasks archived
var archiveAllAggregatingCmd = redis.NewScript(`
local ids = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, id in ipairs(ids) do
redis.call("ZADD", KEYS[2], ARGV[1], id)
redis.call("HSET", ARGV[4] .. id, "state", "archived")
end
redis.call("ZREMRANGEBYSCORE", KEYS[2], "-inf", ARGV[2])
redis.call("ZREMRANGEBYRANK", KEYS[2], 0, -ARGV[3])
redis.call("DEL", KEYS[1])
redis.call("SREM", KEYS[3], ARGV[5])
return table.getn(ids)
`)
// ArchiveAllAggregatingTasks archives all aggregating tasks from the given group
// and returns the number of tasks archived.
// If a queue with the given name doesn't exist, it returns QueueNotFoundError.
func (r *RDB) ArchiveAllAggregatingTasks(qname, gname string) (int64, error) {
var op errors.Op = "rdb.ArchiveAllAggregatingTasks"
if err := r.checkQueueExists(qname); err != nil {
return 0, errors.E(op, errors.CanonicalCode(err), err)
}
keys := []string{
base.GroupKey(qname, gname),
base.ArchivedKey(qname),
base.AllGroups(qname),
}
now := r.clock.Now()
argv := []interface{}{
now.Unix(),
now.AddDate(0, 0, -archivedExpirationInDays).Unix(),
maxArchiveSize,
base.TaskKeyPrefix(qname),
gname,
}
res, err := archiveAllAggregatingCmd.Run(context.Background(), r.client, keys, argv...).Result()
if err != nil {
return 0, errors.E(op, errors.Internal, err)
}
n, ok := res.(int64)
if !ok {
return 0, errors.E(op, errors.Internal, fmt.Sprintf("unexpected return value from script %v", res))
}
return n, nil
}
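For context, a hedged fragment showing how this might be invoked (again, r, queue, and group names are assumptions); note the script also trims the archive by the cutoff timestamp and size cap passed in argv:

	// Hypothetical: archive an entire group in one script call instead of task by task.
	n, err := r.ArchiveAllAggregatingTasks("default", "emails")
	if err != nil {
		log.Fatalf("ArchiveAllAggregatingTasks failed: %v", err)
	}
	log.Printf("archived %d task(s) from group", n)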
// archiveAllPendingCmd is a Lua script that moves all pending tasks from
// the given queue to archived state.
//
@@ -989,7 +1234,7 @@ func (r *RDB) ArchiveAllPendingTasks(qname string) (int64, error) {
		base.PendingKey(qname),
		base.ArchivedKey(qname),
	}
-	now := time.Now()
+	now := r.clock.Now()
	argv := []interface{}{
		now.Unix(),
		now.AddDate(0, 0, -archivedExpirationInDays).Unix(),
@@ -1012,12 +1257,14 @@ func (r *RDB) ArchiveAllPendingTasks(qname string) (int64, error) {
// Input:
// KEYS[1] -> task key (asynq:{<qname>}:t:<task_id>)
// KEYS[2] -> archived key (asynq:{<qname>}:archived)
+// KEYS[3] -> all groups key (asynq:{<qname>}:groups)
// --
// ARGV[1] -> id of the task to archive
// ARGV[2] -> current timestamp
// ARGV[3] -> cutoff timestamp (e.g., 90 days ago)
// ARGV[4] -> max number of tasks in archived state (e.g., 100)
// ARGV[5] -> queue key prefix (asynq:{<qname>}:)
+// ARGV[6] -> group key prefix (asynq:{<qname>}:g:)
//
// Output:
// Numeric code indicating the status:
@@ -1030,7 +1277,7 @@ var archiveTaskCmd = redis.NewScript(`
if redis.call("EXISTS", KEYS[1]) == 0 then
	return 0
end
-local state = redis.call("HGET", KEYS[1], "state")
+local state, group = unpack(redis.call("HMGET", KEYS[1], "state", "group"))
if state == "active" then
	return -2
end
@@ -1039,11 +1286,18 @@ if state == "archived" then
end
if state == "pending" then
	if redis.call("LREM", ARGV[5] .. state, 1, ARGV[1]) == 0 then
-		return redis.error_reply("task id not found in list " .. tostring(state))
+		return redis.error_reply("task id not found in list " .. tostring(ARGV[5] .. state))
	end
-else
+elseif state == "aggregating" then
+	if redis.call("ZREM", ARGV[6] .. group, ARGV[1]) == 0 then
+		return redis.error_reply("task id not found in zset " .. tostring(ARGV[6] .. group))
+	end
+	if redis.call("ZCARD", ARGV[6] .. group) == 0 then
+		redis.call("SREM", KEYS[3], group)
+	end
+else
	if redis.call("ZREM", ARGV[5] .. state, ARGV[1]) == 0 then
-		return redis.error_reply("task id not found in zset " .. tostring(state))
+		return redis.error_reply("task id not found in zset " .. tostring(ARGV[5] .. state))
	end
end
redis.call("ZADD", KEYS[2], ARGV[2], ARGV[1])
@@ -1068,14 +1322,16 @@ func (r *RDB) ArchiveTask(qname, id string) error {
	keys := []string{
		base.TaskKey(qname, id),
		base.ArchivedKey(qname),
+		base.AllGroups(qname),
	}
-	now := time.Now()
+	now := r.clock.Now()
	argv := []interface{}{
		id,
		now.Unix(),
		now.AddDate(0, 0, -archivedExpirationInDays).Unix(),
		maxArchiveSize,
		base.QueueKeyPrefix(qname),
+		base.GroupKeyPrefix(qname),
	}
	res, err := archiveTaskCmd.Run(context.Background(), r.client, keys, argv...).Result()
	if err != nil {
@@ -1093,7 +1349,7 @@ func (r *RDB) ArchiveTask(qname, id string) error {
	case -1:
		return errors.E(op, errors.FailedPrecondition, &errors.TaskAlreadyArchivedError{Queue: qname, ID: id})
	case -2:
-		return errors.E(op, errors.FailedPrecondition, "cannot archive task in active state. use CancelTask instead.")
+		return errors.E(op, errors.FailedPrecondition, "cannot archive task in active state. use CancelProcessing instead.")
	case -3:
		return errors.E(op, errors.NotFound, &errors.QueueNotFoundError{Queue: qname})
	default:
@@ -1134,7 +1390,7 @@ func (r *RDB) archiveAll(src, dst, qname string) (int64, error) {
		src,
		dst,
	}
-	now := time.Now()
+	now := r.clock.Now()
	argv := []interface{}{
		now.Unix(),
		now.AddDate(0, 0, -archivedExpirationInDays).Unix(),
@@ -1158,9 +1414,11 @@ func (r *RDB) archiveAll(src, dst, qname string) (int64, error) {
// Input:
// KEYS[1] -> asynq:{<qname>}:t:<task_id>
+// KEYS[2] -> asynq:{<qname>}:groups
// --
// ARGV[1] -> task ID
// ARGV[2] -> queue key prefix
+// ARGV[3] -> group key prefix
//
// Output:
// Numeric code indicating the status:
@@ -1171,17 +1429,24 @@ var deleteTaskCmd = redis.NewScript(`
if redis.call("EXISTS", KEYS[1]) == 0 then
	return 0
end
-local state = redis.call("HGET", KEYS[1], "state")
+local state, group = unpack(redis.call("HMGET", KEYS[1], "state", "group"))
if state == "active" then
	return -1
end
if state == "pending" then
	if redis.call("LREM", ARGV[2] .. state, 0, ARGV[1]) == 0 then
-		return redis.error_reply("task is not found in list: " .. tostring(state))
+		return redis.error_reply("task is not found in list: " .. tostring(ARGV[2] .. state))
	end
+elseif state == "aggregating" then
+	if redis.call("ZREM", ARGV[3] .. group, ARGV[1]) == 0 then
+		return redis.error_reply("task is not found in zset: " .. tostring(ARGV[3] .. group))
+	end
+	if redis.call("ZCARD", ARGV[3] .. group) == 0 then
+		redis.call("SREM", KEYS[2], group)
+	end
else
	if redis.call("ZREM", ARGV[2] .. state, ARGV[1]) == 0 then
-		return redis.error_reply("task is not found in zset: " .. tostring(state))
+		return redis.error_reply("task is not found in zset: " .. tostring(ARGV[2] .. state))
	end
end
local unique_key = redis.call("HGET", KEYS[1], "unique_key")
@@ -1204,10 +1469,12 @@ func (r *RDB) DeleteTask(qname, id string) error {
	}
	keys := []string{
		base.TaskKey(qname, id),
+		base.AllGroups(qname),
	}
	argv := []interface{}{
		id,
		base.QueueKeyPrefix(qname),
+		base.GroupKeyPrefix(qname),
	}
	res, err := deleteTaskCmd.Run(context.Background(), r.client, keys, argv...).Result()
	if err != nil {
@@ -1223,7 +1490,7 @@ func (r *RDB) DeleteTask(qname, id string) error {
	case 0:
		return errors.E(op, errors.NotFound, &errors.TaskNotFoundError{Queue: qname, ID: id})
	case -1:
-		return errors.E(op, errors.FailedPrecondition, "cannot delete task in active state. use CancelTask instead.")
+		return errors.E(op, errors.FailedPrecondition, "cannot delete task in active state. use CancelProcessing instead.")
	default:
		return errors.E(op, errors.Internal, fmt.Sprintf("unexpected return value from deleteTaskCmd script: %d", n))
	}
@@ -1326,6 +1593,50 @@ func (r *RDB) deleteAll(key, qname string) (int64, error) {
	return n, nil
}
// deleteAllAggregatingCmd deletes all tasks from the given group.
//
// Input:
// KEYS[1] -> asynq:{<qname>}:g:<gname>
// KEYS[2] -> asynq:{<qname>}:groups
// -------
// ARGV[1] -> task key prefix
// ARGV[2] -> group name
var deleteAllAggregatingCmd = redis.NewScript(`
local ids = redis.call("ZRANGE", KEYS[1], 0, -1)
for _, id in ipairs(ids) do
redis.call("DEL", ARGV[1] .. id)
end
redis.call("SREM", KEYS[2], ARGV[2])
redis.call("DEL", KEYS[1])
return table.getn(ids)
`)
// DeleteAllAggregatingTasks deletes all aggregating tasks from the given group
// and returns the number of tasks deleted.
func (r *RDB) DeleteAllAggregatingTasks(qname, gname string) (int64, error) {
var op errors.Op = "rdb.DeleteAllAggregatingTasks"
if err := r.checkQueueExists(qname); err != nil {
return 0, errors.E(op, errors.CanonicalCode(err), err)
}
keys := []string{
base.GroupKey(qname, gname),
base.AllGroups(qname),
}
argv := []interface{}{
base.TaskKeyPrefix(qname),
gname,
}
res, err := deleteAllAggregatingCmd.Run(context.Background(), r.client, keys, argv...).Result()
if err != nil {
return 0, errors.E(op, errors.Unknown, err)
}
n, ok := res.(int64)
if !ok {
return 0, errors.E(op, errors.Internal, "command error: unexpected return value %v", res)
}
return n, nil
}
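A hedged fragment for completeness (r and the names are assumptions); unlike the run/archive variants above, this one also deletes each task hash, not just the group zset:

	// Hypothetical: drop a whole group, including each task hash, in one script call.
	n, err := r.DeleteAllAggregatingTasks("default", "emails")
	if err != nil {
		log.Fatalf("DeleteAllAggregatingTasks failed: %v", err)
	}
	log.Printf("deleted %d task(s) and removed the group", n)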
// deleteAllPendingCmd deletes all pending tasks from the given queue.
//
// Input:
@@ -1377,7 +1688,7 @@ func (r *RDB) DeleteAllPendingTasks(qname string) (int64, error) {
// KEYS[3] -> asynq:{<qname>}:scheduled
// KEYS[4] -> asynq:{<qname>}:retry
// KEYS[5] -> asynq:{<qname>}:archived
-// KEYS[6] -> asynq:{<qname>}:deadlines
+// KEYS[6] -> asynq:{<qname>}:lease
// --
// ARGV[1] -> task key prefix
//
@@ -1437,7 +1748,7 @@ return 1`)
// KEYS[3] -> asynq:{<qname>}:scheduled
// KEYS[4] -> asynq:{<qname>}:retry
// KEYS[5] -> asynq:{<qname>}:archived
-// KEYS[6] -> asynq:{<qname>}:deadlines
+// KEYS[6] -> asynq:{<qname>}:lease
// --
// ARGV[1] -> task key prefix
//
@@ -1506,7 +1817,7 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
		base.ScheduledKey(qname),
		base.RetryKey(qname),
		base.ArchivedKey(qname),
-		base.DeadlinesKey(qname),
+		base.LeaseKey(qname),
	}
	res, err := script.Run(context.Background(), r.client, keys, base.TaskKeyPrefix(qname)).Result()
	if err != nil {
@@ -1521,6 +1832,7 @@ func (r *RDB) RemoveQueue(qname string, force bool) error {
		if err := r.client.SRem(context.Background(), base.AllQueues, qname).Err(); err != nil {
			return errors.E(op, errors.Unknown, err)
		}
+		r.queuesPublished.Delete(qname)
		return nil
	case -1:
		return errors.E(op, errors.NotFound, &errors.QueueNotEmptyError{Queue: qname})
@@ -1540,7 +1852,7 @@ return keys`)
// ListServers returns the list of server info.
func (r *RDB) ListServers() ([]*base.ServerInfo, error) {
-	now := time.Now()
+	now := r.clock.Now()
	res, err := listServerKeysCmd.Run(context.Background(), r.client, []string{base.AllServers}, now.Unix()).Result()
	if err != nil {
		return nil, err
@@ -1574,7 +1886,7 @@ return keys`)
// ListWorkers returns the list of worker stats.
func (r *RDB) ListWorkers() ([]*base.WorkerInfo, error) {
	var op errors.Op = "rdb.ListWorkers"
-	now := time.Now()
+	now := r.clock.Now()
	res, err := listWorkersCmd.Run(context.Background(), r.client, []string{base.AllWorkers}, now.Unix()).Result()
	if err != nil {
		return nil, errors.E(op, errors.Unknown, err)
@@ -1609,7 +1921,7 @@ return keys`)
// ListSchedulerEntries returns the list of scheduler entries.
func (r *RDB) ListSchedulerEntries() ([]*base.SchedulerEntry, error) {
-	now := time.Now()
+	now := r.clock.Now()
	res, err := listSchedulerKeysCmd.Run(context.Background(), r.client, []string{base.AllSchedulers}, now.Unix()).Result()
	if err != nil {
		return nil, err
@@ -1660,7 +1972,7 @@ func (r *RDB) ListSchedulerEnqueueEvents(entryID string, pgn Pagination) ([]*bas
// Pause pauses processing of tasks from the given queue.
func (r *RDB) Pause(qname string) error {
	key := base.PausedKey(qname)
-	ok, err := r.client.SetNX(context.Background(), key, time.Now().Unix(), 0).Result()
+	ok, err := r.client.SetNX(context.Background(), key, r.clock.Now().Unix(), 0).Result()
	if err != nil {
		return err
	}

(Three file diffs suppressed because they were too large to display.)

@@ -11,11 +11,11 @@ import (
	"sync"
	"time"

-	"github.com/go-redis/redis/v8"
	"github.com/hibiken/asynq/internal/base"
+	"github.com/redis/go-redis/v9"
)

-var errRedisDown = errors.New("asynqtest: redis is down")
+var errRedisDown = errors.New("testutil: redis is down")

// TestBroker is a broker implementation which enables
// to simulate Redis failure in tests.
@@ -73,31 +73,31 @@ func (tb *TestBroker) Dequeue(qnames ...string) (*base.TaskMessage, time.Time, e
	return tb.real.Dequeue(qnames...)
}

-func (tb *TestBroker) Done(msg *base.TaskMessage) error {
+func (tb *TestBroker) Done(ctx context.Context, msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.Done(msg)
+	return tb.real.Done(ctx, msg)
}

-func (tb *TestBroker) MarkAsComplete(msg *base.TaskMessage) error {
+func (tb *TestBroker) MarkAsComplete(ctx context.Context, msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.MarkAsComplete(msg)
+	return tb.real.MarkAsComplete(ctx, msg)
}

-func (tb *TestBroker) Requeue(msg *base.TaskMessage) error {
+func (tb *TestBroker) Requeue(ctx context.Context, msg *base.TaskMessage) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.Requeue(msg)
+	return tb.real.Requeue(ctx, msg)
}
func (tb *TestBroker) Schedule(ctx context.Context, msg *base.TaskMessage, processAt time.Time) error {
@@ -118,22 +118,22 @@ func (tb *TestBroker) ScheduleUnique(ctx context.Context, msg *base.TaskMessage,
	return tb.real.ScheduleUnique(ctx, msg, processAt, ttl)
}

-func (tb *TestBroker) Retry(msg *base.TaskMessage, processAt time.Time, errMsg string, isFailure bool) error {
+func (tb *TestBroker) Retry(ctx context.Context, msg *base.TaskMessage, processAt time.Time, errMsg string, isFailure bool) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.Retry(msg, processAt, errMsg, isFailure)
+	return tb.real.Retry(ctx, msg, processAt, errMsg, isFailure)
}

-func (tb *TestBroker) Archive(msg *base.TaskMessage, errMsg string) error {
+func (tb *TestBroker) Archive(ctx context.Context, msg *base.TaskMessage, errMsg string) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.Archive(msg, errMsg)
+	return tb.real.Archive(ctx, msg, errMsg)
}
func (tb *TestBroker) ForwardIfReady(qnames ...string) error {
@@ -145,22 +145,31 @@ func (tb *TestBroker) ForwardIfReady(qnames ...string) error {
	return tb.real.ForwardIfReady(qnames...)
}

-func (tb *TestBroker) DeleteExpiredCompletedTasks(qname string) error {
+func (tb *TestBroker) DeleteExpiredCompletedTasks(qname string, batchSize int) error {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return errRedisDown
	}
-	return tb.real.DeleteExpiredCompletedTasks(qname)
+	return tb.real.DeleteExpiredCompletedTasks(qname, batchSize)
}

-func (tb *TestBroker) ListDeadlineExceeded(deadline time.Time, qnames ...string) ([]*base.TaskMessage, error) {
+func (tb *TestBroker) ListLeaseExpired(cutoff time.Time, qnames ...string) ([]*base.TaskMessage, error) {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	if tb.sleeping {
		return nil, errRedisDown
	}
-	return tb.real.ListDeadlineExceeded(deadline, qnames...)
+	return tb.real.ListLeaseExpired(cutoff, qnames...)
+}
+
+func (tb *TestBroker) ExtendLease(qname string, ids ...string) (time.Time, error) {
+	tb.mu.Lock()
+	defer tb.mu.Unlock()
+	if tb.sleeping {
+		return time.Time{}, errRedisDown
+	}
+	return tb.real.ExtendLease(qname, ids...)
}
func (tb *TestBroker) WriteServerState(info *base.ServerInfo, workers []*base.WorkerInfo, ttl time.Duration) error {
@@ -225,3 +234,66 @@ func (tb *TestBroker) Close() error {
	}
	return tb.real.Close()
}
func (tb *TestBroker) AddToGroup(ctx context.Context, msg *base.TaskMessage, gname string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.AddToGroup(ctx, msg, gname)
}
func (tb *TestBroker) AddToGroupUnique(ctx context.Context, msg *base.TaskMessage, gname string, ttl time.Duration) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.AddToGroupUnique(ctx, msg, gname, ttl)
}
func (tb *TestBroker) ListGroups(qname string) ([]string, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, errRedisDown
}
return tb.real.ListGroups(qname)
}
func (tb *TestBroker) AggregationCheck(qname, gname string, t time.Time, gracePeriod, maxDelay time.Duration, maxSize int) (aggregationSetID string, err error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return "", errRedisDown
}
return tb.real.AggregationCheck(qname, gname, t, gracePeriod, maxDelay, maxSize)
}
func (tb *TestBroker) ReadAggregationSet(qname, gname, aggregationSetID string) ([]*base.TaskMessage, time.Time, error) {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return nil, time.Time{}, errRedisDown
}
return tb.real.ReadAggregationSet(qname, gname, aggregationSetID)
}
func (tb *TestBroker) DeleteAggregationSet(ctx context.Context, qname, gname, aggregationSetID string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.DeleteAggregationSet(ctx, qname, gname, aggregationSetID)
}
func (tb *TestBroker) ReclaimStaleAggregationSets(qname string) error {
tb.mu.Lock()
defer tb.mu.Unlock()
if tb.sleeping {
return errRedisDown
}
return tb.real.ReclaimStaleAggregationSets(qname)
}
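To show how these wrappers are meant to be used, here is a hypothetical test sketch; rdbClient is an assumed *rdb.RDB test fixture, while Sleep and Wakeup are the TestBroker outage toggles that the methods above consult:

	func TestEnqueueWhileRedisDown(t *testing.T) {
		tb := testbroker.NewTestBroker(rdbClient)
		tb.Sleep() // simulate Redis going down
		msg := testutil.NewTaskMessageBuilder().SetType("email:send").Build()
		if err := tb.Enqueue(context.Background(), msg); err == nil {
			t.Fatal("want error while broker is down; got nil")
		}
		tb.Wakeup() // bring Redis back up
		if err := tb.Enqueue(context.Background(), msg); err != nil {
			t.Fatalf("Enqueue after Wakeup: %v", err)
		}
	}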


@@ -0,0 +1,84 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package testutil
import (
"time"
"github.com/google/uuid"
"github.com/hibiken/asynq/internal/base"
)
func makeDefaultTaskMessage() *base.TaskMessage {
return &base.TaskMessage{
ID: uuid.NewString(),
Type: "default_task",
Queue: "default",
Retry: 25,
Timeout: 1800, // default timeout of 30 mins
Deadline: 0, // no deadline
}
}
type TaskMessageBuilder struct {
msg *base.TaskMessage
}
func NewTaskMessageBuilder() *TaskMessageBuilder {
return &TaskMessageBuilder{}
}
func (b *TaskMessageBuilder) lazyInit() {
if b.msg == nil {
b.msg = makeDefaultTaskMessage()
}
}
func (b *TaskMessageBuilder) Build() *base.TaskMessage {
b.lazyInit()
return b.msg
}
func (b *TaskMessageBuilder) SetType(typename string) *TaskMessageBuilder {
b.lazyInit()
b.msg.Type = typename
return b
}
func (b *TaskMessageBuilder) SetPayload(payload []byte) *TaskMessageBuilder {
b.lazyInit()
b.msg.Payload = payload
return b
}
func (b *TaskMessageBuilder) SetQueue(qname string) *TaskMessageBuilder {
b.lazyInit()
b.msg.Queue = qname
return b
}
func (b *TaskMessageBuilder) SetRetry(n int) *TaskMessageBuilder {
b.lazyInit()
b.msg.Retry = n
return b
}
func (b *TaskMessageBuilder) SetTimeout(timeout time.Duration) *TaskMessageBuilder {
b.lazyInit()
b.msg.Timeout = int64(timeout.Seconds())
return b
}
func (b *TaskMessageBuilder) SetDeadline(deadline time.Time) *TaskMessageBuilder {
b.lazyInit()
b.msg.Deadline = deadline.Unix()
return b
}
func (b *TaskMessageBuilder) SetGroup(gname string) *TaskMessageBuilder {
b.lazyInit()
b.msg.GroupKey = gname
return b
}
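For context, a typical call chain with this builder might look like the following; the type, queue, and group names are invented for illustration:

	msg := testutil.NewTaskMessageBuilder().
		SetType("image:resize").
		SetQueue("default").
		SetGroup("user:42"). // marks the message as an aggregating task
		SetRetry(10).
		SetTimeout(5 * time.Minute).
		Build()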


@@ -0,0 +1,94 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package testutil
import (
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/hibiken/asynq/internal/base"
)
func TestTaskMessageBuilder(t *testing.T) {
tests := []struct {
desc string
ops func(b *TaskMessageBuilder) // operations to perform on the builder
want *base.TaskMessage
}{
{
desc: "zero value and build",
ops: nil,
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 25,
Timeout: 1800, // 30m
Deadline: 0,
},
},
{
desc: "with type, payload, and queue",
ops: func(b *TaskMessageBuilder) {
b.SetType("foo").SetPayload([]byte("hello")).SetQueue("myqueue")
},
want: &base.TaskMessage{
Type: "foo",
Queue: "myqueue",
Payload: []byte("hello"),
Retry: 25,
Timeout: 1800, // 30m
Deadline: 0,
},
},
{
desc: "with retry, timeout, and deadline",
ops: func(b *TaskMessageBuilder) {
b.SetRetry(1).
SetTimeout(20 * time.Second).
SetDeadline(time.Date(2017, 3, 6, 0, 0, 0, 0, time.UTC))
},
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 1,
Timeout: 20,
Deadline: time.Date(2017, 3, 6, 0, 0, 0, 0, time.UTC).Unix(),
},
},
{
desc: "with group",
ops: func(b *TaskMessageBuilder) {
b.SetGroup("mygroup")
},
want: &base.TaskMessage{
Type: "default_task",
Queue: "default",
Payload: nil,
Retry: 25,
Timeout: 1800,
Deadline: 0,
GroupKey: "mygroup",
},
},
}
cmpOpts := []cmp.Option{cmpopts.IgnoreFields(base.TaskMessage{}, "ID")}
for _, tc := range tests {
var b TaskMessageBuilder
if tc.ops != nil {
tc.ops(&b)
}
got := b.Build()
if diff := cmp.Diff(tc.want, got, cmpOpts...); diff != "" {
t.Errorf("%s: TaskMessageBuilder.Build() = %+v, want %+v;\n(-want,+got)\n%s",
tc.desc, got, tc.want, diff)
}
}
}


@@ -2,8 +2,8 @@
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.

-// Package asynqtest defines test helpers for asynq and its internal packages.
-package asynqtest
+// Package testutil defines test helpers for asynq and its internal packages.
+package testutil

import (
	"context"
@@ -13,11 +13,12 @@ import (
	"testing"
	"time"

-	"github.com/go-redis/redis/v8"
	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/google/uuid"
	"github.com/hibiken/asynq/internal/base"
+	"github.com/hibiken/asynq/internal/timeutil"
+	"github.com/redis/go-redis/v9"
)
// EquateInt64Approx returns a Comparer option that treats int64 values
@@ -92,6 +93,20 @@ var SortStringSliceOpt = cmp.Transformer("SortStringSlice", func(in []string) []
	return out
})
var SortRedisZSetEntryOpt = cmp.Transformer("SortZSetEntries", func(in []redis.Z) []redis.Z {
out := append([]redis.Z(nil), in...) // Copy input to avoid mutating it
sort.Slice(out, func(i, j int) bool {
// TODO: If member is a comparable type (int, string, etc) compare by the member
// Use generic comparable type here once update to go1.18
if _, ok := out[i].Member.(string); ok {
// If member is a string, compare the member
return out[i].Member.(string) < out[j].Member.(string)
}
return out[i].Score < out[j].Score
})
return out
})
// IgnoreIDOpt is an cmp.Option to ignore ID field in task messages when comparing.
var IgnoreIDOpt = cmpopts.IgnoreFields(base.TaskMessage{}, "ID")
@@ -114,6 +129,13 @@ func NewTaskMessageWithQueue(taskType string, payload []byte, qname string) *bas
	}
}
// NewLeaseWithClock returns a new lease with the given expiration time and clock.
func NewLeaseWithClock(expirationTime time.Time, clock timeutil.Clock) *base.Lease {
l := base.NewLease(expirationTime)
l.Clock = clock
return l
}
// JSON serializes the given key-value pairs into stream of bytes in JSON.
func JSON(kv map[string]interface{}) []byte {
	b, err := json.Marshal(kv)
@@ -223,20 +245,35 @@ func SeedArchivedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z,
	seedRedisZSet(tb, r, base.ArchivedKey(qname), entries, base.TaskStateArchived)
}
-// SeedDeadlines initializes the deadlines set with the given entries.
-func SeedDeadlines(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
+// SeedLease initializes the lease set with the given entries.
+func SeedLease(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
	tb.Helper()
	r.SAdd(context.Background(), base.AllQueues, qname)
-	seedRedisZSet(tb, r, base.DeadlinesKey(qname), entries, base.TaskStateActive)
+	seedRedisZSet(tb, r, base.LeaseKey(qname), entries, base.TaskStateActive)
}

-// SeedCompletedQueue initializes the completed set witht the given entries.
+// SeedCompletedQueue initializes the completed set with the given entries.
func SeedCompletedQueue(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname string) {
	tb.Helper()
	r.SAdd(context.Background(), base.AllQueues, qname)
	seedRedisZSet(tb, r, base.CompletedKey(qname), entries, base.TaskStateCompleted)
}
// SeedGroup initializes the group with the given entries.
func SeedGroup(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname, gname string) {
tb.Helper()
ctx := context.Background()
r.SAdd(ctx, base.AllQueues, qname)
r.SAdd(ctx, base.AllGroups(qname), gname)
seedRedisZSet(tb, r, base.GroupKey(qname, gname), entries, base.TaskStateAggregating)
}
func SeedAggregationSet(tb testing.TB, r redis.UniversalClient, entries []base.Z, qname, gname, setID string) {
tb.Helper()
r.SAdd(context.Background(), base.AllQueues, qname)
seedRedisZSet(tb, r, base.AggregationSetKey(qname, gname, setID), entries, base.TaskStateAggregating)
}
// SeedAllPendingQueues initializes all of the specified queues with the given messages.
//
// pending maps a queue name to a list of messages.
@@ -279,11 +316,11 @@ func SeedAllArchivedQueues(tb testing.TB, r redis.UniversalClient, archived map[
	}
}

-// SeedAllDeadlines initializes all of the deadlines with the given entries.
-func SeedAllDeadlines(tb testing.TB, r redis.UniversalClient, deadlines map[string][]base.Z) {
+// SeedAllLease initializes all of the lease sets with the given entries.
+func SeedAllLease(tb testing.TB, r redis.UniversalClient, lease map[string][]base.Z) {
	tb.Helper()
-	for q, entries := range deadlines {
-		SeedDeadlines(tb, r, entries, q)
+	for q, entries := range lease {
+		SeedLease(tb, r, entries, q)
	}
}
@@ -295,6 +332,18 @@ func SeedAllCompletedQueues(tb testing.TB, r redis.UniversalClient, completed ma
	}
}
// SeedAllGroups initializes all groups in all queues.
// The map maps queue names to group names which maps to a list of task messages and the time it was
// added to the group.
func SeedAllGroups(tb testing.TB, r redis.UniversalClient, groups map[string]map[string][]base.Z) {
tb.Helper()
for qname, g := range groups {
for gname, entries := range g {
SeedGroup(tb, r, entries, qname, gname)
}
}
}
func seedRedisList(tb testing.TB, c redis.UniversalClient, key string,
	msgs []*base.TaskMessage, state base.TaskState) {
	tb.Helper()
@@ -303,15 +352,14 @@ func seedRedisList(tb testing.TB, c redis.UniversalClient, key string,
	if err := c.LPush(context.Background(), key, msg.ID).Err(); err != nil {
		tb.Fatal(err)
	}
-	key := base.TaskKey(msg.Queue, msg.ID)
+	taskKey := base.TaskKey(msg.Queue, msg.ID)
	data := map[string]interface{}{
		"msg":        encoded,
		"state":      state.String(),
-		"timeout":    msg.Timeout,
-		"deadline":   msg.Deadline,
		"unique_key": msg.UniqueKey,
+		"group":      msg.GroupKey,
	}
-	if err := c.HSet(context.Background(), key, data).Err(); err != nil {
+	if err := c.HSet(context.Background(), taskKey, data).Err(); err != nil {
		tb.Fatal(err)
	}
	if len(msg.UniqueKey) > 0 {
@@ -329,19 +377,18 @@ func seedRedisZSet(tb testing.TB, c redis.UniversalClient, key string,
	for _, item := range items {
		msg := item.Message
		encoded := MustMarshal(tb, msg)
-		z := &redis.Z{Member: msg.ID, Score: float64(item.Score)}
+		z := redis.Z{Member: msg.ID, Score: float64(item.Score)}
		if err := c.ZAdd(context.Background(), key, z).Err(); err != nil {
			tb.Fatal(err)
		}
-		key := base.TaskKey(msg.Queue, msg.ID)
+		taskKey := base.TaskKey(msg.Queue, msg.ID)
		data := map[string]interface{}{
			"msg":        encoded,
			"state":      state.String(),
-			"timeout":    msg.Timeout,
-			"deadline":   msg.Deadline,
			"unique_key": msg.UniqueKey,
+			"group":      msg.GroupKey,
		}
-		if err := c.HSet(context.Background(), key, data).Err(); err != nil {
+		if err := c.HSet(context.Background(), taskKey, data).Err(); err != nil {
			tb.Fatal(err)
		}
		if len(msg.UniqueKey) > 0 {
@@ -416,11 +463,11 @@ func GetArchivedEntries(tb testing.TB, r redis.UniversalClient, qname string) []
	return getMessagesFromZSetWithScores(tb, r, qname, base.ArchivedKey, base.TaskStateArchived)
}

-// GetDeadlinesEntries returns all task messages and its score in the deadlines set for the given queue.
+// GetLeaseEntries returns all task IDs and its score in the lease set for the given queue.
// It also asserts the state field of the task.
-func GetDeadlinesEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
+func GetLeaseEntries(tb testing.TB, r redis.UniversalClient, qname string) []base.Z {
	tb.Helper()
-	return getMessagesFromZSetWithScores(tb, r, qname, base.DeadlinesKey, base.TaskStateActive)
+	return getMessagesFromZSetWithScores(tb, r, qname, base.LeaseKey, base.TaskStateActive)
}

// GetCompletedEntries returns all completed messages and its score in the given queue.
@@ -430,6 +477,14 @@ func GetCompletedEntries(tb testing.TB, r redis.UniversalClient, qname string) [
	return getMessagesFromZSetWithScores(tb, r, qname, base.CompletedKey, base.TaskStateCompleted)
}
// GetGroupEntries returns all aggregating messages and their scores in the given group of the queue.
// It also asserts the state field of the task.
func GetGroupEntries(tb testing.TB, r redis.UniversalClient, qname, groupKey string) []base.Z {
tb.Helper()
return getMessagesFromZSetWithScores(tb, r, qname,
func(qname string) string { return base.GroupKey(qname, groupKey) }, base.TaskStateAggregating)
}
// Retrieves all messages stored under `keyFn(qname)` key in redis list.
func getMessagesFromList(tb testing.TB, r redis.UniversalClient, qname string,
	keyFn func(qname string) string, state base.TaskState) []*base.TaskMessage {
@@ -481,3 +536,107 @@ func getMessagesFromZSetWithScores(tb testing.TB, r redis.UniversalClient,
	}
	return res
}
// TaskSeedData holds the data required to seed tasks under the task key in test.
type TaskSeedData struct {
Msg *base.TaskMessage
State base.TaskState
PendingSince time.Time
}
func SeedTasks(tb testing.TB, r redis.UniversalClient, taskData []*TaskSeedData) {
for _, data := range taskData {
msg := data.Msg
ctx := context.Background()
key := base.TaskKey(msg.Queue, msg.ID)
v := map[string]interface{}{
"msg": MustMarshal(tb, msg),
"state": data.State.String(),
"unique_key": msg.UniqueKey,
"group": msg.GroupKey,
}
if !data.PendingSince.IsZero() {
v["pending_since"] = data.PendingSince.Unix()
}
if err := r.HSet(ctx, key, v).Err(); err != nil {
tb.Fatalf("Failed to write task data in redis: %v", err)
}
if len(msg.UniqueKey) > 0 {
err := r.SetNX(ctx, msg.UniqueKey, msg.ID, 1*time.Minute).Err()
if err != nil {
tb.Fatalf("Failed to set unique lock in redis: %v", err)
}
}
}
}
func SeedRedisZSets(tb testing.TB, r redis.UniversalClient, zsets map[string][]redis.Z) {
for key, zs := range zsets {
// FIXME: How come we can't simply do ZAdd(ctx, key, zs...) here?
for _, z := range zs {
if err := r.ZAdd(context.Background(), key, z).Err(); err != nil {
tb.Fatalf("Failed to seed zset (key=%q): %v", key, err)
}
}
}
}
func SeedRedisSets(tb testing.TB, r redis.UniversalClient, sets map[string][]string) {
for key, set := range sets {
SeedRedisSet(tb, r, key, set)
}
}
func SeedRedisSet(tb testing.TB, r redis.UniversalClient, key string, members []string) {
for _, mem := range members {
if err := r.SAdd(context.Background(), key, mem).Err(); err != nil {
tb.Fatalf("Failed to seed set (key=%q): %v", key, err)
}
}
}
func SeedRedisLists(tb testing.TB, r redis.UniversalClient, lists map[string][]string) {
for key, vals := range lists {
for _, v := range vals {
if err := r.LPush(context.Background(), key, v).Err(); err != nil {
tb.Fatalf("Failed to seed list (key=%q): %v", key, err)
}
}
}
}
func AssertRedisLists(t *testing.T, r redis.UniversalClient, wantLists map[string][]string) {
for key, want := range wantLists {
got, err := r.LRange(context.Background(), key, 0, -1).Result()
if err != nil {
t.Fatalf("Failed to read list (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortStringSliceOpt); diff != "" {
t.Errorf("mismatch found in list (key=%q): (-want,+got)\n%s", key, diff)
}
}
}
func AssertRedisSets(t *testing.T, r redis.UniversalClient, wantSets map[string][]string) {
for key, want := range wantSets {
got, err := r.SMembers(context.Background(), key).Result()
if err != nil {
t.Fatalf("Failed to read set (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortStringSliceOpt); diff != "" {
t.Errorf("mismatch found in set (key=%q): (-want,+got)\n%s", key, diff)
}
}
}
func AssertRedisZSets(t *testing.T, r redis.UniversalClient, wantZSets map[string][]redis.Z) {
for key, want := range wantZSets {
got, err := r.ZRangeWithScores(context.Background(), key, 0, -1).Result()
if err != nil {
t.Fatalf("Failed to read zset (key=%q): %v", key, err)
}
if diff := cmp.Diff(want, got, SortRedisZSetEntryOpt); diff != "" {
t.Errorf("mismatch found in zset (key=%q): (-want,+got)\n%s", key, diff)
}
}
}
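A hypothetical round-trip sketch using the seed and assert helpers above from inside the testutil package; setupRedis is an assumed helper returning a redis.UniversalClient for tests, and the key and member values are invented:

	func TestSeedAndAssertRoundTrip(t *testing.T) {
		r := setupRedis(t)
		want := map[string][]redis.Z{
			"asynq:{default}:g:mygroup": {{Member: "task-id-1", Score: 1234567890}},
		}
		SeedRedisZSets(t, r, want)   // write the entries
		AssertRedisZSets(t, r, want) // read them back and diff
	}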


@@ -1,7 +1,14 @@
+// Copyright 2022 Kentaro Hibino. All rights reserved.
+// Use of this source code is governed by a MIT license
+// that can be found in the LICENSE file.
+
// Package timeutil exports functions and types related to time and date.
package timeutil

-import "time"
+import (
+	"sync"
+	"time"
+)

// A Clock is an object that can tell you the current time.
//
@@ -23,16 +30,30 @@ func (_ *realTimeClock) Now() time.Time { return time.Now() }
// A SimulatedClock is a concrete Clock implementation that doesn't "tick" on its own.
// Time is advanced by explicit call to the AdvanceTime() or SetTime() functions.
+// This object is concurrency safe.
type SimulatedClock struct {
-	t time.Time
+	mu sync.Mutex
+	t  time.Time // guarded by mu
}

func NewSimulatedClock(t time.Time) *SimulatedClock {
-	return &SimulatedClock{t}
+	return &SimulatedClock{t: t}
}

-func (c *SimulatedClock) Now() time.Time { return c.t }
+func (c *SimulatedClock) Now() time.Time {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	return c.t
+}

-func (c *SimulatedClock) SetTime(t time.Time) { c.t = t }
+func (c *SimulatedClock) SetTime(t time.Time) {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	c.t = t
+}

-func (c *SimulatedClock) AdvanceTime(d time.Duration) { c.t.Add(d) }
+func (c *SimulatedClock) AdvanceTime(d time.Duration) {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	c.t = c.t.Add(d)
+}
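Two things change here: the old AdvanceTime silently discarded the result of t.Add (so the clock never actually advanced), and the new mutex makes concurrent use safe. A small sketch of the kind of usage the mutex now permits:

	clock := timeutil.NewSimulatedClock(time.Now())
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); clock.AdvanceTime(5 * time.Second) }() // writer
	go func() { defer wg.Done(); _ = clock.Now() }()                    // reader
	wg.Wait()
	// After the writer runs, Now() reflects the advanced time.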


@@ -0,0 +1,48 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package timeutil
import (
"testing"
"time"
)
func TestSimulatedClock(t *testing.T) {
now := time.Now()
tests := []struct {
desc string
initTime time.Time
advanceBy time.Duration
wantTime time.Time
}{
{
desc: "advance time forward",
initTime: now,
advanceBy: 30 * time.Second,
wantTime: now.Add(30 * time.Second),
},
{
desc: "advance time backward",
initTime: now,
advanceBy: -10 * time.Second,
wantTime: now.Add(-10 * time.Second),
},
}
for _, tc := range tests {
c := NewSimulatedClock(tc.initTime)
if c.Now() != tc.initTime {
t.Errorf("%s: Before Advance; SimulatedClock.Now() = %v, want %v", tc.desc, c.Now(), tc.initTime)
}
c.AdvanceTime(tc.advanceBy)
if c.Now() != tc.wantTime {
t.Errorf("%s: After Advance; SimulatedClock.Now() = %v, want %v", tc.desc, c.Now(), tc.wantTime)
}
}
}


@@ -27,13 +27,17 @@ type janitor struct {
	// average interval between checks.
	avgInterval time.Duration

+	// number of tasks to be deleted when janitor runs to delete the expired completed tasks.
+	batchSize int
}

type janitorParams struct {
	logger   *log.Logger
	broker   base.Broker
	queues   []string
	interval time.Duration
+	batchSize int
}

func newJanitor(params janitorParams) *janitor {
@@ -43,6 +47,7 @@ func newJanitor(params janitorParams) *janitor {
	done:        make(chan struct{}),
	queues:      params.queues,
	avgInterval: params.interval,
+	batchSize:   params.batchSize,
}
@@ -73,8 +78,8 @@ func (j *janitor) start(wg *sync.WaitGroup) {
func (j *janitor) exec() { func (j *janitor) exec() {
for _, qname := range j.queues { for _, qname := range j.queues {
if err := j.broker.DeleteExpiredCompletedTasks(qname); err != nil { if err := j.broker.DeleteExpiredCompletedTasks(qname, j.batchSize); err != nil {
j.logger.Errorf("Could not delete expired completed tasks from queue %q: %v", j.logger.Errorf("Failed to delete expired completed tasks from queue %q: %v",
qname, err) qname, err)
} }
} }

View File

@@ -10,9 +10,9 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
) )
func newCompletedTask(qname, tasktype string, payload []byte, completedAt time.Time) *base.TaskMessage { func newCompletedTask(qname, tasktype string, payload []byte, completedAt time.Time) *base.TaskMessage {
@@ -26,11 +26,13 @@ func TestJanitor(t *testing.T) {
defer r.Close() defer r.Close()
rdbClient := rdb.NewRDB(r) rdbClient := rdb.NewRDB(r)
const interval = 1 * time.Second const interval = 1 * time.Second
const batchSize = 100
janitor := newJanitor(janitorParams{ janitor := newJanitor(janitorParams{
logger: testLogger, logger: testLogger,
broker: rdbClient, broker: rdbClient,
queues: []string{"default", "custom"}, queues: []string{"default", "custom"},
interval: interval, interval: interval,
batchSize: batchSize,
}) })
now := time.Now() now := time.Now()

periodic_task_manager.go Normal file
View File

@@ -0,0 +1,253 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"crypto/sha256"
"fmt"
"sort"
"sync"
"time"
"github.com/redis/go-redis/v9"
)
// PeriodicTaskManager manages scheduling of periodic tasks.
// It syncs scheduler's entries by calling the config provider periodically.
type PeriodicTaskManager struct {
s *Scheduler
p PeriodicTaskConfigProvider
syncInterval time.Duration
done chan (struct{})
wg sync.WaitGroup
m map[string]string // map[hash]entryID
}
type PeriodicTaskManagerOpts struct {
// Required: must be non nil
PeriodicTaskConfigProvider PeriodicTaskConfigProvider
// Optional: if RedisUniversalClient is nil, RedisConnOpt must be non-nil.
RedisConnOpt RedisConnOpt
// Optional: if RedisUniversalClient is non nil, RedisConnOpt is ignored.
RedisUniversalClient redis.UniversalClient
// Optional: scheduler options
*SchedulerOpts
// Optional: default is 3m
SyncInterval time.Duration
}
const defaultSyncInterval = 3 * time.Minute
// NewPeriodicTaskManager returns a new PeriodicTaskManager instance.
// The given opts should specify the PeriodicTaskConfigProvider and either RedisConnOpt or RedisUniversalClient at minimum.
func NewPeriodicTaskManager(opts PeriodicTaskManagerOpts) (*PeriodicTaskManager, error) {
if opts.PeriodicTaskConfigProvider == nil {
return nil, fmt.Errorf("PeriodicTaskConfigProvider cannot be nil")
}
if opts.RedisConnOpt == nil && opts.RedisUniversalClient == nil {
return nil, fmt.Errorf("RedisConnOpt/RedisUniversalClient cannot be nil")
}
var scheduler *Scheduler
if opts.RedisUniversalClient != nil {
scheduler = NewSchedulerFromRedisClient(opts.RedisUniversalClient, opts.SchedulerOpts)
} else {
scheduler = NewScheduler(opts.RedisConnOpt, opts.SchedulerOpts)
}
syncInterval := opts.SyncInterval
if syncInterval == 0 {
syncInterval = defaultSyncInterval
}
return &PeriodicTaskManager{
s: scheduler,
p: opts.PeriodicTaskConfigProvider,
syncInterval: syncInterval,
done: make(chan struct{}),
m: make(map[string]string),
}, nil
}
// PeriodicTaskConfigProvider provides configs for periodic tasks.
// GetConfigs will be called by a PeriodicTaskManager periodically to
// sync the scheduler's entries with the configs returned by the provider.
type PeriodicTaskConfigProvider interface {
GetConfigs() ([]*PeriodicTaskConfig, error)
}
// PeriodicTaskConfig specifies the details of a periodic task.
type PeriodicTaskConfig struct {
Cronspec string // required: must be non empty string
Task *Task // required: must be non nil
Opts []Option // optional: can be nil
}
func (c *PeriodicTaskConfig) hash() string {
h := sha256.New()
_, _ = h.Write([]byte(c.Cronspec))
_, _ = h.Write([]byte(c.Task.Type()))
h.Write(c.Task.Payload())
opts := stringifyOptions(c.Opts)
sort.Strings(opts)
for _, opt := range opts {
_, _ = h.Write([]byte(opt))
}
return fmt.Sprintf("%x", h.Sum(nil))
}
func validatePeriodicTaskConfig(c *PeriodicTaskConfig) error {
if c == nil {
return fmt.Errorf("PeriodicTaskConfig cannot be nil")
}
if c.Task == nil {
return fmt.Errorf("PeriodicTaskConfig.Task cannot be nil")
}
if c.Cronspec == "" {
return fmt.Errorf("PeriodicTaskConfig.Cronspec cannot be empty")
}
return nil
}
// Start starts a scheduler and background goroutine to sync the scheduler with the configs
// returned by the provider.
//
// Start returns any error encountered at start up time.
func (mgr *PeriodicTaskManager) Start() error {
if mgr.s == nil || mgr.p == nil {
panic("asynq: cannot start uninitialized PeriodicTaskManager; use NewPeriodicTaskManager to initialize")
}
if err := mgr.initialSync(); err != nil {
return fmt.Errorf("asynq: %v", err)
}
if err := mgr.s.Start(); err != nil {
return fmt.Errorf("asynq: %v", err)
}
mgr.wg.Add(1)
go func() {
defer mgr.wg.Done()
ticker := time.NewTicker(mgr.syncInterval)
for {
select {
case <-mgr.done:
mgr.s.logger.Debugf("Stopping syncer goroutine")
ticker.Stop()
return
case <-ticker.C:
mgr.sync()
}
}
}()
return nil
}
// Shutdown gracefully shuts down the manager.
// It notifies a background syncer goroutine to stop and stops scheduler.
func (mgr *PeriodicTaskManager) Shutdown() {
close(mgr.done)
mgr.wg.Wait()
mgr.s.Shutdown()
}
// Run starts the manager and blocks until an os signal to exit the program is received.
// Once it receives a signal, it gracefully shuts down the manager.
func (mgr *PeriodicTaskManager) Run() error {
if err := mgr.Start(); err != nil {
return err
}
mgr.s.waitForSignals()
mgr.Shutdown()
mgr.s.logger.Debugf("PeriodicTaskManager exiting")
return nil
}
func (mgr *PeriodicTaskManager) initialSync() error {
configs, err := mgr.p.GetConfigs()
if err != nil {
return fmt.Errorf("initial call to GetConfigs failed: %v", err)
}
for _, c := range configs {
if err := validatePeriodicTaskConfig(c); err != nil {
return fmt.Errorf("initial call to GetConfigs contained an invalid config: %v", err)
}
}
mgr.add(configs)
return nil
}
func (mgr *PeriodicTaskManager) add(configs []*PeriodicTaskConfig) {
for _, c := range configs {
entryID, err := mgr.s.Register(c.Cronspec, c.Task, c.Opts...)
if err != nil {
mgr.s.logger.Errorf("Failed to register periodic task: cronspec=%q task=%q err=%v",
c.Cronspec, c.Task.Type(), err)
continue
}
mgr.m[c.hash()] = entryID
mgr.s.logger.Infof("Successfully registered periodic task: cronspec=%q task=%q, entryID=%s",
c.Cronspec, c.Task.Type(), entryID)
}
}
func (mgr *PeriodicTaskManager) remove(removed map[string]string) {
for hash, entryID := range removed {
if err := mgr.s.Unregister(entryID); err != nil {
mgr.s.logger.Errorf("Failed to unregister periodic task: %v", err)
continue
}
delete(mgr.m, hash)
mgr.s.logger.Infof("Successfully unregistered periodic task: entryID=%s", entryID)
}
}
func (mgr *PeriodicTaskManager) sync() {
configs, err := mgr.p.GetConfigs()
if err != nil {
mgr.s.logger.Errorf("Failed to get periodic task configs: %v", err)
return
}
for _, c := range configs {
if err := validatePeriodicTaskConfig(c); err != nil {
mgr.s.logger.Errorf("Failed to sync: GetConfigs returned an invalid config: %v", err)
return
}
}
// Diff and only register/unregister the newly added/removed entries.
removed := mgr.diffRemoved(configs)
added := mgr.diffAdded(configs)
mgr.remove(removed)
mgr.add(added)
}
// diffRemoved diffs the incoming configs with the registered config and returns
// a map containing hash and entryID of each config that was removed.
func (mgr *PeriodicTaskManager) diffRemoved(configs []*PeriodicTaskConfig) map[string]string {
newConfigs := make(map[string]string)
for _, c := range configs {
newConfigs[c.hash()] = "" // empty value since we don't have entryID yet
}
removed := make(map[string]string)
for k, v := range mgr.m {
// test whether existing config is present in the incoming configs
if _, found := newConfigs[k]; !found {
removed[k] = v
}
}
return removed
}
// diffAdded diffs the incoming configs with the registered configs and returns
// a list of configs that were added.
func (mgr *PeriodicTaskManager) diffAdded(configs []*PeriodicTaskConfig) []*PeriodicTaskConfig {
var added []*PeriodicTaskConfig
for _, c := range configs {
if _, found := mgr.m[c.hash()]; !found {
added = append(added, c)
}
}
return added
}
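For orientation, here is a minimal usage sketch of the API in this file. It is not part of the change itself; the Redis address, the StaticProvider type, and the "cleanup" task are made-up examples:

	package main

	import (
		"log"
		"time"

		"github.com/hibiken/asynq"
	)

	// StaticProvider is a hypothetical PeriodicTaskConfigProvider backed by a fixed slice.
	type StaticProvider struct {
		cfgs []*asynq.PeriodicTaskConfig
	}

	func (p *StaticProvider) GetConfigs() ([]*asynq.PeriodicTaskConfig, error) {
		return p.cfgs, nil
	}

	func main() {
		provider := &StaticProvider{cfgs: []*asynq.PeriodicTaskConfig{
			// Enqueue a "cleanup" task every five minutes.
			{Cronspec: "*/5 * * * *", Task: asynq.NewTask("cleanup", nil)},
		}}
		mgr, err := asynq.NewPeriodicTaskManager(asynq.PeriodicTaskManagerOpts{
			RedisConnOpt:               asynq.RedisClientOpt{Addr: "localhost:6379"},
			PeriodicTaskConfigProvider: provider,
			SyncInterval:               10 * time.Minute, // call GetConfigs every 10m
		})
		if err != nil {
			log.Fatal(err)
		}
		// Run starts the scheduler and the sync loop, blocks until an exit
		// signal is received, then shuts down gracefully.
		if err := mgr.Run(); err != nil {
			log.Fatal(err)
		}
	}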

View File

@@ -0,0 +1,340 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package asynq
import (
"sort"
"sync"
"testing"
"time"
"github.com/google/go-cmp/cmp"
)
// Trivial implementation of PeriodicTaskConfigProvider for testing purpose.
type FakeConfigProvider struct {
mu sync.Mutex
cfgs []*PeriodicTaskConfig
}
func (p *FakeConfigProvider) SetConfigs(cfgs []*PeriodicTaskConfig) {
p.mu.Lock()
defer p.mu.Unlock()
p.cfgs = cfgs
}
func (p *FakeConfigProvider) GetConfigs() ([]*PeriodicTaskConfig, error) {
p.mu.Lock()
defer p.mu.Unlock()
return p.cfgs, nil
}
func TestNewPeriodicTaskManager(t *testing.T) {
redisConnOpt := getRedisConnOpt(t)
cfgs := []*PeriodicTaskConfig{
{Cronspec: "* * * * *", Task: NewTask("foo", nil)},
{Cronspec: "* * * * *", Task: NewTask("bar", nil)},
}
tests := []struct {
desc string
opts PeriodicTaskManagerOpts
}{
{
desc: "with provider and redisConnOpt",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
},
},
{
desc: "with sync option",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
SyncInterval: 5 * time.Minute,
},
},
{
desc: "with scheduler option",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
SyncInterval: 5 * time.Minute,
SchedulerOpts: &SchedulerOpts{
LogLevel: DebugLevel,
},
},
},
}
for _, tc := range tests {
_, err := NewPeriodicTaskManager(tc.opts)
if err != nil {
t.Errorf("%s; NewPeriodicTaskManager returned error: %v", tc.desc, err)
}
}
t.Run("error", func(t *testing.T) {
tests := []struct {
desc string
opts PeriodicTaskManagerOpts
}{
{
desc: "without provider",
opts: PeriodicTaskManagerOpts{
RedisConnOpt: redisConnOpt,
},
},
{
desc: "without redisConOpt",
opts: PeriodicTaskManagerOpts{
PeriodicTaskConfigProvider: &FakeConfigProvider{cfgs: cfgs},
},
},
}
for _, tc := range tests {
_, err := NewPeriodicTaskManager(tc.opts)
if err == nil {
t.Errorf("%s; NewPeriodicTaskManager did not return error", tc.desc)
}
}
})
}
func TestPeriodicTaskConfigHash(t *testing.T) {
tests := []struct {
desc string
a *PeriodicTaskConfig
b *PeriodicTaskConfig
isSame bool
}{
{
desc: "basic identity test",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
isSame: true,
},
{
desc: "with a option",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
isSame: true,
},
{
desc: "with multiple options (different order)",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Unique(5 * time.Minute), Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue"), Unique(5 * time.Minute)},
},
isSame: true,
},
{
desc: "with payload",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello world!")),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello world!")),
Opts: []Option{Queue("myqueue")},
},
isSame: true,
},
{
desc: "with different cronspecs",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "5 * * * *",
Task: NewTask("foo", nil),
},
isSame: false,
},
{
desc: "with different task type",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("bar", nil),
},
isSame: false,
},
{
desc: "with different options",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Unique(10 * time.Minute)},
},
isSame: false,
},
{
desc: "with different options (one is subset of the other)",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", nil),
Opts: []Option{Queue("myqueue"), Unique(10 * time.Minute)},
},
isSame: false,
},
{
desc: "with different payload",
a: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("hello!")),
Opts: []Option{Queue("myqueue")},
},
b: &PeriodicTaskConfig{
Cronspec: "* * * * *",
Task: NewTask("foo", []byte("HELLO!")),
Opts: []Option{Queue("myqueue"), Unique(10 * time.Minute)},
},
isSame: false,
},
}
for _, tc := range tests {
if tc.isSame && tc.a.hash() != tc.b.hash() {
t.Errorf("%s: a.hash=%s b.hash=%s expected to be equal",
tc.desc, tc.a.hash(), tc.b.hash())
}
if !tc.isSame && tc.a.hash() == tc.b.hash() {
t.Errorf("%s: a.hash=%s b.hash=%s expected to be not equal",
tc.desc, tc.a.hash(), tc.b.hash())
}
}
}
// Things to test.
// - Run the manager
// - Change provider to return new configs
// - Verify that the scheduler synced with the new config
func TestPeriodicTaskManager(t *testing.T) {
// Note: In this test, we'll use task type as an ID for each config.
cfgs := []*PeriodicTaskConfig{
{Task: NewTask("task1", nil), Cronspec: "* * * * 1"},
{Task: NewTask("task2", nil), Cronspec: "* * * * 2"},
}
const syncInterval = 3 * time.Second
provider := &FakeConfigProvider{cfgs: cfgs}
mgr, err := NewPeriodicTaskManager(PeriodicTaskManagerOpts{
RedisConnOpt: getRedisConnOpt(t),
PeriodicTaskConfigProvider: provider,
SyncInterval: syncInterval,
})
if err != nil {
t.Fatalf("Failed to initialize PeriodicTaskManager: %v", err)
}
if err := mgr.Start(); err != nil {
t.Fatalf("Failed to start PeriodicTaskManager: %v", err)
}
defer mgr.Shutdown()
got := extractCronEntries(mgr.s)
want := []*cronEntry{
{Cronspec: "* * * * 1", TaskType: "task1"},
{Cronspec: "* * * * 2", TaskType: "task2"},
}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
// Change the underlying configs
// - task2 removed
// - task3 added
provider.SetConfigs([]*PeriodicTaskConfig{
{Task: NewTask("task1", nil), Cronspec: "* * * * 1"},
{Task: NewTask("task3", nil), Cronspec: "* * * * 3"},
})
// Wait for the next sync
time.Sleep(syncInterval * 2)
// Verify the entries are synced
got = extractCronEntries(mgr.s)
want = []*cronEntry{
{Cronspec: "* * * * 1", TaskType: "task1"},
{Cronspec: "* * * * 3", TaskType: "task3"},
}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
// Change the underlying configs
// All configs removed, empty set.
provider.SetConfigs([]*PeriodicTaskConfig{})
// Wait for the next sync
time.Sleep(syncInterval * 2)
// Verify the entries are synced
got = extractCronEntries(mgr.s)
want = []*cronEntry{}
if diff := cmp.Diff(want, got, sortCronEntry); diff != "" {
t.Errorf("Diff found in scheduler's registered entries: %s", diff)
}
}
func extractCronEntries(s *Scheduler) []*cronEntry {
var out []*cronEntry
for _, e := range s.cron.Entries() {
job := e.Job.(*enqueueJob)
out = append(out, &cronEntry{Cronspec: job.cronspec, TaskType: job.task.Type()})
}
return out
}
var sortCronEntry = cmp.Transformer("sortCronEntry", func(in []*cronEntry) []*cronEntry {
out := append([]*cronEntry(nil), in...)
sort.Slice(out, func(i, j int) bool {
return out[i].TaskType < out[j].TaskType
})
return out
})
// A simple struct to allow for simpler comparison in test.
type cronEntry struct {
Cronspec string
TaskType string
}

View File

@@ -7,7 +7,8 @@ package asynq
import (
	"context"
	"fmt"
-	"math/rand"
+	"math"
+	"math/rand/v2"
	"runtime"
	"runtime/debug"
	"sort"
@@ -19,25 +20,28 @@ import (
	asynqcontext "github.com/hibiken/asynq/internal/context"
	"github.com/hibiken/asynq/internal/errors"
	"github.com/hibiken/asynq/internal/log"
+	"github.com/hibiken/asynq/internal/timeutil"
	"golang.org/x/time/rate"
)
type processor struct {
	logger *log.Logger
	broker base.Broker
+	clock timeutil.Clock
	handler Handler
+	baseCtxFn func() context.Context
	queueConfig map[string]int
	// orderedQueues is set only in strict-priority mode.
	orderedQueues []string
+	taskCheckInterval time.Duration
	retryDelayFunc RetryDelayFunc
	isFailureFunc func(error) bool
	errHandler ErrorHandler
	shutdownTimeout time.Duration
	// channel via which to send sync requests to syncer.
@@ -69,19 +73,21 @@
}
type processorParams struct {
	logger *log.Logger
	broker base.Broker
+	baseCtxFn func() context.Context
	retryDelayFunc RetryDelayFunc
+	taskCheckInterval time.Duration
	isFailureFunc func(error) bool
	syncCh chan<- *syncRequest
	cancelations *base.Cancelations
	concurrency int
	queues map[string]int
	strictPriority bool
	errHandler ErrorHandler
	shutdownTimeout time.Duration
	starting chan<- *workerInfo
	finished chan<- *base.TaskMessage
}
// newProcessor constructs a new processor.
@@ -92,24 +98,27 @@ func newProcessor(params processorParams) *processor {
		orderedQueues = sortByPriority(queues)
	}
	return &processor{
		logger: params.logger,
		broker: params.broker,
+		baseCtxFn: params.baseCtxFn,
+		clock: timeutil.NewRealClock(),
		queueConfig: queues,
		orderedQueues: orderedQueues,
+		taskCheckInterval: params.taskCheckInterval,
		retryDelayFunc: params.retryDelayFunc,
		isFailureFunc: params.isFailureFunc,
		syncRequestCh: params.syncCh,
		cancelations: params.cancelations,
		errLogLimiter: rate.NewLimiter(rate.Every(3*time.Second), 1),
		sema: make(chan struct{}, params.concurrency),
		done: make(chan struct{}),
		quit: make(chan struct{}),
		abort: make(chan struct{}),
		errHandler: params.errHandler,
		handler: HandlerFunc(func(ctx context.Context, t *Task) error { return fmt.Errorf("handler not set") }),
		shutdownTimeout: params.shutdownTimeout,
		starting: params.starting,
		finished: params.finished,
	}
}
@@ -164,7 +173,7 @@ func (p *processor) exec() {
		return
	case p.sema <- struct{}{}: // acquire token
		qnames := p.queues()
-		msg, deadline, err := p.broker.Dequeue(qnames...)
+		msg, leaseExpirationTime, err := p.broker.Dequeue(qnames...)
		switch {
		case errors.Is(err, errors.ErrNoProcessableTask):
			p.logger.Debug("All queues are empty")
@@ -172,7 +181,8 @@
			// Sleep to avoid slamming redis and let scheduler move tasks into queues.
			// Note: We are not using blocking pop operation and polling queues instead.
			// This adds significant load to redis.
-			time.Sleep(time.Second)
+			jitter := rand.N(p.taskCheckInterval)
+			time.Sleep(p.taskCheckInterval/2 + jitter)
			<-p.sema // release token
			return
		case err != nil:
@@ -183,14 +193,16 @@
			return
		}
-		p.starting <- &workerInfo{msg, time.Now(), deadline}
+		lease := base.NewLease(leaseExpirationTime)
+		deadline := p.computeDeadline(msg)
+		p.starting <- &workerInfo{msg, time.Now(), deadline, lease}
		go func() {
			defer func() {
				p.finished <- msg
				<-p.sema // release token
			}()
-			ctx, cancel := asynqcontext.New(msg, deadline)
+			ctx, cancel := asynqcontext.New(p.baseCtxFn(), msg, deadline)
			p.cancelations.Add(msg.ID, cancel)
			defer func() {
				cancel()
@@ -201,7 +213,7 @@
			select {
			case <-ctx.Done():
				// already canceled (e.g. deadline exceeded).
-				p.handleFailedMessage(ctx, msg, ctx.Err())
+				p.handleFailedMessage(ctx, lease, msg, ctx.Err())
				return
			default:
			}
@@ -225,24 +237,34 @@ func (p *processor) exec() {
			case <-p.abort:
				// time is up, push the message back to queue and quit this worker goroutine.
				p.logger.Warnf("Quitting worker. task id=%s", msg.ID)
-				p.requeue(msg)
+				p.requeue(lease, msg)
				return
+			case <-lease.Done():
+				cancel()
+				p.handleFailedMessage(ctx, lease, msg, ErrLeaseExpired)
+				return
			case <-ctx.Done():
-				p.handleFailedMessage(ctx, msg, ctx.Err())
+				p.handleFailedMessage(ctx, lease, msg, ctx.Err())
				return
			case resErr := <-resCh:
				if resErr != nil {
-					p.handleFailedMessage(ctx, msg, resErr)
+					p.handleFailedMessage(ctx, lease, msg, resErr)
					return
				}
-				p.handleSucceededMessage(ctx, msg)
+				p.handleSucceededMessage(lease, msg)
			}
		}()
	}
}
-func (p *processor) requeue(msg *base.TaskMessage) {
-	err := p.broker.Requeue(msg)
+func (p *processor) requeue(l *base.Lease, msg *base.TaskMessage) {
+	if !l.IsValid() {
+		// If lease is not valid, do not write to redis; Let recoverer take care of it.
+		return
+	}
+	ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
+	defer cancel()
+	err := p.broker.Requeue(ctx, msg)
	if err != nil {
		p.logger.Errorf("Could not push task id=%s back to queue: %v", msg.ID, err)
	} else {
@@ -250,49 +272,53 @@ func (p *processor) requeue(msg *base.TaskMessage) {
	}
}
-func (p *processor) handleSucceededMessage(ctx context.Context, msg *base.TaskMessage) {
+func (p *processor) handleSucceededMessage(l *base.Lease, msg *base.TaskMessage) {
	if msg.Retention > 0 {
-		p.markAsComplete(ctx, msg)
+		p.markAsComplete(l, msg)
	} else {
-		p.markAsDone(ctx, msg)
+		p.markAsDone(l, msg)
	}
}
-func (p *processor) markAsComplete(ctx context.Context, msg *base.TaskMessage) {
-	err := p.broker.MarkAsComplete(msg)
+func (p *processor) markAsComplete(l *base.Lease, msg *base.TaskMessage) {
+	if !l.IsValid() {
+		// If lease is not valid, do not write to redis; Let recoverer take care of it.
+		return
+	}
+	ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
+	defer cancel()
+	err := p.broker.MarkAsComplete(ctx, msg)
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s type=%q from %q to %q: %+v",
			msg.ID, msg.Type, base.ActiveKey(msg.Queue), base.CompletedKey(msg.Queue), err)
-		deadline, ok := ctx.Deadline()
-		if !ok {
-			panic("asynq: internal error: missing deadline in context")
-		}
		p.logger.Warnf("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.broker.MarkAsComplete(msg)
+				return p.broker.MarkAsComplete(ctx, msg)
			},
			errMsg: errMsg,
-			deadline: deadline,
+			deadline: l.Deadline(),
		}
	}
}
-func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
-	err := p.broker.Done(msg)
+func (p *processor) markAsDone(l *base.Lease, msg *base.TaskMessage) {
+	if !l.IsValid() {
+		// If lease is not valid, do not write to redis; Let recoverer take care of it.
+		return
+	}
+	ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
+	defer cancel()
+	err := p.broker.Done(ctx, msg)
	if err != nil {
		errMsg := fmt.Sprintf("Could not remove task id=%s type=%q from %q err: %+v", msg.ID, msg.Type, base.ActiveKey(msg.Queue), err)
-		deadline, ok := ctx.Deadline()
-		if !ok {
-			panic("asynq: internal error: missing deadline in context")
-		}
		p.logger.Warnf("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.broker.Done(msg)
+				return p.broker.Done(ctx, msg)
			},
			errMsg: errMsg,
-			deadline: deadline,
+			deadline: l.Deadline(),
		}
	}
}
@@ -301,59 +327,66 @@ func (p *processor) markAsDone(ctx context.Context, msg *base.TaskMessage) {
// the task should not be retried and should be archived instead.
var SkipRetry = errors.New("skip retry for the task")
+// RevokeTask is used as a return value from Handler.ProcessTask to indicate that
+// the task should not be retried or archived.
+var RevokeTask = errors.New("revoke task")
-func (p *processor) handleFailedMessage(ctx context.Context, msg *base.TaskMessage, err error) {
+func (p *processor) handleFailedMessage(ctx context.Context, l *base.Lease, msg *base.TaskMessage, err error) {
	if p.errHandler != nil {
		p.errHandler.HandleError(ctx, NewTask(msg.Type, msg.Payload), err)
	}
-	if !p.isFailureFunc(err) {
-		// retry the task without marking it as failed
-		p.retry(ctx, msg, err, false /*isFailure*/)
-		return
-	}
-	if msg.Retried >= msg.Retry || errors.Is(err, SkipRetry) {
-		p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
-		p.archive(ctx, msg, err)
-	} else {
-		p.retry(ctx, msg, err, true /*isFailure*/)
-	}
-}
+	switch {
+	case errors.Is(err, RevokeTask):
+		p.logger.Warnf("revoke task id=%s", msg.ID)
+		p.markAsDone(l, msg)
+	case msg.Retried >= msg.Retry || errors.Is(err, SkipRetry):
+		p.logger.Warnf("Retry exhausted for task id=%s", msg.ID)
+		p.archive(l, msg, err)
+	default:
+		p.retry(l, msg, err, p.isFailureFunc(err))
+	}
+}
-func (p *processor) retry(ctx context.Context, msg *base.TaskMessage, e error, isFailure bool) {
+func (p *processor) retry(l *base.Lease, msg *base.TaskMessage, e error, isFailure bool) {
+	if !l.IsValid() {
+		// If lease is not valid, do not write to redis; Let recoverer take care of it.
+		return
+	}
+	ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
+	defer cancel()
	d := p.retryDelayFunc(msg.Retried, e, NewTask(msg.Type, msg.Payload))
	retryAt := time.Now().Add(d)
-	err := p.broker.Retry(msg, retryAt, e.Error(), isFailure)
+	err := p.broker.Retry(ctx, msg, retryAt, e.Error(), isFailure)
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.RetryKey(msg.Queue))
-		deadline, ok := ctx.Deadline()
-		if !ok {
-			panic("asynq: internal error: missing deadline in context")
-		}
		p.logger.Warnf("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.broker.Retry(msg, retryAt, e.Error(), isFailure)
+				return p.broker.Retry(ctx, msg, retryAt, e.Error(), isFailure)
			},
			errMsg: errMsg,
-			deadline: deadline,
+			deadline: l.Deadline(),
		}
	}
}
-func (p *processor) archive(ctx context.Context, msg *base.TaskMessage, e error) {
-	err := p.broker.Archive(msg, e.Error())
+func (p *processor) archive(l *base.Lease, msg *base.TaskMessage, e error) {
+	if !l.IsValid() {
+		// If lease is not valid, do not write to redis; Let recoverer take care of it.
+		return
+	}
+	ctx, cancel := context.WithDeadline(context.Background(), l.Deadline())
+	defer cancel()
+	err := p.broker.Archive(ctx, msg, e.Error())
	if err != nil {
		errMsg := fmt.Sprintf("Could not move task id=%s from %q to %q", msg.ID, base.ActiveKey(msg.Queue), base.ArchivedKey(msg.Queue))
-		deadline, ok := ctx.Deadline()
-		if !ok {
-			panic("asynq: internal error: missing deadline in context")
-		}
		p.logger.Warnf("%s; Will retry syncing", errMsg)
		p.syncRequestCh <- &syncRequest{
			fn: func() error {
-				return p.broker.Archive(msg, e.Error())
+				return p.broker.Archive(ctx, msg, e.Error())
			},
			errMsg: errMsg,
-			deadline: deadline,
+			deadline: l.Deadline(),
		}
	}
}
@@ -380,8 +413,7 @@ func (p *processor) queues() []string {
			names = append(names, qname)
		}
	}
-	r := rand.New(rand.NewSource(time.Now().UnixNano()))
-	r.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
+	rand.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
	return uniq(names, len(p.queueConfig))
}
@@ -398,12 +430,15 @@ func (p *processor) perform(ctx context.Context, task *Task) (err error) {
			// map/slice usage. The parent frame should have the real trigger.
			_, file, line, ok = runtime.Caller(2)
		}
+		var errMsg string
		// Include the file and line number info in the error, if runtime.Caller returned ok.
		if ok {
-			err = fmt.Errorf("panic [%s:%d]: %v", file, line, x)
+			errMsg = fmt.Sprintf("panic [%s:%d]: %v", file, line, x)
		} else {
-			err = fmt.Errorf("panic: %v", x)
+			errMsg = fmt.Sprintf("panic: %v", x)
		}
+		err = &errors.PanicError{
+			ErrMsg: errMsg,
+		}
	}
}()
@@ -483,3 +518,23 @@ func gcd(xs ...int) int {
	}
	return res
}
+// computeDeadline returns the given task's deadline, derived from its Timeout and/or Deadline fields.
+func (p *processor) computeDeadline(msg *base.TaskMessage) time.Time {
+	if msg.Timeout == 0 && msg.Deadline == 0 {
+		p.logger.Errorf("asynq: internal error: both timeout and deadline are not set for the task message: %s", msg.ID)
+		return p.clock.Now().Add(defaultTimeout)
+	}
+	if msg.Timeout != 0 && msg.Deadline != 0 {
+		deadlineUnix := math.Min(float64(p.clock.Now().Unix()+msg.Timeout), float64(msg.Deadline))
+		return time.Unix(int64(deadlineUnix), 0)
+	}
+	if msg.Timeout != 0 {
+		return p.clock.Now().Add(time.Duration(msg.Timeout) * time.Second)
+	}
+	return time.Unix(msg.Deadline, 0)
+}
+func IsPanicError(err error) bool {
+	return errors.IsPanicError(err)
+}
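To make the failure paths above concrete: a handler can return SkipRetry to send a task straight to the archive, or the new RevokeTask to acknowledge and drop it without retrying or archiving. A hedged sketch, assuming an asynq version that includes RevokeTask; the task type and helper functions are hypothetical:

	package main

	import (
		"context"
		"errors"
		"fmt"

		"github.com/hibiken/asynq"
	)

	// image, loadImage, and resize are made-up stand-ins for application code.
	type image struct{ alreadyResized bool }

	func loadImage(payload []byte) (*image, error) {
		if len(payload) == 0 {
			return nil, errors.New("empty payload")
		}
		return &image{}, nil
	}

	func resize(ctx context.Context, img *image) error { return nil }

	func handleImageResize(ctx context.Context, t *asynq.Task) error {
		img, err := loadImage(t.Payload())
		if err != nil {
			// Malformed payload: retrying cannot help, archive for inspection.
			return fmt.Errorf("bad payload: %v: %w", err, asynq.SkipRetry)
		}
		if img.alreadyResized {
			// Work is obsolete: drop the task without retrying or archiving.
			return asynq.RevokeTask
		}
		// Any other non-nil error takes the default retry path.
		return resize(ctx, img)
	}

	func main() {
		mux := asynq.NewServeMux()
		mux.HandleFunc("image:resize", handleImageResize)
		// mux would be passed to (*asynq.Server).Run in a real worker process.
	}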

View File

@@ -9,15 +9,19 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"sort" "sort"
"strings"
"sync" "sync"
"testing" "testing"
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts" "github.com/google/go-cmp/cmp/cmpopts"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
"github.com/hibiken/asynq/internal/timeutil"
) )
var taskCmpOpts = []cmp.Option{ var taskCmpOpts = []cmp.Option{
@@ -59,19 +63,21 @@ func newProcessorForTest(t *testing.T, r *rdb.RDB, h Handler) *processor {
	go fakeHeartbeater(starting, finished, done)
	go fakeSyncer(syncCh, done)
	p := newProcessor(processorParams{
		logger: testLogger,
		broker: r,
+		baseCtxFn: context.Background,
		retryDelayFunc: DefaultRetryDelayFunc,
+		taskCheckInterval: defaultTaskCheckInterval,
		isFailureFunc: defaultIsFailureFunc,
		syncCh: syncCh,
		cancelations: base.NewCancelations(),
		concurrency: 10,
		queues: defaultQueueConfig,
		strictPriority: false,
		errHandler: nil,
		shutdownTimeout: defaultShutdownTimeout,
		starting: starting,
		finished: finished,
	})
	p.handler = h
	return p
@@ -289,6 +295,7 @@ func TestProcessorRetry(t *testing.T) {
errMsg := "something went wrong" errMsg := "something went wrong"
wrappedSkipRetry := fmt.Errorf("%s:%w", errMsg, SkipRetry) wrappedSkipRetry := fmt.Errorf("%s:%w", errMsg, SkipRetry)
wrappedRevokeTask := fmt.Errorf("%s:%w", errMsg, RevokeTask)
tests := []struct { tests := []struct {
desc string // test description desc string // test description
@@ -306,7 +313,7 @@ func TestProcessorRetry(t *testing.T) {
pending: []*base.TaskMessage{m1, m2, m3, m4}, pending: []*base.TaskMessage{m1, m2, m3, m4},
delay: time.Minute, delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error { handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return fmt.Errorf(errMsg) return errors.New(errMsg)
}), }),
wait: 2 * time.Second, wait: 2 * time.Second,
wantErrMsg: errMsg, wantErrMsg: errMsg,
@@ -340,6 +347,32 @@ func TestProcessorRetry(t *testing.T) {
wantArchived: []*base.TaskMessage{m1, m2}, wantArchived: []*base.TaskMessage{m1, m2},
wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error wantErrCount: 2, // ErrorHandler should still be called with SkipRetry error
}, },
{
desc: "Should revoke task",
pending: []*base.TaskMessage{m1, m2},
delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return RevokeTask // return RevokeTask without wrapping
}),
wait: 2 * time.Second,
wantErrMsg: RevokeTask.Error(),
wantRetry: []*base.TaskMessage{},
wantArchived: []*base.TaskMessage{},
wantErrCount: 2, // ErrorHandler should still be called with RevokeTask error
},
{
desc: "Should revoke task (with error wrapping)",
pending: []*base.TaskMessage{m1, m2},
delay: time.Minute,
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
return wrappedRevokeTask
}),
wait: 2 * time.Second,
wantErrMsg: wrappedRevokeTask.Error(),
wantRetry: []*base.TaskMessage{},
wantArchived: []*base.TaskMessage{},
wantErrCount: 2, // ErrorHandler should still be called with RevokeTask error
},
	}
	for _, tc := range tests {
@@ -480,6 +513,105 @@ func TestProcessorMarkAsComplete(t *testing.T) {
	}
}
// Test a scenario where the worker server cannot communicate with redis due to a network failure
// and the lease expires
func TestProcessorWithExpiredLease(t *testing.T) {
r := setup(t)
defer r.Close()
rdbClient := rdb.NewRDB(r)
m1 := h.NewTaskMessage("task1", nil)
tests := []struct {
pending []*base.TaskMessage
handler Handler
wantErrCount int
}{
{
pending: []*base.TaskMessage{m1},
handler: HandlerFunc(func(ctx context.Context, task *Task) error {
// make sure the task processing time exceeds lease duration
// to test expired lease.
time.Sleep(rdb.LeaseDuration + 10*time.Second)
return nil
}),
wantErrCount: 1, // ErrorHandler should still be called with ErrLeaseExpired
},
}
for _, tc := range tests {
h.FlushDB(t, r)
h.SeedPendingQueue(t, r, tc.pending, base.DefaultQueueName)
starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest)
done := make(chan struct{})
t.Cleanup(func() { close(done) })
// fake heartbeater which notifies lease expiration
go func() {
for {
select {
case w := <-starting:
// simulate expiration by resetting to some time in the past
w.lease.Reset(time.Now().Add(-5 * time.Second))
if !w.lease.NotifyExpiration() {
panic("Failed to notifiy lease expiration")
}
case <-finished:
// do nothing
case <-done:
return
}
}
}()
go fakeSyncer(syncCh, done)
p := newProcessor(processorParams{
logger: testLogger,
broker: rdbClient,
baseCtxFn: context.Background,
taskCheckInterval: defaultTaskCheckInterval,
retryDelayFunc: DefaultRetryDelayFunc,
isFailureFunc: defaultIsFailureFunc,
syncCh: syncCh,
cancelations: base.NewCancelations(),
concurrency: 10,
queues: defaultQueueConfig,
strictPriority: false,
errHandler: nil,
shutdownTimeout: defaultShutdownTimeout,
starting: starting,
finished: finished,
})
p.handler = tc.handler
var (
mu sync.Mutex // guards n and errs
n int // number of times error handler is called
errs []error // error passed to error handler
)
p.errHandler = ErrorHandlerFunc(func(ctx context.Context, t *Task, err error) {
mu.Lock()
defer mu.Unlock()
n++
errs = append(errs, err)
})
p.start(&sync.WaitGroup{})
time.Sleep(4 * time.Second)
p.shutdown()
if n != tc.wantErrCount {
t.Errorf("Unexpected number of error count: got %d, want %d", n, tc.wantErrCount)
continue
}
for i := 0; i < tc.wantErrCount; i++ {
if !errors.Is(errs[i], ErrLeaseExpired) {
t.Errorf("Unexpected error was passed to ErrorHandler: got %v want %v", errs[i], ErrLeaseExpired)
}
}
}
}
func TestProcessorQueues(t *testing.T) {
	sortOpt := cmp.Transformer("SortStrings", func(in []string) []string {
		out := append([]string(nil), in...) // Copy input to avoid mutating it
@@ -590,19 +722,21 @@ func TestProcessorWithStrictPriority(t *testing.T) {
	go fakeHeartbeater(starting, finished, done)
	go fakeSyncer(syncCh, done)
	p := newProcessor(processorParams{
		logger: testLogger,
		broker: rdbClient,
+		baseCtxFn: context.Background,
+		taskCheckInterval: defaultTaskCheckInterval,
		retryDelayFunc: DefaultRetryDelayFunc,
		isFailureFunc: defaultIsFailureFunc,
		syncCh: syncCh,
		cancelations: base.NewCancelations(),
		concurrency: 1, // Set concurrency to 1 to make sure tasks are processed one at a time.
		queues: queueCfg,
		strictPriority: true,
		errHandler: nil,
		shutdownTimeout: defaultShutdownTimeout,
		starting: starting,
		finished: finished,
	})
	p.handler = HandlerFunc(handler)
@@ -752,3 +886,111 @@ func TestNormalizeQueues(t *testing.T) {
		}
	}
}
func TestProcessorComputeDeadline(t *testing.T) {
now := time.Now()
p := processor{
logger: log.NewLogger(nil),
clock: timeutil.NewSimulatedClock(now),
}
tests := []struct {
desc string
msg *base.TaskMessage
want time.Time
}{
{
desc: "message with only timeout specified",
msg: &base.TaskMessage{
Timeout: int64((30 * time.Minute).Seconds()),
},
want: now.Add(30 * time.Minute),
},
{
desc: "message with only deadline specified",
msg: &base.TaskMessage{
Deadline: now.Add(24 * time.Hour).Unix(),
},
want: now.Add(24 * time.Hour),
},
{
desc: "message with both timeout and deadline set (now+timeout < deadline)",
msg: &base.TaskMessage{
Deadline: now.Add(24 * time.Hour).Unix(),
Timeout: int64((30 * time.Minute).Seconds()),
},
want: now.Add(30 * time.Minute),
},
{
desc: "message with both timeout and deadline set (now+timeout > deadline)",
msg: &base.TaskMessage{
Deadline: now.Add(10 * time.Minute).Unix(),
Timeout: int64((30 * time.Minute).Seconds()),
},
want: now.Add(10 * time.Minute),
},
{
desc: "message with both timeout and deadline set (now+timeout == deadline)",
msg: &base.TaskMessage{
Deadline: now.Add(30 * time.Minute).Unix(),
Timeout: int64((30 * time.Minute).Seconds()),
},
want: now.Add(30 * time.Minute),
},
{
desc: "message without timeout and deadline",
msg: &base.TaskMessage{},
want: now.Add(defaultTimeout),
},
}
for _, tc := range tests {
got := p.computeDeadline(tc.msg)
// Compare the Unix epoch with seconds granularity
if got.Unix() != tc.want.Unix() {
t.Errorf("%s: got=%v, want=%v", tc.desc, got.Unix(), tc.want.Unix())
}
}
}
func TestReturnPanicError(t *testing.T) {
task := NewTask("gen_thumbnail", h.JSON(map[string]interface{}{"src": "some/img/path"}))
tests := []struct {
name string
handler HandlerFunc
IsPanicError bool
}{
{
name: "should return panic error when occurred panic recovery",
handler: func(ctx context.Context, t *Task) error {
panic("something went terribly wrong")
},
IsPanicError: true,
},
{
name: "should return normal error when don't occur panic recovery",
handler: func(ctx context.Context, t *Task) error {
return fmt.Errorf("something went terribly wrong")
},
IsPanicError: false,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
p := processor{
logger: log.NewLogger(nil),
handler: tc.handler,
}
got := p.perform(context.Background(), task)
if tc.IsPanicError != IsPanicError(got) {
t.Errorf("%s: got=%t, want=%t", tc.name, IsPanicError(got), tc.IsPanicError)
}
if tc.IsPanicError && !strings.HasPrefix(got.Error(), "panic error cause by:") {
t.Error("wrong text msg for panic error")
}
})
}
}

View File

@@ -10,6 +10,7 @@ import (
"time" "time"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/errors"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
) )
@@ -76,19 +77,36 @@ func (r *recoverer) start(wg *sync.WaitGroup) {
	}()
}
+// ErrLeaseExpired error indicates that the task failed because the worker working on the task
+// could not extend its lease due to missing heartbeats. The worker may have crashed or got cut off from the network.
+var ErrLeaseExpired = errors.New("asynq: task lease expired")
func (r *recoverer) recover() {
-	// Get all tasks which have expired 30 seconds ago or earlier.
-	deadline := time.Now().Add(-30 * time.Second)
-	msgs, err := r.broker.ListDeadlineExceeded(deadline, r.queues...)
+	r.recoverLeaseExpiredTasks()
+	r.recoverStaleAggregationSets()
+}
+func (r *recoverer) recoverLeaseExpiredTasks() {
+	// Get all tasks which have expired 30 seconds ago or earlier to accommodate a certain amount of clock skew.
+	cutoff := time.Now().Add(-30 * time.Second)
+	msgs, err := r.broker.ListLeaseExpired(cutoff, r.queues...)
	if err != nil {
-		r.logger.Warn("recoverer: could not list deadline exceeded tasks")
+		r.logger.Warnf("recoverer: could not list lease expired tasks: %v", err)
		return
	}
	for _, msg := range msgs {
		if msg.Retried >= msg.Retry {
-			r.archive(msg, context.DeadlineExceeded)
+			r.archive(msg, ErrLeaseExpired)
		} else {
-			r.retry(msg, context.DeadlineExceeded)
+			r.retry(msg, ErrLeaseExpired)
		}
	}
}
+func (r *recoverer) recoverStaleAggregationSets() {
+	for _, qname := range r.queues {
+		if err := r.broker.ReclaimStaleAggregationSets(qname); err != nil {
+			r.logger.Warnf("recoverer: could not reclaim stale aggregation sets in queue %q: %v", qname, err)
+		}
+	}
+}
@@ -96,13 +114,13 @@ func (r *recoverer) recover() {
func (r *recoverer) retry(msg *base.TaskMessage, err error) {
	delay := r.retryDelayFunc(msg.Retried, err, NewTask(msg.Type, msg.Payload))
	retryAt := time.Now().Add(delay)
-	if err := r.broker.Retry(msg, retryAt, err.Error(), r.isFailureFunc(err)); err != nil {
-		r.logger.Warnf("recoverer: could not retry deadline exceeded task: %v", err)
+	if err := r.broker.Retry(context.Background(), msg, retryAt, err.Error(), r.isFailureFunc(err)); err != nil {
+		r.logger.Warnf("recoverer: could not retry lease expired task: %v", err)
	}
}
func (r *recoverer) archive(msg *base.TaskMessage, err error) {
-	if err := r.broker.Archive(msg, err.Error()); err != nil {
+	if err := r.broker.Archive(context.Background(), msg, err.Error()); err != nil {
		r.logger.Warnf("recoverer: could not move task to archive: %v", err)
	}
}
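Because ErrLeaseExpired is exported from the asynq package, a server's ErrorHandler can now distinguish lease-expiry failures (worker crashed, or was cut off from redis) from ordinary handler errors. A minimal sketch; the Redis address and log messages are illustrative only:

	package main

	import (
		"context"
		"errors"
		"log"

		"github.com/hibiken/asynq"
	)

	func main() {
		srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
			ErrorHandler: asynq.ErrorHandlerFunc(func(ctx context.Context, task *asynq.Task, err error) {
				if errors.Is(err, asynq.ErrLeaseExpired) {
					// The recoverer will retry or archive the task; just record the event.
					log.Printf("lease expired for task %q: %v", task.Type(), err)
					return
				}
				log.Printf("task %q failed: %v", task.Type(), err)
			}),
		})
		_ = srv // srv.Run(mux) would follow in a real worker process.
	}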

View File

@@ -10,9 +10,9 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
) )
func TestRecoverer(t *testing.T) { func TestRecoverer(t *testing.T) {
@@ -27,29 +27,25 @@ func TestRecoverer(t *testing.T) {
	t4.Retried = t4.Retry // t4 has reached its max retry count
	now := time.Now()
-	oneHourFromNow := now.Add(1 * time.Hour)
-	fiveMinutesFromNow := now.Add(5 * time.Minute)
-	fiveMinutesAgo := now.Add(-5 * time.Minute)
-	oneHourAgo := now.Add(-1 * time.Hour)
	tests := []struct {
		desc string
-		inProgress map[string][]*base.TaskMessage
-		deadlines map[string][]base.Z
+		active map[string][]*base.TaskMessage
+		lease map[string][]base.Z
		retry map[string][]base.Z
		archived map[string][]base.Z
		wantActive map[string][]*base.TaskMessage
-		wantDeadlines map[string][]base.Z
+		wantLease map[string][]base.Z
		wantRetry map[string][]*base.TaskMessage
		wantArchived map[string][]*base.TaskMessage
	}{
		{
			desc: "with one active task",
-			inProgress: map[string][]*base.TaskMessage{
+			active: map[string][]*base.TaskMessage{
				"default": {t1},
			},
-			deadlines: map[string][]base.Z{
-				"default": {{Message: t1, Score: fiveMinutesAgo.Unix()}},
+			lease: map[string][]base.Z{
+				"default": {{Message: t1, Score: now.Add(-1 * time.Minute).Unix()}},
			},
			retry: map[string][]base.Z{
				"default": {},
@@ -60,7 +56,7 @@
			wantActive: map[string][]*base.TaskMessage{
				"default": {},
			},
-			wantDeadlines: map[string][]base.Z{
+			wantLease: map[string][]base.Z{
				"default": {},
			},
			wantRetry: map[string][]*base.TaskMessage{
@@ -72,12 +68,12 @@
		},
		{
			desc: "with a task with max-retry reached",
-			inProgress: map[string][]*base.TaskMessage{
+			active: map[string][]*base.TaskMessage{
				"default": {t4},
				"critical": {},
			},
-			deadlines: map[string][]base.Z{
-				"default": {{Message: t4, Score: fiveMinutesAgo.Unix()}},
+			lease: map[string][]base.Z{
+				"default": {{Message: t4, Score: now.Add(-40 * time.Second).Unix()}},
				"critical": {},
			},
			retry: map[string][]base.Z{
@@ -92,7 +88,7 @@
				"default": {},
				"critical": {},
			},
-			wantDeadlines: map[string][]base.Z{
+			wantLease: map[string][]base.Z{
				"default": {},
				"critical": {},
			},
@@ -107,17 +103,17 @@
		},
		{
			desc: "with multiple active tasks, and one expired",
-			inProgress: map[string][]*base.TaskMessage{
+			active: map[string][]*base.TaskMessage{
				"default": {t1, t2},
				"critical": {t3},
			},
-			deadlines: map[string][]base.Z{
+			lease: map[string][]base.Z{
				"default": {
-					{Message: t1, Score: oneHourAgo.Unix()},
-					{Message: t2, Score: fiveMinutesFromNow.Unix()},
+					{Message: t1, Score: now.Add(-2 * time.Minute).Unix()},
+					{Message: t2, Score: now.Add(20 * time.Second).Unix()},
				},
				"critical": {
-					{Message: t3, Score: oneHourFromNow.Unix()},
+					{Message: t3, Score: now.Add(20 * time.Second).Unix()},
				},
			},
			retry: map[string][]base.Z{
@@ -132,9 +128,9 @@
				"default": {t2},
				"critical": {t3},
			},
-			wantDeadlines: map[string][]base.Z{
-				"default": {{Message: t2, Score: fiveMinutesFromNow.Unix()}},
-				"critical": {{Message: t3, Score: oneHourFromNow.Unix()}},
+			wantLease: map[string][]base.Z{
+				"default": {{Message: t2, Score: now.Add(20 * time.Second).Unix()}},
+				"critical": {{Message: t3, Score: now.Add(20 * time.Second).Unix()}},
			},
			wantRetry: map[string][]*base.TaskMessage{
				"default": {t1},
@@ -147,17 +143,17 @@
		},
		{
			desc: "with multiple expired active tasks",
-			inProgress: map[string][]*base.TaskMessage{
+			active: map[string][]*base.TaskMessage{
				"default": {t1, t2},
				"critical": {t3},
			},
-			deadlines: map[string][]base.Z{
+			lease: map[string][]base.Z{
				"default": {
-					{Message: t1, Score: oneHourAgo.Unix()},
-					{Message: t2, Score: oneHourFromNow.Unix()},
+					{Message: t1, Score: now.Add(-1 * time.Minute).Unix()},
+					{Message: t2, Score: now.Add(10 * time.Second).Unix()},
				},
				"critical": {
-					{Message: t3, Score: fiveMinutesAgo.Unix()},
+					{Message: t3, Score: now.Add(-1 * time.Minute).Unix()},
				},
			},
			retry: map[string][]base.Z{
@@ -172,8 +168,8 @@
				"default": {t2},
				"critical": {},
			},
-			wantDeadlines: map[string][]base.Z{
-				"default": {{Message: t2, Score: oneHourFromNow.Unix()}},
+			wantLease: map[string][]base.Z{
+				"default": {{Message: t2, Score: now.Add(10 * time.Second).Unix()}},
			},
			wantRetry: map[string][]*base.TaskMessage{
				"default": {t1},
@@ -186,11 +182,11 @@
		},
		{
			desc: "with empty active queue",
-			inProgress: map[string][]*base.TaskMessage{
+			active: map[string][]*base.TaskMessage{
				"default": {},
				"critical": {},
			},
-			deadlines: map[string][]base.Z{
+			lease: map[string][]base.Z{
				"default": {},
				"critical": {},
			},
@@ -206,7 +202,7 @@
				"default": {},
				"critical": {},
			},
-			wantDeadlines: map[string][]base.Z{
+			wantLease: map[string][]base.Z{
				"default": {},
				"critical": {},
			},
@@ -223,8 +219,8 @@
	for _, tc := range tests {
		h.FlushDB(t, r)
-		h.SeedAllActiveQueues(t, r, tc.inProgress)
-		h.SeedAllDeadlines(t, r, tc.deadlines)
+		h.SeedAllActiveQueues(t, r, tc.active)
+		h.SeedAllLease(t, r, tc.lease)
		h.SeedAllRetryQueues(t, r, tc.retry)
		h.SeedAllArchivedQueues(t, r, tc.archived)
@@ -249,10 +245,10 @@
				t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.ActiveKey(qname), diff)
			}
		}
-		for qname, want := range tc.wantDeadlines {
-			gotDeadlines := h.GetDeadlinesEntries(t, r, qname)
-			if diff := cmp.Diff(want, gotDeadlines, h.SortZSetEntryOpt); diff != "" {
-				t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.DeadlinesKey(qname), diff)
+		for qname, want := range tc.wantLease {
+			gotLease := h.GetLeaseEntries(t, r, qname)
+			if diff := cmp.Diff(want, gotLease, h.SortZSetEntryOpt); diff != "" {
+				t.Errorf("%s; mismatch found in %q; (-want,+got)\n%s", tc.desc, base.LeaseKey(qname), diff)
			}
		}
		cmpOpt := h.EquateInt64Approx(2) // allow up to two-second difference in `LastFailedAt`
@@ -260,7 +256,7 @@
			gotRetry := h.GetRetryMessages(t, r, qname)
			var wantRetry []*base.TaskMessage // Note: construct message here since `LastFailedAt` is relative to each test run
			for _, msg := range msgs {
-				wantRetry = append(wantRetry, h.TaskMessageAfterRetry(*msg, "context deadline exceeded", runTime))
+				wantRetry = append(wantRetry, h.TaskMessageAfterRetry(*msg, ErrLeaseExpired.Error(), runTime))
			}
			if diff := cmp.Diff(wantRetry, gotRetry, h.SortMsgOpt, cmpOpt); diff != "" {
				t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.RetryKey(qname), diff)
@@ -270,7 +266,7 @@
			gotArchived := h.GetArchivedMessages(t, r, qname)
			var wantArchived []*base.TaskMessage
			for _, msg := range msgs {
-				wantArchived = append(wantArchived, h.TaskMessageWithError(*msg, "context deadline exceeded", runTime))
+				wantArchived = append(wantArchived, h.TaskMessageWithError(*msg, ErrLeaseExpired.Error(), runTime))
			}
			if diff := cmp.Diff(wantArchived, gotArchived, h.SortMsgOpt, cmpOpt); diff != "" {
				t.Errorf("%s; mismatch found in %q: (-want, +got)\n%s", tc.desc, base.ArchivedKey(qname), diff)


@@ -10,11 +10,11 @@ import (
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
"github.com/robfig/cron/v3" "github.com/robfig/cron/v3"
) )
@@ -22,16 +22,21 @@ import (
// //
// Schedulers are safe for concurrent use by multiple goroutines. // Schedulers are safe for concurrent use by multiple goroutines.
type Scheduler struct { type Scheduler struct {
id string id string
state *base.ServerState
logger *log.Logger state *serverState
client *Client
rdb *rdb.RDB heartbeatInterval time.Duration
cron *cron.Cron logger *log.Logger
location *time.Location client *Client
done chan struct{} rdb *rdb.RDB
wg sync.WaitGroup cron *cron.Cron
errHandler func(task *Task, opts []Option, err error) location *time.Location
done chan struct{}
wg sync.WaitGroup
preEnqueueFunc func(task *Task, opts []Option)
postEnqueueFunc func(info *TaskInfo, err error)
errHandler func(task *Task, opts []Option, err error)
// guards idmap // guards idmap
mu sync.Mutex mu sync.Mutex
@@ -41,17 +46,48 @@ type Scheduler struct {
idmap map[string]cron.EntryID idmap map[string]cron.EntryID
} }
const defaultHeartbeatInterval = 10 * time.Second
// NewScheduler returns a new Scheduler instance given the redis connection option. // NewScheduler returns a new Scheduler instance given the redis connection option.
// The parameter opts is optional; defaults will be used if opts is set to nil. // The parameter opts is optional; defaults will be used if opts is set to nil.
func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler { func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
c, ok := r.MakeRedisClient().(redis.UniversalClient) scheduler := newScheduler(opts)
redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok { if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r)) panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
} }
rdb := rdb.NewRDB(redisClient)
scheduler.rdb = rdb
scheduler.client = &Client{broker: rdb, sharedConnection: false}
return scheduler
}
// NewSchedulerFromRedisClient returns a new instance of Scheduler given a redis.UniversalClient
// The parameter opts is optional; defaults will be used if opts is set to nil.
// Warning: The underlying redis connection pool will not be closed by Asynq, you are responsible for closing it.
func NewSchedulerFromRedisClient(c redis.UniversalClient, opts *SchedulerOpts) *Scheduler {
scheduler := newScheduler(opts)
scheduler.rdb = rdb.NewRDB(c)
scheduler.client = NewClientFromRedisClient(c)
return scheduler
}
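A minimal usage sketch for the new constructor (the Redis address, cron spec, and task name below are illustrative, not from the source):

    rc := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // assumed local Redis
    scheduler := asynq.NewSchedulerFromRedisClient(rc, nil)
    if _, err := scheduler.Register("@every 1m", asynq.NewTask("report:generate", nil)); err != nil {
        log.Fatal(err)
    }
    if err := scheduler.Run(); err != nil { // blocks until Shutdown is called
        log.Fatal(err)
    }
    // asynq will not close rc; the caller owns the connection pool.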
func newScheduler(opts *SchedulerOpts) *Scheduler {
if opts == nil { if opts == nil {
opts = &SchedulerOpts{} opts = &SchedulerOpts{}
} }
heartbeatInterval := opts.HeartbeatInterval
if heartbeatInterval <= 0 {
heartbeatInterval = defaultHeartbeatInterval
}
logger := log.NewLogger(opts.Logger) logger := log.NewLogger(opts.Logger)
loglevel := opts.LogLevel loglevel := opts.LogLevel
if loglevel == level_unspecified { if loglevel == level_unspecified {
@@ -65,16 +101,17 @@ func NewScheduler(r RedisConnOpt, opts *SchedulerOpts) *Scheduler {
} }
return &Scheduler{ return &Scheduler{
id: generateSchedulerID(), id: generateSchedulerID(),
state: base.NewServerState(), state: &serverState{value: srvStateNew},
logger: logger, heartbeatInterval: heartbeatInterval,
client: NewClient(r), logger: logger,
rdb: rdb.NewRDB(c), cron: cron.New(cron.WithLocation(loc)),
cron: cron.New(cron.WithLocation(loc)), location: loc,
location: loc, done: make(chan struct{}),
done: make(chan struct{}), preEnqueueFunc: opts.PreEnqueueFunc,
errHandler: opts.EnqueueErrorHandler, postEnqueueFunc: opts.PostEnqueueFunc,
idmap: make(map[string]cron.EntryID), errHandler: opts.EnqueueErrorHandler,
idmap: make(map[string]cron.EntryID),
} }
} }
@@ -88,6 +125,15 @@ func generateSchedulerID() string {
// SchedulerOpts specifies scheduler options. // SchedulerOpts specifies scheduler options.
type SchedulerOpts struct { type SchedulerOpts struct {
// HeartbeatInterval specifies the interval between scheduler heartbeats.
//
// If unset, zero or a negative value, the interval is set to 10 seconds.
//
// Note: Setting this value too low may add significant load to redis.
//
// By default, HeartbeatInterval is set to 10 seconds.
HeartbeatInterval time.Duration
// Logger specifies the logger used by the scheduler instance. // Logger specifies the logger used by the scheduler instance.
// //
// If unset, the default logger is used. // If unset, the default logger is used.
@@ -103,28 +149,44 @@ type SchedulerOpts struct {
// If unset, the UTC time zone (time.UTC) is used. // If unset, the UTC time zone (time.UTC) is used.
Location *time.Location Location *time.Location
// PreEnqueueFunc, if provided, is called before a task gets enqueued by Scheduler.
// The callback function should return quickly to not block the current thread.
PreEnqueueFunc func(task *Task, opts []Option)
// PostEnqueueFunc, if provided, is called after a task gets enqueued by Scheduler.
// The callback function should return quickly to not block the current thread.
PostEnqueueFunc func(info *TaskInfo, err error)
// Deprecated: Use PostEnqueueFunc instead
// EnqueueErrorHandler gets called when scheduler cannot enqueue a registered task // EnqueueErrorHandler gets called when scheduler cannot enqueue a registered task
// due to an error. // due to an error.
EnqueueErrorHandler func(task *Task, opts []Option, err error) EnqueueErrorHandler func(task *Task, opts []Option, err error)
} }
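A hedged configuration sketch combining the new options; the handler bodies and interval are placeholders:

    opts := &asynq.SchedulerOpts{
        HeartbeatInterval: 30 * time.Second, // values <= 0 fall back to the 10s default
        PreEnqueueFunc: func(task *asynq.Task, opts []asynq.Option) {
            log.Printf("about to enqueue %q", task.Type())
        },
        PostEnqueueFunc: func(info *asynq.TaskInfo, err error) {
            if err != nil {
                log.Printf("enqueue failed: %v", err)
            }
        },
    }
    scheduler := asynq.NewScheduler(asynq.RedisClientOpt{Addr: "localhost:6379"}, opts)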
// enqueueJob encapsulates the job of enqueing a task and recording the event. // enqueueJob encapsulates the job of enqueuing a task and recording the event.
type enqueueJob struct { type enqueueJob struct {
id uuid.UUID id uuid.UUID
cronspec string cronspec string
task *Task task *Task
opts []Option opts []Option
location *time.Location location *time.Location
logger *log.Logger logger *log.Logger
client *Client client *Client
rdb *rdb.RDB rdb *rdb.RDB
errHandler func(task *Task, opts []Option, err error) preEnqueueFunc func(task *Task, opts []Option)
postEnqueueFunc func(info *TaskInfo, err error)
errHandler func(task *Task, opts []Option, err error)
} }
func (j *enqueueJob) Run() { func (j *enqueueJob) Run() {
if j.preEnqueueFunc != nil {
j.preEnqueueFunc(j.task, j.opts)
}
info, err := j.client.Enqueue(j.task, j.opts...) info, err := j.client.Enqueue(j.task, j.opts...)
if j.postEnqueueFunc != nil {
j.postEnqueueFunc(info, err)
}
if err != nil { if err != nil {
j.logger.Errorf("scheduler could not enqueue a task %+v: %v", j.task, err)
if j.errHandler != nil { if j.errHandler != nil {
j.errHandler(j.task, j.opts, err) j.errHandler(j.task, j.opts, err)
} }
@@ -137,7 +199,7 @@ func (j *enqueueJob) Run() {
} }
err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event) err = j.rdb.RecordSchedulerEnqueueEvent(j.id.String(), event)
if err != nil { if err != nil {
j.logger.Errorf("scheduler could not record enqueue event of enqueued task %+v: %v", j.task, err) j.logger.Warnf("scheduler could not record enqueue event of enqueued task %s: %v", info.ID, err)
} }
} }
@@ -145,15 +207,17 @@ func (j *enqueueJob) Run() {
// It returns an ID of the newly registered entry. // It returns an ID of the newly registered entry.
func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entryID string, err error) { func (s *Scheduler) Register(cronspec string, task *Task, opts ...Option) (entryID string, err error) {
job := &enqueueJob{ job := &enqueueJob{
id: uuid.New(), id: uuid.New(),
cronspec: cronspec, cronspec: cronspec,
task: task, task: task,
opts: opts, opts: opts,
location: s.location, location: s.location,
client: s.client, client: s.client,
rdb: s.rdb, rdb: s.rdb,
logger: s.logger, logger: s.logger,
errHandler: s.errHandler, preEnqueueFunc: s.preEnqueueFunc,
postEnqueueFunc: s.postEnqueueFunc,
errHandler: s.errHandler,
} }
cronID, err := s.cron.AddJob(cronspec, job) cronID, err := s.cron.AddJob(cronspec, job)
if err != nil { if err != nil {
@@ -193,23 +257,43 @@ func (s *Scheduler) Run() error {
// Start starts the scheduler. // Start starts the scheduler.
// It returns an error if the scheduler is already running or has been shutdown. // It returns an error if the scheduler is already running or has been shutdown.
func (s *Scheduler) Start() error { func (s *Scheduler) Start() error {
switch s.state.Get() { if err := s.start(); err != nil {
case base.StateActive: return err
return fmt.Errorf("asynq: the scheduler is already running")
case base.StateClosed:
return fmt.Errorf("asynq: the scheduler has already been stopped")
} }
s.logger.Info("Scheduler starting") s.logger.Info("Scheduler starting")
s.logger.Infof("Scheduler timezone is set to %v", s.location) s.logger.Infof("Scheduler timezone is set to %v", s.location)
s.cron.Start() s.cron.Start()
s.wg.Add(1) s.wg.Add(1)
go s.runHeartbeater() go s.runHeartbeater()
s.state.Set(base.StateActive) return nil
}
// Checks server state and returns an error if pre-condition is not met.
// Otherwise it sets the server state to active.
func (s *Scheduler) start() error {
s.state.mu.Lock()
defer s.state.mu.Unlock()
switch s.state.value {
case srvStateActive:
return fmt.Errorf("asynq: the scheduler is already running")
case srvStateClosed:
return fmt.Errorf("asynq: the scheduler has already been stopped")
}
s.state.value = srvStateActive
return nil return nil
} }
// Shutdown stops and shuts down the scheduler. // Shutdown stops and shuts down the scheduler.
func (s *Scheduler) Shutdown() { func (s *Scheduler) Shutdown() {
s.state.mu.Lock()
if s.state.value == srvStateNew || s.state.value == srvStateClosed {
// scheduler is not running, do nothing and return.
s.state.mu.Unlock()
return
}
s.state.value = srvStateClosed
s.state.mu.Unlock()
s.logger.Info("Scheduler shutting down") s.logger.Info("Scheduler shutting down")
close(s.done) // signal heartbeater to stop close(s.done) // signal heartbeater to stop
ctx := s.cron.Stop() ctx := s.cron.Stop()
@@ -217,20 +301,23 @@ func (s *Scheduler) Shutdown() {
s.wg.Wait() s.wg.Wait()
s.clearHistory() s.clearHistory()
s.client.Close() if err := s.client.Close(); err != nil {
s.rdb.Close() s.logger.Errorf("Failed to close redis client connection: %v", err)
s.state.Set(base.StateClosed) }
s.logger.Info("Scheduler stopped") s.logger.Info("Scheduler stopped")
} }
func (s *Scheduler) runHeartbeater() { func (s *Scheduler) runHeartbeater() {
defer s.wg.Done() defer s.wg.Done()
ticker := time.NewTicker(5 * time.Second) ticker := time.NewTicker(s.heartbeatInterval)
for { for {
select { select {
case <-s.done: case <-s.done:
s.logger.Debugf("Scheduler heatbeater shutting down") s.logger.Debugf("Scheduler heatbeater shutting down")
s.rdb.ClearSchedulerEntries(s.id) if err := s.rdb.ClearSchedulerEntries(s.id); err != nil {
s.logger.Errorf("Failed to clear the scheduler entries: %v", err)
}
ticker.Stop()
return return
case <-ticker.C: case <-ticker.C:
s.beat() s.beat()
@@ -254,8 +341,7 @@ func (s *Scheduler) beat() {
} }
entries = append(entries, e) entries = append(entries, e)
} }
s.logger.Debugf("Writing entries %v", entries) if err := s.rdb.WriteSchedulerEntries(s.id, entries, s.heartbeatInterval*2); err != nil {
if err := s.rdb.WriteSchedulerEntries(s.id, entries, 5*time.Second); err != nil {
s.logger.Warnf("Scheduler could not write heartbeat data: %v", err) s.logger.Warnf("Scheduler could not write heartbeat data: %v", err)
} }
} }
@@ -276,3 +362,14 @@ func (s *Scheduler) clearHistory() {
} }
} }
} }
// Ping performs a ping against the redis connection.
func (s *Scheduler) Ping() error {
s.state.mu.Lock()
defer s.state.mu.Unlock()
if s.state.value == srvStateClosed {
return nil
}
return s.rdb.Ping()
}


@@ -10,8 +10,10 @@ import (
"time" "time"
"github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq/internal/asynqtest" "github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/testutil"
) )
func TestSchedulerRegister(t *testing.T) { func TestSchedulerRegister(t *testing.T) {
@@ -57,6 +59,7 @@ func TestSchedulerRegister(t *testing.T) {
r := setup(t) r := setup(t)
// Tests for new redis connection.
for _, tc := range tests { for _, tc := range tests {
scheduler := NewScheduler(getRedisConnOpt(t), nil) scheduler := NewScheduler(getRedisConnOpt(t), nil)
if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil { if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
@@ -69,8 +72,30 @@ func TestSchedulerRegister(t *testing.T) {
time.Sleep(tc.wait) time.Sleep(tc.wait)
scheduler.Shutdown() scheduler.Shutdown()
got := asynqtest.GetPendingMessages(t, r, tc.queue) got := testutil.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.want, got, asynqtest.IgnoreIDOpt); diff != "" { if diff := cmp.Diff(tc.want, got, testutil.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
}
}
r = setup(t)
// Tests for existing redis connection.
for _, tc := range tests {
redisClient := getRedisConnOpt(t).MakeRedisClient().(redis.UniversalClient)
scheduler := NewSchedulerFromRedisClient(redisClient, nil)
if _, err := scheduler.Register(tc.cronspec, tc.task, tc.opts...); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
time.Sleep(tc.wait)
scheduler.Shutdown()
got := testutil.GetPendingMessages(t, r, tc.queue)
if diff := cmp.Diff(tc.want, got, testutil.IgnoreIDOpt); diff != "" {
t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff) t.Errorf("mismatch found in queue %q: (-want,+got)\n%s", tc.queue, diff)
} }
} }
@@ -89,7 +114,7 @@ func TestSchedulerWhenRedisDown(t *testing.T) {
// Connect to non-existent redis instance to simulate a redis server being down. // Connect to non-existent redis instance to simulate a redis server being down.
scheduler := NewScheduler( scheduler := NewScheduler(
RedisClientOpt{Addr: ":9876"}, RedisClientOpt{Addr: ":9876"}, // no Redis listening to this port.
&SchedulerOpts{EnqueueErrorHandler: errorHandler}, &SchedulerOpts{EnqueueErrorHandler: errorHandler},
) )
@@ -148,9 +173,62 @@ func TestSchedulerUnregister(t *testing.T) {
time.Sleep(tc.wait) time.Sleep(tc.wait)
scheduler.Shutdown() scheduler.Shutdown()
got := asynqtest.GetPendingMessages(t, r, tc.queue) got := testutil.GetPendingMessages(t, r, tc.queue)
if len(got) != 0 { if len(got) != 0 {
t.Errorf("%d tasks were enqueued, want zero", len(got)) t.Errorf("%d tasks were enqueued, want zero", len(got))
} }
} }
} }
func TestSchedulerPostAndPreEnqueueHandler(t *testing.T) {
var (
preMu sync.Mutex
preCounter int
postMu sync.Mutex
postCounter int
)
preHandler := func(task *Task, opts []Option) {
preMu.Lock()
preCounter++
preMu.Unlock()
}
postHandler := func(info *TaskInfo, err error) {
postMu.Lock()
postCounter++
postMu.Unlock()
}
// Use a live redis connection for this test.
scheduler := NewScheduler(
getRedisConnOpt(t),
&SchedulerOpts{
PreEnqueueFunc: preHandler,
PostEnqueueFunc: postHandler,
},
)
task := NewTask("test", nil)
if _, err := scheduler.Register("@every 3s", task); err != nil {
t.Fatal(err)
}
if err := scheduler.Start(); err != nil {
t.Fatal(err)
}
// Scheduler should attempt to enqueue the task three times (every 3s).
time.Sleep(10 * time.Second)
scheduler.Shutdown()
preMu.Lock()
if preCounter != 3 {
t.Errorf("PreEnqueueFunc was called %d times, want 3", preCounter)
}
preMu.Unlock()
postMu.Lock()
if postCounter != 3 {
t.Errorf("PostEnqueueFunc was called %d times, want 3", postCounter)
}
postMu.Unlock()
}


@@ -144,9 +144,7 @@ func (mux *ServeMux) HandleFunc(pattern string, handler func(context.Context, *T
func (mux *ServeMux) Use(mws ...MiddlewareFunc) { func (mux *ServeMux) Use(mws ...MiddlewareFunc) {
mux.mu.Lock() mux.mu.Lock()
defer mux.mu.Unlock() defer mux.mu.Unlock()
for _, fn := range mws { mux.mws = append(mux.mws, mws...)
mux.mws = append(mux.mws, fn)
}
} }
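For reference, a middleware sketch that could be attached via Use; the logging body is a placeholder:

    loggingMW := func(next asynq.Handler) asynq.Handler {
        return asynq.HandlerFunc(func(ctx context.Context, t *asynq.Task) error {
            start := time.Now()
            err := next.ProcessTask(ctx, t)
            log.Printf("task %q finished in %v (err=%v)", t.Type(), time.Since(start), err)
            return err
        })
    }
    mux := asynq.NewServeMux()
    mux.Use(loggingMW) // variadic: mux.Use(mw1, mw2, ...) appends in registration order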
// NotFound returns an error indicating that the handler was not found for the given task. // NotFound returns an error indicating that the handler was not found for the given task.

server.go

@@ -9,16 +9,16 @@ import (
"errors" "errors"
"fmt" "fmt"
"math" "math"
"math/rand" "math/rand/v2"
"runtime" "runtime"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v8"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
) )
// Server is responsible for task processing and task lifecycle management. // Server is responsible for task processing and task lifecycle management.
@@ -37,8 +37,11 @@ type Server struct {
logger *log.Logger logger *log.Logger
broker base.Broker broker base.Broker
// When a Server has been created with an existing Redis connection, we do
// not want to close it.
sharedConnection bool
state *base.ServerState state *serverState
// wait group to wait for all goroutines to finish. // wait group to wait for all goroutines to finish.
wg sync.WaitGroup wg sync.WaitGroup
@@ -50,6 +53,44 @@ type Server struct {
recoverer *recoverer recoverer *recoverer
healthchecker *healthchecker healthchecker *healthchecker
janitor *janitor janitor *janitor
aggregator *aggregator
}
type serverState struct {
mu sync.Mutex
value serverStateValue
}
type serverStateValue int
const (
// StateNew represents a new server. Server begins in
// this state and then transitions to StateActive when
// Start or Run is called.
srvStateNew serverStateValue = iota
// StateActive indicates the server is up and active.
srvStateActive
// StateStopped indicates the server is up but no longer processing new tasks.
srvStateStopped
// StateClosed indicates the server has been shutdown.
srvStateClosed
)
var serverStates = []string{
"new",
"active",
"stopped",
"closed",
}
func (s serverStateValue) String() string {
if srvStateNew <= s && s <= srvStateClosed {
return serverStates[s]
}
return "unknown status"
} }
// Config specifies the server's background-task processing behavior. // Config specifies the server's background-task processing behavior.
@@ -57,9 +98,24 @@ type Config struct {
// Maximum number of concurrent processing of tasks. // Maximum number of concurrent processing of tasks.
// //
// If set to a zero or negative value, NewServer will overwrite the value // If set to a zero or negative value, NewServer will overwrite the value
// to the number of CPUs usable by the currennt process. // to the number of CPUs usable by the current process.
Concurrency int Concurrency int
// BaseContext optionally specifies a function that returns the base context for Handler invocations on this server.
//
// If BaseContext is nil, the default is context.Background().
// If this is defined, then it MUST return a non-nil context.
BaseContext func() context.Context
// TaskCheckInterval specifies the interval between checks for new tasks to process when all queues are empty.
//
// If unset, zero or a negative value, the interval is set to 1 second.
//
// Note: Setting this value too low may add significant load to redis.
//
// By default, TaskCheckInterval is set to 1 second.
TaskCheckInterval time.Duration
// Function to calculate retry delay for a failed task. // Function to calculate retry delay for a failed task.
// //
// By default, it uses exponential backoff algorithm to calculate the delay. // By default, it uses exponential backoff algorithm to calculate the delay.
@@ -118,6 +174,16 @@ type Config struct {
// }) // })
// //
// ErrorHandler: asynq.ErrorHandlerFunc(reportError) // ErrorHandler: asynq.ErrorHandlerFunc(reportError)
// we can also handle panic errors, for example:
// func reportError(ctx context.Context, task *asynq.Task, err error) {
// if asynq.IsPanic(err) {
// errorReportingService.Notify(err)
// }
// }
//
// ErrorHandler: asynq.ErrorHandlerFunc(reportError)
ErrorHandler ErrorHandler ErrorHandler ErrorHandler
// Logger specifies the logger used by the server instance. // Logger specifies the logger used by the server instance.
@@ -144,9 +210,72 @@ type Config struct {
// //
// If unset or zero, the interval is set to 15 seconds. // If unset or zero, the interval is set to 15 seconds.
HealthCheckInterval time.Duration HealthCheckInterval time.Duration
// DelayedTaskCheckInterval specifies the interval between checks run on 'scheduled' and 'retry'
// tasks, and forwarding them to 'pending' state if they are ready to be processed.
//
// If unset or zero, the interval is set to 5 seconds.
DelayedTaskCheckInterval time.Duration
// GroupGracePeriod specifies the amount of time the server will wait for an incoming task before aggregating
// the tasks in a group. If an incoming task is received within this period, the server will wait for another
// period of the same length, up to GroupMaxDelay if specified.
//
// If unset or zero, the grace period is set to 1 minute.
// Minimum duration for GroupGracePeriod is 1 second. If value specified is less than a second, the call to
// NewServer will panic.
GroupGracePeriod time.Duration
// GroupMaxDelay specifies the maximum amount of time the server will wait for incoming tasks before aggregating
// the tasks in a group.
//
// If unset or zero, no delay limit is used.
GroupMaxDelay time.Duration
// GroupMaxSize specifies the maximum number of tasks that can be aggregated into a single task within a group.
// If GroupMaxSize is reached, the server will aggregate the tasks into one immediately.
//
// If unset or zero, no size limit is used.
GroupMaxSize int
// GroupAggregator specifies the aggregation function used to aggregate multiple tasks in a group into one task.
//
// If unset or nil, the group aggregation feature will be disabled on the server.
GroupAggregator GroupAggregator
// JanitorInterval specifies the average interval of janitor checks for expired completed tasks.
//
// If unset or zero, default interval of 8 seconds is used.
JanitorInterval time.Duration
// JanitorBatchSize specifies the number of expired completed tasks to be deleted in one run.
//
// If unset or zero, default batch size of 100 is used.
// Avoid setting a large batch size to prevent a long-running script.
JanitorBatchSize int
} }
// An ErrorHandler handles an error occured during task processing. // GroupAggregator aggregates a group of tasks into one before the tasks are passed to the Handler.
type GroupAggregator interface {
// Aggregate aggregates the given tasks in a group with the given group name,
// and returns a new task which is the aggregation of those tasks.
//
// Use NewTask(typename, payload, opts...) to set any options for the aggregated task.
// The Queue option, if provided, will be ignored and the aggregated task will always be enqueued
// to the same queue the group belonged.
Aggregate(group string, tasks []*Task) *Task
}
// The GroupAggregatorFunc type is an adapter to allow the use of ordinary functions as a GroupAggregator.
// If f is a function with the appropriate signature, GroupAggregatorFunc(f) is a GroupAggregator that calls f.
type GroupAggregatorFunc func(group string, tasks []*Task) *Task
// Aggregate calls fn(group, tasks)
func (fn GroupAggregatorFunc) Aggregate(group string, tasks []*Task) *Task {
return fn(group, tasks)
}
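A hedged sketch tying the new grouping options together; the aggregation logic (joining payloads with newlines) and the address are illustrative only:

    srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
        Concurrency:      10,
        GroupGracePeriod: 2 * time.Minute, // values under 1s make NewServer panic
        GroupMaxDelay:    10 * time.Minute,
        GroupMaxSize:     20,
        GroupAggregator: asynq.GroupAggregatorFunc(func(group string, tasks []*asynq.Task) *asynq.Task {
            var b bytes.Buffer
            for _, t := range tasks {
                b.Write(t.Payload())
                b.WriteByte('\n')
            }
            return asynq.NewTask("aggregated:"+group, b.Bytes())
        }),
    })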
// An ErrorHandler handles an error that occurred during task processing.
type ErrorHandler interface { type ErrorHandler interface {
HandleError(ctx context.Context, task *Task, err error) HandleError(ctx context.Context, task *Task, err error)
} }
@@ -271,9 +400,8 @@ func toInternalLogLevel(l LogLevel) log.Level {
// DefaultRetryDelayFunc is the default RetryDelayFunc used if one is not specified in Config. // DefaultRetryDelayFunc is the default RetryDelayFunc used if one is not specified in Config.
// It uses exponential back-off strategy to calculate the retry delay. // It uses exponential back-off strategy to calculate the retry delay.
func DefaultRetryDelayFunc(n int, e error, t *Task) time.Duration { func DefaultRetryDelayFunc(n int, e error, t *Task) time.Duration {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
// Formula taken from https://github.com/mperham/sidekiq. // Formula taken from https://github.com/mperham/sidekiq.
s := int(math.Pow(float64(n), 4)) + 15 + (r.Intn(30) * (n + 1)) s := int(math.Pow(float64(n), 4)) + 15 + (rand.IntN(30) * (n + 1))
return time.Duration(s) * time.Second return time.Duration(s) * time.Second
} }
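Worked example of the formula: on the second retry (n=2) the delay is 2^4 + 15 + rand.IntN(30)*(2+1) seconds, i.e. somewhere between 31 and 118 seconds. Since math/rand/v2 seeds its generator automatically, the per-call rand.New(rand.NewSource(...)) setup on the old side is no longer needed.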
@@ -284,22 +412,51 @@ var defaultQueueConfig = map[string]int{
} }
const ( const (
defaultTaskCheckInterval = 1 * time.Second
defaultShutdownTimeout = 8 * time.Second defaultShutdownTimeout = 8 * time.Second
defaultHealthCheckInterval = 15 * time.Second defaultHealthCheckInterval = 15 * time.Second
defaultDelayedTaskCheckInterval = 5 * time.Second
defaultGroupGracePeriod = 1 * time.Minute
defaultJanitorInterval = 8 * time.Second
defaultJanitorBatchSize = 100
) )
// NewServer returns a new Server given a redis connection option // NewServer returns a new Server given a redis connection option
// and server configuration. // and server configuration.
func NewServer(r RedisConnOpt, cfg Config) *Server { func NewServer(r RedisConnOpt, cfg Config) *Server {
c, ok := r.MakeRedisClient().(redis.UniversalClient) redisClient, ok := r.MakeRedisClient().(redis.UniversalClient)
if !ok { if !ok {
panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r)) panic(fmt.Sprintf("asynq: unsupported RedisConnOpt type %T", r))
} }
server := NewServerFromRedisClient(redisClient, cfg)
server.sharedConnection = false
return server
}
// NewServerFromRedisClient returns a new instance of Server given a redis.UniversalClient
// and server configuration.
// Warning: The underlying redis connection pool will not be closed by Asynq, you are responsible for closing it.
func NewServerFromRedisClient(c redis.UniversalClient, cfg Config) *Server {
baseCtxFn := cfg.BaseContext
if baseCtxFn == nil {
baseCtxFn = context.Background
}
n := cfg.Concurrency n := cfg.Concurrency
if n < 1 { if n < 1 {
n = runtime.NumCPU() n = runtime.NumCPU()
} }
taskCheckInterval := cfg.TaskCheckInterval
if taskCheckInterval <= 0 {
taskCheckInterval = defaultTaskCheckInterval
}
delayFunc := cfg.RetryDelayFunc delayFunc := cfg.RetryDelayFunc
if delayFunc == nil { if delayFunc == nil {
delayFunc = DefaultRetryDelayFunc delayFunc = DefaultRetryDelayFunc
@@ -332,6 +489,14 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
if healthcheckInterval == 0 { if healthcheckInterval == 0 {
healthcheckInterval = defaultHealthCheckInterval healthcheckInterval = defaultHealthCheckInterval
} }
// TODO: Create a helper to check for zero value and fall back to default (e.g. getDurationOrDefault())
groupGracePeriod := cfg.GroupGracePeriod
if groupGracePeriod == 0 {
groupGracePeriod = defaultGroupGracePeriod
}
if groupGracePeriod < time.Second {
panic("GroupGracePeriod cannot be less than a second")
}
logger := log.NewLogger(cfg.Logger) logger := log.NewLogger(cfg.Logger)
loglevel := cfg.LogLevel loglevel := cfg.LogLevel
if loglevel == level_unspecified { if loglevel == level_unspecified {
@@ -343,7 +508,7 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
starting := make(chan *workerInfo) starting := make(chan *workerInfo)
finished := make(chan *base.TaskMessage) finished := make(chan *base.TaskMessage)
syncCh := make(chan *syncRequest) syncCh := make(chan *syncRequest)
state := base.NewServerState() srvState := &serverState{value: srvStateNew}
cancels := base.NewCancelations() cancels := base.NewCancelations()
syncer := newSyncer(syncerParams{ syncer := newSyncer(syncerParams{
@@ -358,15 +523,19 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
concurrency: n, concurrency: n,
queues: queues, queues: queues,
strictPriority: cfg.StrictPriority, strictPriority: cfg.StrictPriority,
state: state, state: srvState,
starting: starting, starting: starting,
finished: finished, finished: finished,
}) })
delayedTaskCheckInterval := cfg.DelayedTaskCheckInterval
if delayedTaskCheckInterval == 0 {
delayedTaskCheckInterval = defaultDelayedTaskCheckInterval
}
forwarder := newForwarder(forwarderParams{ forwarder := newForwarder(forwarderParams{
logger: logger, logger: logger,
broker: rdb, broker: rdb,
queues: qnames, queues: qnames,
interval: 5 * time.Second, interval: delayedTaskCheckInterval,
}) })
subscriber := newSubscriber(subscriberParams{ subscriber := newSubscriber(subscriberParams{
logger: logger, logger: logger,
@@ -374,19 +543,21 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
cancelations: cancels, cancelations: cancels,
}) })
processor := newProcessor(processorParams{ processor := newProcessor(processorParams{
logger: logger, logger: logger,
broker: rdb, broker: rdb,
retryDelayFunc: delayFunc, retryDelayFunc: delayFunc,
isFailureFunc: isFailureFunc, taskCheckInterval: taskCheckInterval,
syncCh: syncCh, baseCtxFn: baseCtxFn,
cancelations: cancels, isFailureFunc: isFailureFunc,
concurrency: n, syncCh: syncCh,
queues: queues, cancelations: cancels,
strictPriority: cfg.StrictPriority, concurrency: n,
errHandler: cfg.ErrorHandler, queues: queues,
shutdownTimeout: shutdownTimeout, strictPriority: cfg.StrictPriority,
starting: starting, errHandler: cfg.ErrorHandler,
finished: finished, shutdownTimeout: shutdownTimeout,
starting: starting,
finished: finished,
}) })
recoverer := newRecoverer(recovererParams{ recoverer := newRecoverer(recovererParams{
logger: logger, logger: logger,
@@ -402,24 +573,50 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
interval: healthcheckInterval, interval: healthcheckInterval,
healthcheckFunc: cfg.HealthCheckFunc, healthcheckFunc: cfg.HealthCheckFunc,
}) })
janitorInterval := cfg.JanitorInterval
if janitorInterval == 0 {
janitorInterval = defaultJanitorInterval
}
janitorBatchSize := cfg.JanitorBatchSize
if janitorBatchSize == 0 {
janitorBatchSize = defaultJanitorBatchSize
}
if janitorBatchSize > defaultJanitorBatchSize {
logger.Warnf("Janitor batch size of %d is greater than the recommended batch size of %d. "+
"This might cause a long-running script", janitorBatchSize, defaultJanitorBatchSize)
}
janitor := newJanitor(janitorParams{ janitor := newJanitor(janitorParams{
logger: logger, logger: logger,
broker: rdb, broker: rdb,
queues: qnames, queues: qnames,
interval: 8 * time.Second, interval: janitorInterval,
batchSize: janitorBatchSize,
})
aggregator := newAggregator(aggregatorParams{
logger: logger,
broker: rdb,
queues: qnames,
gracePeriod: groupGracePeriod,
maxDelay: cfg.GroupMaxDelay,
maxSize: cfg.GroupMaxSize,
groupAggregator: cfg.GroupAggregator,
}) })
return &Server{ return &Server{
logger: logger, logger: logger,
broker: rdb, broker: rdb,
state: state, sharedConnection: true,
forwarder: forwarder, state: srvState,
processor: processor, forwarder: forwarder,
syncer: syncer, processor: processor,
heartbeater: heartbeater, syncer: syncer,
subscriber: subscriber, heartbeater: heartbeater,
recoverer: recoverer, subscriber: subscriber,
healthchecker: healthchecker, recoverer: recoverer,
janitor: janitor, healthchecker: healthchecker,
janitor: janitor,
aggregator: aggregator,
} }
} }
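A sharing sketch for the constructor above; the address is illustrative:

    rc := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // assumed local Redis
    client := asynq.NewClientFromRedisClient(rc) // reuses the same pool as the server
    srv := asynq.NewServerFromRedisClient(rc, asynq.Config{Concurrency: 10})
    // srv.Shutdown() leaves rc open (sharedConnection is true); close it yourself.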
@@ -435,6 +632,10 @@ func NewServer(r RedisConnOpt, cfg Config) *Server {
// One exception to this rule is when ProcessTask returns a SkipRetry error. // One exception to this rule is when ProcessTask returns a SkipRetry error.
// If the returned error is SkipRetry or an error wraps SkipRetry, retry is // If the returned error is SkipRetry or an error wraps SkipRetry, retry is
// skipped and the task will be immediately archived instead. // skipped and the task will be immediately archived instead.
//
// Another exception to this rule is when ProcessTask returns a RevokeTask error.
// If the returned error is RevokeTask or an error wraps RevokeTask, the task
// will not be retried or archived.
type Handler interface { type Handler interface {
ProcessTask(context.Context, *Task) error ProcessTask(context.Context, *Task) error
} }
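A handler sketch showing both escape hatches; send, errPermanent, and errDuplicate are hypothetical helpers, not part of asynq:

    func handleEmail(ctx context.Context, t *asynq.Task) error {
        err := send(ctx, t) // hypothetical delivery helper
        switch {
        case errors.Is(err, errPermanent):
            return fmt.Errorf("permanent failure: %w", asynq.SkipRetry) // archive immediately
        case errors.Is(err, errDuplicate):
            return fmt.Errorf("duplicate delivery: %w", asynq.RevokeTask) // no retry, no archive
        default:
            return err // nil on success; other errors follow the normal retry path
        }
    }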
@@ -481,17 +682,11 @@ func (srv *Server) Start(handler Handler) error {
if handler == nil { if handler == nil {
return fmt.Errorf("asynq: server cannot run with nil handler") return fmt.Errorf("asynq: server cannot run with nil handler")
} }
switch srv.state.Get() {
case base.StateActive:
return fmt.Errorf("asynq: the server is already running")
case base.StateStopped:
return fmt.Errorf("asynq: the server is in the stopped state. Waiting for shutdown.")
case base.StateClosed:
return ErrServerClosed
}
srv.state.Set(base.StateActive)
srv.processor.handler = handler srv.processor.handler = handler
if err := srv.start(); err != nil {
return err
}
srv.logger.Info("Starting processing") srv.logger.Info("Starting processing")
srv.heartbeater.start(&srv.wg) srv.heartbeater.start(&srv.wg)
@@ -502,6 +697,24 @@ func (srv *Server) Start(handler Handler) error {
srv.forwarder.start(&srv.wg) srv.forwarder.start(&srv.wg)
srv.processor.start(&srv.wg) srv.processor.start(&srv.wg)
srv.janitor.start(&srv.wg) srv.janitor.start(&srv.wg)
srv.aggregator.start(&srv.wg)
return nil
}
// Checks server state and returns an error if pre-condition is not met.
// Otherwise it sets the server state to active.
func (srv *Server) start() error {
srv.state.mu.Lock()
defer srv.state.mu.Unlock()
switch srv.state.value {
case srvStateActive:
return fmt.Errorf("asynq: the server is already running")
case srvStateStopped:
return fmt.Errorf("asynq: the server is in the stopped state. Waiting for shutdown.")
case srvStateClosed:
return ErrServerClosed
}
srv.state.value = srvStateActive
return nil return nil
} }
@@ -510,11 +723,14 @@ func (srv *Server) Start(handler Handler) error {
// active workers to finish processing tasks for duration specified in Config.ShutdownTimeout. // active workers to finish processing tasks for duration specified in Config.ShutdownTimeout.
// If worker didn't finish processing a task during the timeout, the task will be pushed back to Redis. // If worker didn't finish processing a task during the timeout, the task will be pushed back to Redis.
func (srv *Server) Shutdown() { func (srv *Server) Shutdown() {
switch srv.state.Get() { srv.state.mu.Lock()
case base.StateNew, base.StateClosed: if srv.state.value == srvStateNew || srv.state.value == srvStateClosed {
srv.state.mu.Unlock()
// server is not running, do nothing and return. // server is not running, do nothing and return.
return return
} }
srv.state.value = srvStateClosed
srv.state.mu.Unlock()
srv.logger.Info("Starting graceful shutdown") srv.logger.Info("Starting graceful shutdown")
// Note: The order of shutdown is important. // Note: The order of shutdown is important.
@@ -527,14 +743,14 @@ func (srv *Server) Shutdown() {
srv.syncer.shutdown() srv.syncer.shutdown()
srv.subscriber.shutdown() srv.subscriber.shutdown()
srv.janitor.shutdown() srv.janitor.shutdown()
srv.aggregator.shutdown()
srv.healthchecker.shutdown() srv.healthchecker.shutdown()
srv.heartbeater.shutdown() srv.heartbeater.shutdown()
srv.wg.Wait() srv.wg.Wait()
srv.broker.Close() if !srv.sharedConnection {
srv.state.Set(base.StateClosed) srv.broker.Close()
}
srv.logger.Info("Exiting") srv.logger.Info("Exiting")
} }
@@ -544,8 +760,29 @@ func (srv *Server) Shutdown() {
// //
// Stop does not shutdown the server, make sure to call Shutdown before exit. // Stop does not shutdown the server, make sure to call Shutdown before exit.
func (srv *Server) Stop() { func (srv *Server) Stop() {
srv.state.mu.Lock()
if srv.state.value != srvStateActive {
// Invalid call to Stop, server can only go from Active state to Stopped state.
srv.state.mu.Unlock()
return
}
srv.state.value = srvStateStopped
srv.state.mu.Unlock()
srv.logger.Info("Stopping processor") srv.logger.Info("Stopping processor")
srv.processor.stop() srv.processor.stop()
srv.state.Set(base.StateStopped)
srv.logger.Info("Processor stopped") srv.logger.Info("Processor stopped")
} }
// Ping performs a ping against the redis connection.
//
// This is an alternative to the HealthCheckFunc available in the Config object.
func (srv *Server) Ping() error {
srv.state.mu.Lock()
defer srv.state.mu.Unlock()
if srv.state.value == srvStateClosed {
return nil
}
return srv.broker.Ping()
}
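One way this might be wired up, assuming srv is the *asynq.Server above (the HTTP route is an assumption, not part of asynq):

    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        if err := srv.Ping(); err != nil { // also returns nil once the server is closed
            http.Error(w, err.Error(), http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
    })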


@@ -11,25 +11,15 @@ import (
"testing" "testing"
"time" "time"
"github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/hibiken/asynq/internal/testbroker" "github.com/hibiken/asynq/internal/testbroker"
"github.com/hibiken/asynq/internal/testutil"
"github.com/redis/go-redis/v9"
"go.uber.org/goleak" "go.uber.org/goleak"
) )
func TestServer(t *testing.T) { func testServer(t *testing.T, c *Client, srv *Server) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v8/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
c := NewClient(redisConnOpt)
defer c.Close()
srv := NewServer(redisConnOpt, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
// no-op handler // no-op handler
h := func(ctx context.Context, task *Task) error { h := func(ctx context.Context, task *Task) error {
return nil return nil
@@ -40,12 +30,12 @@ func TestServer(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
_, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 123}))) _, err = c.Enqueue(NewTask("send_email", testutil.JSON(map[string]interface{}{"recipient_id": 123})))
if err != nil { if err != nil {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
_, err = c.Enqueue(NewTask("send_email", asynqtest.JSON(map[string]interface{}{"recipient_id": 456})), ProcessIn(1*time.Hour)) _, err = c.Enqueue(NewTask("send_email", testutil.JSON(map[string]interface{}{"recipient_id": 456})), ProcessIn(1*time.Hour))
if err != nil { if err != nil {
t.Errorf("could not enqueue a task: %v", err) t.Errorf("could not enqueue a task: %v", err)
} }
@@ -53,18 +43,55 @@ func TestServer(t *testing.T) {
srv.Shutdown() srv.Shutdown()
} }
func TestServer(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNone(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
c := NewClient(redisConnOpt)
defer c.Close()
srv := NewServer(redisConnOpt, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
testServer(t, c, srv)
}
func TestServerFromRedisClient(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNone(t, ignoreOpt)
redisConnOpt := getRedisConnOpt(t)
redisClient := redisConnOpt.MakeRedisClient().(redis.UniversalClient)
c := NewClientFromRedisClient(redisClient)
srv := NewServerFromRedisClient(redisClient, Config{
Concurrency: 10,
LogLevel: testLogLevel,
})
testServer(t, c, srv)
err := c.Close()
if err == nil {
t.Error("client.Close() should have failed because of a shared client but it didn't")
}
}
func TestServerRun(t *testing.T) { func TestServerRun(t *testing.T) {
// https://github.com/go-redis/redis/issues/1029 // https://github.com/go-redis/redis/issues/1029
ignoreOpt := goleak.IgnoreTopFunction("github.com/go-redis/redis/v8/internal/pool.(*ConnPool).reaper") ignoreOpt := goleak.IgnoreTopFunction("github.com/redis/go-redis/v9/internal/pool.(*ConnPool).reaper")
defer goleak.VerifyNoLeaks(t, ignoreOpt) defer goleak.VerifyNone(t, ignoreOpt)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
done := make(chan struct{}) done := make(chan struct{})
// Make sure server exits when receiving TERM signal. // Make sure server exits when receiving TERM signal.
go func() { go func() {
time.Sleep(2 * time.Second) time.Sleep(2 * time.Second)
syscall.Kill(syscall.Getpid(), syscall.SIGTERM) _ = syscall.Kill(syscall.Getpid(), syscall.SIGTERM)
done <- struct{}{} done <- struct{}{}
}() }()
@@ -83,7 +110,7 @@ func TestServerRun(t *testing.T) {
} }
func TestServerErrServerClosed(t *testing.T) { func TestServerErrServerClosed(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
handler := NewServeMux() handler := NewServeMux()
if err := srv.Start(handler); err != nil { if err := srv.Start(handler); err != nil {
t.Fatal(err) t.Fatal(err)
@@ -96,7 +123,7 @@ func TestServerErrServerClosed(t *testing.T) {
} }
func TestServerErrNilHandler(t *testing.T) { func TestServerErrNilHandler(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
err := srv.Start(nil) err := srv.Start(nil)
if err == nil { if err == nil {
t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error") t.Error("Starting server with nil handler: (*Server).Start(nil) did not return error")
@@ -105,7 +132,7 @@ func TestServerErrNilHandler(t *testing.T) {
} }
func TestServerErrServerRunning(t *testing.T) { func TestServerErrServerRunning(t *testing.T) {
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
handler := NewServeMux() handler := NewServeMux()
if err := srv.Start(handler); err != nil { if err := srv.Start(handler); err != nil {
t.Fatal(err) t.Fatal(err)
@@ -126,7 +153,7 @@ func TestServerWithRedisDown(t *testing.T) {
}() }()
r := rdb.NewRDB(setup(t)) r := rdb.NewRDB(setup(t))
testBroker := testbroker.NewTestBroker(r) testBroker := testbroker.NewTestBroker(r)
srv := NewServer(RedisClientOpt{Addr: ":6379"}, Config{LogLevel: testLogLevel}) srv := NewServer(getRedisConnOpt(t), Config{LogLevel: testLogLevel})
srv.broker = testBroker srv.broker = testBroker
srv.forwarder.broker = testBroker srv.forwarder.broker = testBroker
srv.heartbeater.broker = testBroker srv.heartbeater.broker = testBroker


@@ -1,4 +1,4 @@
// +build linux bsd darwin //go:build linux || dragonfly || freebsd || netbsd || openbsd || darwin
package asynq package asynq


@@ -1,4 +1,4 @@
// +build windows //go:build windows
package asynq package asynq


@@ -8,7 +8,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/go-redis/redis/v8" "github.com/redis/go-redis/v9"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/log" "github.com/hibiken/asynq/internal/log"
) )


@@ -5,14 +5,15 @@
package asynq package asynq
import ( import (
"context"
"fmt" "fmt"
"sync" "sync"
"testing" "testing"
"time" "time"
h "github.com/hibiken/asynq/internal/asynqtest"
"github.com/hibiken/asynq/internal/base" "github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
h "github.com/hibiken/asynq/internal/testutil"
) )
func TestSyncer(t *testing.T) { func TestSyncer(t *testing.T) {
@@ -41,7 +42,7 @@ func TestSyncer(t *testing.T) {
m := msg m := msg
syncRequestCh <- &syncRequest{ syncRequestCh <- &syncRequest{
fn: func() error { fn: func() error {
return rdbClient.Done(m) return rdbClient.Done(context.Background(), m)
}, },
deadline: time.Now().Add(5 * time.Minute), deadline: time.Now().Add(5 * time.Minute),
} }


@@ -12,7 +12,7 @@ Asynq CLI is a command line tool to monitor the queues and tasks managed by `asy
In order to use the tool, compile it using the following command: In order to use the tool, compile it using the following command:
go get github.com/hibiken/asynq/tools/asynq go install github.com/hibiken/asynq/tools/asynq@latest
This will create the asynq executable under your `$GOPATH/bin` directory. This will create the asynq executable under your `$GOPATH/bin` directory.
@@ -22,9 +22,10 @@ This will create the asynq executable under your `$GOPATH/bin` directory.
To view details on any command, use `asynq help <command> <subcommand>`. To view details on any command, use `asynq help <command> <subcommand>`.
- `asynq dash`
- `asynq stats` - `asynq stats`
- `asynq queue [ls inspect history rm pause unpause]` - `asynq queue [ls inspect history rm pause unpause]`
- `asynq task [ls cancel delete archive run delete-all archive-all run-all]` - `asynq task [ls cancel delete archive run deleteall archiveall runall]`
- `asynq server [ls]` - `asynq server [ls]`
### Global flags ### Global flags


@@ -11,6 +11,7 @@ import (
"sort" "sort"
"time" "time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@@ -24,21 +25,30 @@ func init() {
} }
var cronCmd = &cobra.Command{ var cronCmd = &cobra.Command{
Use: "cron", Use: "cron <command> [flags]",
Short: "Manage cron", Short: "Manage cron",
Example: heredoc.Doc(`
$ asynq cron ls
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1`),
} }
var cronListCmd = &cobra.Command{ var cronListCmd = &cobra.Command{
Use: "ls", Use: "list",
Short: "List cron entries", Aliases: []string{"ls"},
Run: cronList, Short: "List cron entries",
Run: cronList,
} }
var cronHistoryCmd = &cobra.Command{ var cronHistoryCmd = &cobra.Command{
Use: "history [ENTRY_ID...]", Use: "history <entry_id> [<entry_id>...]",
Short: "Show history of each cron tasks", Short: "Show history of each cron tasks",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
Run: cronHistory, Run: cronHistory,
Example: heredoc.Doc(`
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 bf6a8594-cd03-4968-b36a-8572c5e160dd
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 --size=100
$ asynq cron history 7837f142-6337-4217-9276-8f27281b67d1 --page=2`),
} }
func cronList(cmd *cobra.Command, args []string) { func cronList(cmd *cobra.Command, args []string) {

tools/asynq/cmd/dash.go (new file)

@@ -0,0 +1,45 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/hibiken/asynq/tools/asynq/cmd/dash"
"github.com/spf13/cobra"
)
var (
flagPollInterval = 8 * time.Second
)
func init() {
rootCmd.AddCommand(dashCmd)
dashCmd.Flags().DurationVar(&flagPollInterval, "refresh", 8*time.Second, "Interval between data refresh (default: 8s, min allowed: 1s)")
}
var dashCmd = &cobra.Command{
Use: "dash",
Short: "View dashboard",
Long: heredoc.Doc(`
Display interactive dashboard.`),
Args: cobra.NoArgs,
Example: heredoc.Doc(`
$ asynq dash
$ asynq dash --refresh=3s`),
Run: func(cmd *cobra.Command, args []string) {
if flagPollInterval < 1*time.Second {
fmt.Println("error: --refresh cannot be less than 1s")
os.Exit(1)
}
dash.Run(dash.Options{
PollInterval: flagPollInterval,
RedisConnOpt: getRedisConnOpt(),
})
},
}


@@ -0,0 +1,220 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"errors"
"fmt"
"os"
"strings"
"time"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
// viewType is an enum for dashboard views.
type viewType int
const (
viewTypeQueues viewType = iota
viewTypeQueueDetails
viewTypeHelp
)
// State holds dashboard state.
type State struct {
queues []*asynq.QueueInfo
tasks []*asynq.TaskInfo
groups []*asynq.GroupInfo
err error
// Note: index zero corresponds to the table header; index=1 corresponds to the first element
queueTableRowIdx int // highlighted row in queue table
taskTableRowIdx int // highlighted row in task table
groupTableRowIdx int // highlighted row in group table
taskState asynq.TaskState // highlighted task state in queue details view
taskID string // selected task ID
selectedQueue *asynq.QueueInfo // queue shown on queue details view
selectedGroup *asynq.GroupInfo
selectedTask *asynq.TaskInfo
pageNum int // pagination page number
view viewType // current view type
prevView viewType // to support "go back"
}
func (s *State) DebugString() string {
var b strings.Builder
b.WriteString(fmt.Sprintf("len(queues)=%d ", len(s.queues)))
b.WriteString(fmt.Sprintf("len(tasks)=%d ", len(s.tasks)))
b.WriteString(fmt.Sprintf("len(groups)=%d ", len(s.groups)))
b.WriteString(fmt.Sprintf("err=%v ", s.err))
if s.taskState != 0 {
b.WriteString(fmt.Sprintf("taskState=%s ", s.taskState.String()))
} else {
b.WriteString("taskState=0 ")
}
b.WriteString(fmt.Sprintf("taskID=%s ", s.taskID))
b.WriteString(fmt.Sprintf("queueTableRowIdx=%d ", s.queueTableRowIdx))
b.WriteString(fmt.Sprintf("taskTableRowIdx=%d ", s.taskTableRowIdx))
b.WriteString(fmt.Sprintf("groupTableRowIdx=%d ", s.groupTableRowIdx))
if s.selectedQueue != nil {
b.WriteString(fmt.Sprintf("selectedQueue={Queue:%s} ", s.selectedQueue.Queue))
} else {
b.WriteString("selectedQueue=nil ")
}
if s.selectedGroup != nil {
b.WriteString(fmt.Sprintf("selectedGroup={Group:%s} ", s.selectedGroup.Group))
} else {
b.WriteString("selectedGroup=nil ")
}
if s.selectedTask != nil {
b.WriteString(fmt.Sprintf("selectedTask={ID:%s} ", s.selectedTask.ID))
} else {
b.WriteString("selectedTask=nil ")
}
b.WriteString(fmt.Sprintf("pageNum=%d", s.pageNum))
return b.String()
}
type Options struct {
DebugMode bool
PollInterval time.Duration
RedisConnOpt asynq.RedisConnOpt
}
func Run(opts Options) {
s, err := tcell.NewScreen()
if err != nil {
fmt.Printf("failed to create a screen: %v\n", err)
os.Exit(1)
}
if err := s.Init(); err != nil {
fmt.Printf("failed to initialize screen: %v\n", err)
os.Exit(1)
}
s.SetStyle(baseStyle) // set default text style
var (
state = State{} // confined in this goroutine only; DO NOT SHARE
inspector = asynq.NewInspector(opts.RedisConnOpt)
ticker = time.NewTicker(opts.PollInterval)
eventCh = make(chan tcell.Event)
done = make(chan struct{})
// channels to send/receive data fetched asynchronously
errorCh = make(chan error)
queueCh = make(chan *asynq.QueueInfo)
taskCh = make(chan *asynq.TaskInfo)
queuesCh = make(chan []*asynq.QueueInfo)
groupsCh = make(chan []*asynq.GroupInfo)
tasksCh = make(chan []*asynq.TaskInfo)
)
defer ticker.Stop()
f := dataFetcher{
inspector,
opts,
s,
errorCh,
queueCh,
taskCh,
queuesCh,
groupsCh,
tasksCh,
}
d := dashDrawer{
s,
opts,
}
h := keyEventHandler{
s: s,
fetcher: &f,
drawer: &d,
state: &state,
done: done,
ticker: ticker,
pollInterval: opts.PollInterval,
}
go fetchQueues(inspector, queuesCh, errorCh, opts)
go s.ChannelEvents(eventCh, done) // TODO: Double check that we are not leaking a goroutine with this one.
d.Draw(&state) // draw initial screen
for {
// Update screen
s.Show()
select {
case ev := <-eventCh:
// Process event
switch ev := ev.(type) {
case *tcell.EventResize:
s.Sync()
case *tcell.EventKey:
h.HandleKeyEvent(ev)
}
case <-ticker.C:
f.Fetch(&state)
case queues := <-queuesCh:
state.queues = queues
state.err = nil
if len(queues) < state.queueTableRowIdx {
state.queueTableRowIdx = len(queues)
}
d.Draw(&state)
case q := <-queueCh:
state.selectedQueue = q
state.err = nil
d.Draw(&state)
case groups := <-groupsCh:
state.groups = groups
state.err = nil
if len(groups) < state.groupTableRowIdx {
state.groupTableRowIdx = len(groups)
}
d.Draw(&state)
case tasks := <-tasksCh:
state.tasks = tasks
state.err = nil
if len(tasks) < state.taskTableRowIdx {
state.taskTableRowIdx = len(tasks)
}
d.Draw(&state)
case t := <-taskCh:
state.selectedTask = t
state.err = nil
d.Draw(&state)
case err := <-errorCh:
if errors.Is(err, asynq.ErrTaskNotFound) {
state.selectedTask = nil
} else {
state.err = err
}
d.Draw(&state)
}
}
}


@@ -0,0 +1,724 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"fmt"
"math"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
"github.com/mattn/go-runewidth"
)
var (
baseStyle = tcell.StyleDefault.Background(tcell.ColorReset).Foreground(tcell.ColorReset)
labelStyle = baseStyle.Foreground(tcell.ColorLightGray)
// styles for bar graph
activeStyle = baseStyle.Foreground(tcell.ColorBlue)
pendingStyle = baseStyle.Foreground(tcell.ColorGreen)
aggregatingStyle = baseStyle.Foreground(tcell.ColorLightGreen)
scheduledStyle = baseStyle.Foreground(tcell.ColorYellow)
retryStyle = baseStyle.Foreground(tcell.ColorPink)
archivedStyle = baseStyle.Foreground(tcell.ColorPurple)
completedStyle = baseStyle.Foreground(tcell.ColorDarkGreen)
)
// drawer draws UI with the given state.
type drawer interface {
Draw(state *State)
}
type dashDrawer struct {
s tcell.Screen
opts Options
}
func (dd *dashDrawer) Draw(state *State) {
s, opts := dd.s, dd.opts
s.Clear()
// Simulate data update on every render
d := NewScreenDrawer(s)
switch state.view {
case viewTypeQueues:
d.Println("=== Queues ===", baseStyle.Bold(true))
d.NL()
drawQueueSizeGraphs(d, state)
d.NL()
drawQueueTable(d, baseStyle, state)
case viewTypeQueueDetails:
d.Println("=== Queue Summary ===", baseStyle.Bold(true))
d.NL()
drawQueueSummary(d, state)
d.NL()
d.NL()
d.Println("=== Tasks ===", baseStyle.Bold(true))
d.NL()
drawTaskStateBreakdown(d, baseStyle, state)
d.NL()
drawTaskTable(d, state)
drawTaskModal(d, state)
case viewTypeHelp:
drawHelp(d)
}
d.GoToBottom()
if opts.DebugMode {
drawDebugInfo(d, state)
} else {
drawFooter(d, state)
}
}
func drawQueueSizeGraphs(d *ScreenDrawer, state *State) {
var qnames []string
var qsizes []string // queue size in strings
maxSize := 1 // not zero to avoid division by zero
for _, q := range state.queues {
qnames = append(qnames, q.Queue)
qsizes = append(qsizes, strconv.Itoa(q.Size))
if q.Size > maxSize {
maxSize = q.Size
}
}
qnameWidth := maxwidth(qnames)
qsizeWidth := maxwidth(qsizes)
// Calculate the multiplier to scale the graph
screenWidth, _ := d.Screen().Size()
graphMaxWidth := screenWidth - (qnameWidth + qsizeWidth + 3) // <qname> |<graph> <size>
multiplier := 1.0
if graphMaxWidth < maxSize {
multiplier = float64(graphMaxWidth) / float64(maxSize)
}
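// e.g. with graphMaxWidth=50 and maxSize=200, multiplier is 0.25, so a queue
// with 80 active tasks draws 20 tick characters (each state's run is floored).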
const tick = '▇'
for _, q := range state.queues {
d.Print(q.Queue, baseStyle)
d.Print(strings.Repeat(" ", qnameWidth-runewidth.StringWidth(q.Queue)+1), baseStyle) // padding between qname and graph
d.Print("|", baseStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Active)*multiplier))), activeStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Pending)*multiplier))), pendingStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Aggregating)*multiplier))), aggregatingStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Scheduled)*multiplier))), scheduledStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Retry)*multiplier))), retryStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Archived)*multiplier))), archivedStyle)
d.Print(strings.Repeat(string(tick), int(math.Floor(float64(q.Completed)*multiplier))), completedStyle)
d.Print(fmt.Sprintf(" %d", q.Size), baseStyle)
d.NL()
}
d.NL()
d.Print("active=", baseStyle)
d.Print(string(tick), activeStyle)
d.Print(" pending=", baseStyle)
d.Print(string(tick), pendingStyle)
d.Print(" aggregating=", baseStyle)
d.Print(string(tick), aggregatingStyle)
d.Print(" scheduled=", baseStyle)
d.Print(string(tick), scheduledStyle)
d.Print(" retry=", baseStyle)
d.Print(string(tick), retryStyle)
d.Print(" archived=", baseStyle)
d.Print(string(tick), archivedStyle)
d.Print(" completed=", baseStyle)
d.Print(string(tick), completedStyle)
d.NL()
}
func drawFooter(d *ScreenDrawer, state *State) {
if state.err != nil {
style := baseStyle.Background(tcell.ColorDarkRed)
d.Print(state.err.Error(), style)
d.FillLine(' ', style)
return
}
style := baseStyle.Background(tcell.ColorDarkSlateGray).Foreground(tcell.ColorWhite)
switch state.view {
case viewTypeHelp:
d.Print("<Esc>: GoBack", style)
default:
d.Print("<?>: Help <Ctrl+C>: Exit ", style)
}
d.FillLine(' ', style)
}
// returns the maximum width from the given list of names
func maxwidth(names []string) int {
max := 0
for _, s := range names {
if w := runewidth.StringWidth(s); w > max {
max = w
}
}
return max
}
// rpad adds padding to the right of a string.
func rpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%-%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
// lpad adds padding to the left of a string.
func lpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
// byteCount converts the given bytes into human readable string
func byteCount(b int64) string {
const unit = 1000
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "kMGTPE"[exp])
}
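// Illustration (inputs hypothetical): byteCount scales in SI units (base 1000):
//   byteCount(999)        -> "999 B"
//   byteCount(1234567)    -> "1.2 MB"
//   byteCount(5000000000) -> "5.0 GB"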
var queueColumnConfigs = []*columnConfig[*asynq.QueueInfo]{
{"Queue", alignLeft, func(q *asynq.QueueInfo) string { return q.Queue }},
{"State", alignLeft, func(q *asynq.QueueInfo) string { return formatQueueState(q) }},
{"Size", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Size) }},
{"Latency", alignRight, func(q *asynq.QueueInfo) string { return q.Latency.Round(time.Second).String() }},
{"MemoryUsage", alignRight, func(q *asynq.QueueInfo) string { return byteCount(q.MemoryUsage) }},
{"Processed", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Processed) }},
{"Failed", alignRight, func(q *asynq.QueueInfo) string { return strconv.Itoa(q.Failed) }},
{"ErrorRate", alignRight, func(q *asynq.QueueInfo) string { return formatErrorRate(q.Processed, q.Failed) }},
}
func formatQueueState(q *asynq.QueueInfo) string {
if q.Paused {
return "PAUSED"
}
return "RUN"
}
func formatErrorRate(processed, failed int) string {
if processed == 0 {
return "-"
}
return fmt.Sprintf("%.2f", float64(failed)/float64(processed))
}
func formatNextProcessTime(t time.Time) string {
now := time.Now()
if t.Before(now) {
return "now"
}
return fmt.Sprintf("in %v", (t.Sub(now).Round(time.Second)))
}
func formatPastTime(t time.Time) string {
now := time.Now()
if t.After(now) || t.Equal(now) {
return "just now"
}
return fmt.Sprintf("%v ago", time.Since(t).Round(time.Second))
}
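// Illustration (times hypothetical): with now = 12:00:00,
//   formatNextProcessTime(now.Add(30*time.Second)) -> "in 30s"
//   formatPastTime(now.Add(-2*time.Minute))        -> "2m0s ago"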
func drawQueueTable(d *ScreenDrawer, style tcell.Style, state *State) {
drawTable(d, style, queueColumnConfigs, state.queues, state.queueTableRowIdx-1)
}
func drawQueueSummary(d *ScreenDrawer, state *State) {
q := state.selectedQueue
if q == nil {
d.Println("ERROR: Press q to go back", baseStyle)
return
}
d.Print("Name ", labelStyle)
d.Println(q.Queue, baseStyle)
d.Print("Size ", labelStyle)
d.Println(strconv.Itoa(q.Size), baseStyle)
d.Print("Latency ", labelStyle)
d.Println(q.Latency.Round(time.Second).String(), baseStyle)
d.Print("MemUsage ", labelStyle)
d.Println(byteCount(q.MemoryUsage), baseStyle)
}
// Returns the max number of groups that can be displayed.
func groupPageSize(s tcell.Screen) int {
_, h := s.Size()
return h - 16 // height - (# of rows used)
}
// Returns the number of tasks to fetch.
func taskPageSize(s tcell.Screen) int {
_, h := s.Size()
return h - 15 // height - (# of rows used)
}
func shouldShowGroupTable(state *State) bool {
return state.taskState == asynq.TaskStateAggregating && state.selectedGroup == nil
}
func getTaskTableColumnConfig(taskState asynq.TaskState) []*columnConfig[*asynq.TaskInfo] {
switch taskState {
case asynq.TaskStateActive:
return activeTaskTableColumns
case asynq.TaskStatePending:
return pendingTaskTableColumns
case asynq.TaskStateAggregating:
return aggregatingTaskTableColumns
case asynq.TaskStateScheduled:
return scheduledTaskTableColumns
case asynq.TaskStateRetry:
return retryTaskTableColumns
case asynq.TaskStateArchived:
return archivedTaskTableColumns
case asynq.TaskStateCompleted:
return completedTaskTableColumns
}
panic("unknown task state")
}
var activeTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retried", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.Retried) }},
{"Max Retry", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.MaxRetry) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var pendingTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retried", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.Retried) }},
{"Max Retry", alignRight, func(t *asynq.TaskInfo) string { return strconv.Itoa(t.MaxRetry) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var aggregatingTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
{"Group", alignLeft, func(t *asynq.TaskInfo) string { return t.Group }},
}
var scheduledTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Next Process Time", alignLeft, func(t *asynq.TaskInfo) string {
return formatNextProcessTime(t.NextProcessAt)
}},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var retryTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retry", alignRight, func(t *asynq.TaskInfo) string { return fmt.Sprintf("%d/%d", t.Retried, t.MaxRetry) }},
{"Last Failure", alignLeft, func(t *asynq.TaskInfo) string { return t.LastErr }},
{"Last Failure Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.LastFailedAt) }},
{"Next Process Time", alignLeft, func(t *asynq.TaskInfo) string {
return formatNextProcessTime(t.NextProcessAt)
}},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var archivedTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Retry", alignRight, func(t *asynq.TaskInfo) string { return fmt.Sprintf("%d/%d", t.Retried, t.MaxRetry) }},
{"Last Failure", alignLeft, func(t *asynq.TaskInfo) string { return t.LastErr }},
{"Last Failure Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.LastFailedAt) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
}
var completedTaskTableColumns = []*columnConfig[*asynq.TaskInfo]{
{"ID", alignLeft, func(t *asynq.TaskInfo) string { return t.ID }},
{"Type", alignLeft, func(t *asynq.TaskInfo) string { return t.Type }},
{"Completion Time", alignLeft, func(t *asynq.TaskInfo) string { return formatPastTime(t.CompletedAt) }},
{"Payload", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Payload) }},
{"Result", alignLeft, func(t *asynq.TaskInfo) string { return formatByteSlice(t.Result) }},
}
func drawTaskTable(d *ScreenDrawer, state *State) {
if shouldShowGroupTable(state) {
drawGroupTable(d, state)
return
}
if len(state.tasks) == 0 {
return // print nothing
}
drawTable(d, baseStyle, getTaskTableColumnConfig(state.taskState), state.tasks, state.taskTableRowIdx-1)
// Pagination
pageSize := taskPageSize(d.Screen())
totalCount := getTaskCount(state.selectedQueue, state.taskState)
if state.taskState == asynq.TaskStateAggregating {
// aggregating tasks are scoped to each group when shown in the table.
totalCount = state.selectedGroup.Size
}
if pageSize < totalCount {
start := (state.pageNum-1)*pageSize + 1
end := start + len(state.tasks) - 1
paginationStyle := baseStyle.Foreground(tcell.ColorLightGray)
d.Print(fmt.Sprintf("Showing %d-%d out of %d", start, end, totalCount), paginationStyle)
if isNextTaskPageAvailable(d.Screen(), state) {
d.Print(" n=NextPage", paginationStyle)
}
if state.pageNum > 1 {
d.Print(" p=PrevPage", paginationStyle)
}
d.FillLine(' ', paginationStyle)
}
}
func isNextTaskPageAvailable(s tcell.Screen, state *State) bool {
totalCount := getTaskCount(state.selectedQueue, state.taskState)
end := (state.pageNum-1)*taskPageSize(s) + len(state.tasks)
return end < totalCount
}
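// Illustration (numbers hypothetical): with a page size of 20, pageNum=2 and
// 35 matching tasks, the second page holds 15 tasks, so end = (2-1)*20+15 = 35,
// which is not less than totalCount, and no further page is offered.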
func drawGroupTable(d *ScreenDrawer, state *State) {
if len(state.groups) == 0 {
return // print nothing
}
d.Println("<<< Select group >>>", baseStyle)
colConfigs := []*columnConfig[*asynq.GroupInfo]{
{"Name", alignLeft, func(g *asynq.GroupInfo) string { return g.Group }},
{"Size", alignRight, func(g *asynq.GroupInfo) string { return strconv.Itoa(g.Size) }},
}
// pagination
pageSize := groupPageSize(d.Screen())
total := len(state.groups)
start := (state.pageNum - 1) * pageSize
end := min(start+pageSize, total)
drawTable(d, baseStyle, colConfigs, state.groups[start:end], state.groupTableRowIdx-1)
if pageSize < total {
d.Print(fmt.Sprintf("Showing %d-%d out of %d", start+1, end, total), labelStyle)
if end < total {
d.Print(" n=NextPage", labelStyle)
}
if start > 0 {
d.Print(" p=PrevPage", labelStyle)
}
}
d.FillLine(' ', labelStyle)
}
type number interface {
int | int64 | float64
}
// min returns the smaller of x and y. If x == y, it returns x.
func min[V number](x, y V) V {
if x > y {
return y
}
return x
}
// Define the order of states to show
var taskStates = []asynq.TaskState{
asynq.TaskStateActive,
asynq.TaskStatePending,
asynq.TaskStateAggregating,
asynq.TaskStateScheduled,
asynq.TaskStateRetry,
asynq.TaskStateArchived,
asynq.TaskStateCompleted,
}
func nextTaskState(current asynq.TaskState) asynq.TaskState {
for i, ts := range taskStates {
if current == ts {
if i == len(taskStates)-1 {
return taskStates[0]
} else {
return taskStates[i+1]
}
}
}
panic("unknown task state")
}
func prevTaskState(current asynq.TaskState) asynq.TaskState {
for i, ts := range taskStates {
if current == ts {
if i == 0 {
return taskStates[len(taskStates)-1]
} else {
return taskStates[i-1]
}
}
}
panic("unknown task state")
}
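// Illustration: the order is cyclic, so nextTaskState(asynq.TaskStateCompleted)
// wraps to asynq.TaskStateActive and prevTaskState(asynq.TaskStateActive)
// wraps to asynq.TaskStateCompleted.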
func getTaskCount(queue *asynq.QueueInfo, taskState asynq.TaskState) int {
switch taskState {
case asynq.TaskStateActive:
return queue.Active
case asynq.TaskStatePending:
return queue.Pending
case asynq.TaskStateAggregating:
return queue.Aggregating
case asynq.TaskStateScheduled:
return queue.Scheduled
case asynq.TaskStateRetry:
return queue.Retry
case asynq.TaskStateArchived:
return queue.Archived
case asynq.TaskStateCompleted:
return queue.Completed
}
panic("unkonwn task state")
}
func drawTaskStateBreakdown(d *ScreenDrawer, style tcell.Style, state *State) {
const pad = " " // padding between states
for _, ts := range taskStates {
s := style
if state.taskState == ts {
s = s.Bold(true).Underline(true)
}
d.Print(fmt.Sprintf("%s:%d", strings.Title(ts.String()), getTaskCount(state.selectedQueue, ts)), s)
d.Print(pad, style)
}
d.NL()
}
func drawTaskModal(d *ScreenDrawer, state *State) {
if state.taskID == "" {
return
}
task := state.selectedTask
if task == nil {
// task no longer found
fns := []func(d *modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Task Info ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print(fmt.Sprintf("Task %q no longer exists", state.taskID), baseStyle)
},
}
withModal(d, fns)
return
}
fns := []func(d *modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Task Info ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print("ID: ", labelStyle)
d.Print(task.ID, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Type: ", labelStyle)
d.Print(task.Type, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("State: ", labelStyle)
d.Print(task.State.String(), baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Queue: ", labelStyle)
d.Print(task.Queue, baseStyle)
},
func(d *modalRowDrawer) {
d.Print("Retry: ", labelStyle)
d.Print(fmt.Sprintf("%d/%d", task.Retried, task.MaxRetry), baseStyle)
},
}
if task.LastErr != "" {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Last Failure: ", labelStyle)
d.Print(task.LastErr, baseStyle)
})
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Last Failure Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.LastFailedAt, formatPastTime(task.LastFailedAt)), baseStyle)
})
}
if !task.NextProcessAt.IsZero() {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Next Process Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.NextProcessAt, formatNextProcessTime(task.NextProcessAt)), baseStyle)
})
}
if !task.CompletedAt.IsZero() {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Completion Time: ", labelStyle)
d.Print(fmt.Sprintf("%v (%s)", task.CompletedAt, formatPastTime(task.CompletedAt)), baseStyle)
})
}
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Payload: ", labelStyle)
d.Print(formatByteSlice(task.Payload), baseStyle)
})
if task.Result != nil {
fns = append(fns, func(d *modalRowDrawer) {
d.Print("Result: ", labelStyle)
d.Print(formatByteSlice(task.Result), baseStyle)
})
}
withModal(d, fns)
}
// Reports whether the given byte slice is printable (i.e. human readable)
func isPrintable(data []byte) bool {
if !utf8.Valid(data) {
return false
}
isAllSpace := true
for _, r := range string(data) {
if !unicode.IsGraphic(r) {
return false
}
if !unicode.IsSpace(r) {
isAllSpace = false
}
}
return !isAllSpace
}
func formatByteSlice(data []byte) string {
if data == nil {
return "<nil>"
}
if !isPrintable(data) {
return "<non-printable>"
}
return strings.ReplaceAll(string(data), "\n", " ")
}
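// Illustration:
//   formatByteSlice(nil)                    -> "<nil>"
//   formatByteSlice([]byte("hello\nworld")) -> "hello world"
//   formatByteSlice([]byte{0xff, 0xfe})     -> "<non-printable>"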
type modalRowDrawer struct {
d *ScreenDrawer
width int // current width occupied by content
maxWidth int
}
// Note: s should not include newline
func (d *modalRowDrawer) Print(s string, style tcell.Style) {
if d.width >= d.maxWidth {
return // no longer write to this row
}
if d.width+runewidth.StringWidth(s) > d.maxWidth {
s = truncate(s, d.maxWidth-d.width)
}
d.d.Print(s, style)
}
// withModal draws a modal with the given functions row by row.
func withModal(d *ScreenDrawer, rowPrintFns []func(d *modalRowDrawer)) {
w, h := d.Screen().Size()
var (
modalWidth = int(math.Floor(float64(w) * 0.6))
modalHeight = int(math.Floor(float64(h) * 0.6))
rowOffset = int(math.Floor(float64(h) * 0.2)) // 20% from the top
colOffset = int(math.Floor(float64(w) * 0.2)) // 20% from the left
)
if modalHeight < 3 {
return // no content can be shown
}
d.Goto(colOffset, rowOffset)
d.Print(string(tcell.RuneULCorner), baseStyle)
d.Print(strings.Repeat(string(tcell.RuneHLine), modalWidth-2), baseStyle)
d.Print(string(tcell.RuneURCorner), baseStyle)
d.NL()
rowDrawer := modalRowDrawer{
d: d,
width: 0,
maxWidth: modalWidth - 4, /* borders + paddings */
}
for i := 1; i < modalHeight-1; i++ {
d.Goto(colOffset, rowOffset+i)
d.Print(fmt.Sprintf("%c ", tcell.RuneVLine), baseStyle)
if i <= len(rowPrintFns) {
rowPrintFns[i-1](&rowDrawer)
}
d.FillUntil(' ', baseStyle, colOffset+modalWidth-2)
d.Print(fmt.Sprintf(" %c", tcell.RuneVLine), baseStyle)
d.NL()
}
d.Goto(colOffset, rowOffset+modalHeight-1)
d.Print(string(tcell.RuneLLCorner), baseStyle)
d.Print(strings.Repeat(string(tcell.RuneHLine), modalWidth-2), baseStyle)
d.Print(string(tcell.RuneLRCorner), baseStyle)
d.NL()
}
func adjustWidth(s string, width int) string {
sw := runewidth.StringWidth(s)
if sw > width {
return truncate(s, width)
}
var b strings.Builder
b.WriteString(s)
b.WriteString(strings.Repeat(" ", width-sw))
return b.String()
}
// truncates s if s exceeds max length.
func truncate(s string, max int) string {
if runewidth.StringWidth(s) <= max {
return s
}
return string([]rune(s)[:max-1]) + "…"
}
func drawDebugInfo(d *ScreenDrawer, state *State) {
d.Println(state.DebugString(), baseStyle)
}
func drawHelp(d *ScreenDrawer) {
keyStyle := labelStyle.Bold(true)
withModal(d, []func(*modalRowDrawer){
func(d *modalRowDrawer) { d.Print("=== Help ===", baseStyle.Bold(true)) },
func(d *modalRowDrawer) { d.Print("", baseStyle) },
func(d *modalRowDrawer) {
d.Print("<Enter>", keyStyle)
d.Print(" to select", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<Esc>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<q>", keyStyle)
d.Print(" to go back", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<UpArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<k>", keyStyle)
d.Print(" to move up", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<DownArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<j>", keyStyle)
d.Print(" to move down", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<LeftArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<h>", keyStyle)
d.Print(" to move left", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<RightArrow>", keyStyle)
d.Print(" or ", baseStyle)
d.Print("<l>", keyStyle)
d.Print(" to move right", baseStyle)
},
func(d *modalRowDrawer) {
d.Print("<Ctrl+C>", keyStyle)
d.Print(" to quit", baseStyle)
},
})
}


@@ -0,0 +1,33 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import "testing"
func TestTruncate(t *testing.T) {
tests := []struct {
s string
max int
want string
}{
{
s: "hello world!",
max: 15,
want: "hello world!",
},
{
s: "hello world!",
max: 6,
want: "hello…",
},
}
for _, tc := range tests {
got := truncate(tc.s, tc.max)
if tc.want != got {
t.Errorf("truncate(%q, %d) = %q, want %q", tc.s, tc.max, got, tc.want)
}
}
}


@@ -0,0 +1,185 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"sort"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
type fetcher interface {
// Fetch retrieves data required by the given state of the dashboard.
Fetch(state *State)
}
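// Editor's sketch (not part of this change set): the send-only channels below
// are presumably drained by the dashboard's event loop elsewhere in the
// package. A minimal fan-in consumer under that assumption could look like:
//
//	func consumeLoop(state *State, d drawer,
//		errorCh <-chan error,
//		queueCh <-chan *asynq.QueueInfo,
//		queuesCh <-chan []*asynq.QueueInfo,
//		tasksCh <-chan []*asynq.TaskInfo) {
//		for {
//			select {
//			case err := <-errorCh:
//				state.err = err // surfaced in the footer by drawFooter
//			case q := <-queueCh:
//				state.selectedQueue = q
//			case qs := <-queuesCh:
//				state.queues = qs
//			case ts := <-tasksCh:
//				state.tasks = ts
//			}
//			d.Draw(state) // redraw after every update
//		}
//	}
//
// (groupsCh and taskCh would be handled the same way.)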
type dataFetcher struct {
inspector *asynq.Inspector
opts Options
s tcell.Screen
errorCh chan<- error
queueCh chan<- *asynq.QueueInfo
taskCh chan<- *asynq.TaskInfo
queuesCh chan<- []*asynq.QueueInfo
groupsCh chan<- []*asynq.GroupInfo
tasksCh chan<- []*asynq.TaskInfo
}
func (f *dataFetcher) Fetch(state *State) {
switch state.view {
case viewTypeQueues:
f.fetchQueues()
case viewTypeQueueDetails:
if shouldShowGroupTable(state) {
f.fetchGroups(state.selectedQueue.Queue)
} else if state.taskState == asynq.TaskStateAggregating {
f.fetchAggregatingTasks(state.selectedQueue.Queue, state.selectedGroup.Group, taskPageSize(f.s), state.pageNum)
} else {
f.fetchTasks(state.selectedQueue.Queue, state.taskState, taskPageSize(f.s), state.pageNum)
}
// if the task modal is open, additionally fetch the selected task's info
if state.taskID != "" {
f.fetchTaskInfo(state.selectedQueue.Queue, state.taskID)
}
}
}
func (f *dataFetcher) fetchQueues() {
var (
inspector = f.inspector
queuesCh = f.queuesCh
errorCh = f.errorCh
opts = f.opts
)
go fetchQueues(inspector, queuesCh, errorCh, opts)
}
func fetchQueues(i *asynq.Inspector, queuesCh chan<- []*asynq.QueueInfo, errorCh chan<- error, opts Options) {
queues, err := i.Queues()
if err != nil {
errorCh <- err
return
}
sort.Strings(queues)
var res []*asynq.QueueInfo
for _, q := range queues {
info, err := i.GetQueueInfo(q)
if err != nil {
errorCh <- err
return
}
res = append(res, info)
}
queuesCh <- res
}
func fetchQueueInfo(i *asynq.Inspector, qname string, queueCh chan<- *asynq.QueueInfo, errorCh chan<- error) {
q, err := i.GetQueueInfo(qname)
if err != nil {
errorCh <- err
return
}
queueCh <- q
}
func (f *dataFetcher) fetchGroups(qname string) {
var (
i = f.inspector
groupsCh = f.groupsCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchGroups(i, qname, groupsCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchGroups(i *asynq.Inspector, qname string, groupsCh chan<- []*asynq.GroupInfo, errorCh chan<- error) {
groups, err := i.Groups(qname)
if err != nil {
errorCh <- err
return
}
groupsCh <- groups
}
func (f *dataFetcher) fetchAggregatingTasks(qname, group string, pageSize, pageNum int) {
var (
i = f.inspector
tasksCh = f.tasksCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchAggregatingTasks(i, qname, group, pageSize, pageNum, tasksCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchAggregatingTasks(i *asynq.Inspector, qname, group string, pageSize, pageNum int,
tasksCh chan<- []*asynq.TaskInfo, errorCh chan<- error) {
tasks, err := i.ListAggregatingTasks(qname, group, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
errorCh <- err
return
}
tasksCh <- tasks
}
func (f *dataFetcher) fetchTasks(qname string, taskState asynq.TaskState, pageSize, pageNum int) {
var (
i = f.inspector
tasksCh = f.tasksCh
errorCh = f.errorCh
queueCh = f.queueCh
)
go fetchTasks(i, qname, taskState, pageSize, pageNum, tasksCh, errorCh)
go fetchQueueInfo(i, qname, queueCh, errorCh)
}
func fetchTasks(i *asynq.Inspector, qname string, taskState asynq.TaskState, pageSize, pageNum int,
tasksCh chan<- []*asynq.TaskInfo, errorCh chan<- error) {
var (
tasks []*asynq.TaskInfo
err error
)
opts := []asynq.ListOption{asynq.PageSize(pageSize), asynq.Page(pageNum)}
switch taskState {
case asynq.TaskStateActive:
tasks, err = i.ListActiveTasks(qname, opts...)
case asynq.TaskStatePending:
tasks, err = i.ListPendingTasks(qname, opts...)
case asynq.TaskStateScheduled:
tasks, err = i.ListScheduledTasks(qname, opts...)
case asynq.TaskStateRetry:
tasks, err = i.ListRetryTasks(qname, opts...)
case asynq.TaskStateArchived:
tasks, err = i.ListArchivedTasks(qname, opts...)
case asynq.TaskStateCompleted:
tasks, err = i.ListCompletedTasks(qname, opts...)
}
if err != nil {
errorCh <- err
return
}
tasksCh <- tasks
}
func (f *dataFetcher) fetchTaskInfo(qname, taskID string) {
var (
i = f.inspector
taskCh = f.taskCh
errorCh = f.errorCh
)
go fetchTaskInfo(i, qname, taskID, taskCh, errorCh)
}
func fetchTaskInfo(i *asynq.Inspector, qname, taskID string, taskCh chan<- *asynq.TaskInfo, errorCh chan<- error) {
info, err := i.GetTaskInfo(qname, taskID)
if err != nil {
errorCh <- err
return
}
taskCh <- info
}


@@ -0,0 +1,317 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"os"
"time"
"github.com/gdamore/tcell/v2"
"github.com/hibiken/asynq"
)
// keyEventHandler handles keyboard events and updates the state.
// It delegates data fetching to fetcher and UI rendering to drawer.
type keyEventHandler struct {
s tcell.Screen
state *State
done chan struct{}
fetcher fetcher
drawer drawer
ticker *time.Ticker
pollInterval time.Duration
}
func (h *keyEventHandler) quit() {
h.s.Fini()
close(h.done)
os.Exit(0)
}
func (h *keyEventHandler) HandleKeyEvent(ev *tcell.EventKey) {
if ev.Key() == tcell.KeyEscape || ev.Rune() == 'q' {
h.goBack() // Esc and 'q' key have "go back" semantics
} else if ev.Key() == tcell.KeyCtrlC {
h.quit()
} else if ev.Key() == tcell.KeyCtrlL {
h.s.Sync()
} else if ev.Key() == tcell.KeyDown || ev.Rune() == 'j' {
h.handleDownKey()
} else if ev.Key() == tcell.KeyUp || ev.Rune() == 'k' {
h.handleUpKey()
} else if ev.Key() == tcell.KeyRight || ev.Rune() == 'l' {
h.handleRightKey()
} else if ev.Key() == tcell.KeyLeft || ev.Rune() == 'h' {
h.handleLeftKey()
} else if ev.Key() == tcell.KeyEnter {
h.handleEnterKey()
} else if ev.Rune() == '?' {
h.showHelp()
} else if ev.Rune() == 'n' {
h.nextPage()
} else if ev.Rune() == 'p' {
h.prevPage()
}
}
func (h *keyEventHandler) goBack() {
var (
state = h.state
d = h.drawer
f = h.fetcher
)
if state.view == viewTypeHelp {
state.view = state.prevView // exit help
f.Fetch(state)
h.resetTicker()
d.Draw(state)
} else if state.view == viewTypeQueueDetails {
// if task modal is open close it; otherwise go back to the previous view
if state.taskID != "" {
state.taskID = ""
state.selectedTask = nil
d.Draw(state)
} else {
state.view = viewTypeQueues
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
} else {
h.quit()
}
}
func (h *keyEventHandler) handleDownKey() {
switch h.state.view {
case viewTypeQueues:
h.downKeyQueues()
case viewTypeQueueDetails:
h.downKeyQueueDetails()
}
}
func (h *keyEventHandler) downKeyQueues() {
if h.state.queueTableRowIdx < len(h.state.queues) {
h.state.queueTableRowIdx++
} else {
h.state.queueTableRowIdx = 0 // loop back
}
h.drawer.Draw(h.state)
}
func (h *keyEventHandler) downKeyQueueDetails() {
s, state := h.s, h.state
if shouldShowGroupTable(state) {
if state.groupTableRowIdx < groupPageSize(s) {
state.groupTableRowIdx++
} else {
state.groupTableRowIdx = 0 // loop back
}
} else if state.taskID == "" {
if state.taskTableRowIdx < len(state.tasks) {
state.taskTableRowIdx++
} else {
state.taskTableRowIdx = 0 // loop back
}
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) handleUpKey() {
switch h.state.view {
case viewTypeQueues:
h.upKeyQueues()
case viewTypeQueueDetails:
h.upKeyQueueDetails()
}
}
func (h *keyEventHandler) upKeyQueues() {
state := h.state
if state.queueTableRowIdx == 0 {
state.queueTableRowIdx = len(state.queues)
} else {
state.queueTableRowIdx--
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) upKeyQueueDetails() {
s, state := h.s, h.state
if shouldShowGroupTable(state) {
if state.groupTableRowIdx == 0 {
state.groupTableRowIdx = groupPageSize(s)
} else {
state.groupTableRowIdx--
}
} else if state.taskID == "" {
if state.taskTableRowIdx == 0 {
state.taskTableRowIdx = len(state.tasks)
} else {
state.taskTableRowIdx--
}
}
h.drawer.Draw(state)
}
func (h *keyEventHandler) handleEnterKey() {
switch h.state.view {
case viewTypeQueues:
h.enterKeyQueues()
case viewTypeQueueDetails:
h.enterKeyQueueDetails()
}
}
func (h *keyEventHandler) resetTicker() {
h.ticker.Reset(h.pollInterval)
}
func (h *keyEventHandler) enterKeyQueues() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.queueTableRowIdx != 0 {
state.selectedQueue = state.queues[state.queueTableRowIdx-1]
state.view = viewTypeQueueDetails
state.taskState = asynq.TaskStateActive
state.tasks = nil
state.pageNum = 1
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) enterKeyQueueDetails() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if shouldShowGroupTable(state) && state.groupTableRowIdx != 0 {
state.selectedGroup = state.groups[state.groupTableRowIdx-1]
state.tasks = nil
state.pageNum = 1
f.Fetch(state)
h.resetTicker()
d.Draw(state)
} else if !shouldShowGroupTable(state) && state.taskTableRowIdx != 0 {
task := state.tasks[state.taskTableRowIdx-1]
state.selectedTask = task
state.taskID = task.ID
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) handleLeftKey() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails && state.taskID == "" {
state.taskState = prevTaskState(state.taskState)
state.pageNum = 1
state.taskTableRowIdx = 0
state.tasks = nil
state.selectedGroup = nil
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) handleRightKey() {
var (
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails && state.taskID == "" {
state.taskState = nextTaskState(state.taskState)
state.pageNum = 1
state.taskTableRowIdx = 0
state.tasks = nil
state.selectedGroup = nil
f.Fetch(state)
h.resetTicker()
d.Draw(state)
}
}
func (h *keyEventHandler) nextPage() {
var (
s = h.s
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails {
if shouldShowGroupTable(state) {
pageSize := groupPageSize(s)
total := len(state.groups)
start := (state.pageNum - 1) * pageSize
end := start + pageSize
if end <= total {
state.pageNum++
d.Draw(state)
}
} else {
pageSize := taskPageSize(s)
totalCount := getTaskCount(state.selectedQueue, state.taskState)
if (state.pageNum-1)*pageSize+len(state.tasks) < totalCount {
state.pageNum++
f.Fetch(state)
h.resetTicker()
}
}
}
}
func (h *keyEventHandler) prevPage() {
var (
s = h.s
state = h.state
f = h.fetcher
d = h.drawer
)
if state.view == viewTypeQueueDetails {
if shouldShowGroupTable(state) {
pageSize := groupPageSize(s)
start := (state.pageNum - 1) * pageSize
if start > 0 {
state.pageNum--
d.Draw(state)
}
} else {
if state.pageNum > 1 {
state.pageNum--
f.Fetch(state)
h.resetTicker()
}
}
}
}
func (h *keyEventHandler) showHelp() {
var (
state = h.state
d = h.drawer
)
if state.view != viewTypeHelp {
state.prevView = state.view
state.view = viewTypeHelp
d.Draw(state)
}
}


@@ -0,0 +1,234 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"testing"
"time"
"github.com/gdamore/tcell/v2"
"github.com/google/go-cmp/cmp"
"github.com/hibiken/asynq"
)
func makeKeyEventHandler(t *testing.T, state *State) *keyEventHandler {
ticker := time.NewTicker(time.Second)
t.Cleanup(func() { ticker.Stop() })
return &keyEventHandler{
s: tcell.NewSimulationScreen("UTF-8"),
state: state,
done: make(chan struct{}),
fetcher: &fakeFetcher{},
drawer: &fakeDrawer{},
ticker: ticker,
pollInterval: time.Second,
}
}
type keyEventHandlerTest struct {
desc string // test description
state *State // initial state, to be mutated by the handler
events []*tcell.EventKey // keyboard events
wantState State // expected state after the events
}
func TestKeyEventHandler(t *testing.T) {
tests := []*keyEventHandlerTest{
{
desc: "navigates to help view",
state: &State{view: viewTypeQueues},
events: []*tcell.EventKey{tcell.NewEventKey(tcell.KeyRune, '?', tcell.ModNone)},
wantState: State{view: viewTypeHelp},
},
{
desc: "navigates to queue details view",
state: &State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
},
queueTableRowIdx: 0,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyRune, 'j', tcell.ModNone), // down
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
},
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 100, Active: 10, Pending: 40, Scheduled: 40, Completed: 10},
queueTableRowIdx: 1,
taskState: asynq.TaskStateActive,
pageNum: 1,
},
},
{
desc: "does nothing if no queues are present",
state: &State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{}, // empty
queueTableRowIdx: 0,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyRune, 'j', tcell.ModNone), // down
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueues,
queues: []*asynq.QueueInfo{},
queueTableRowIdx: 0,
},
},
{
desc: "opens task info modal",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyEnter, '\n', tcell.ModNone), // Enter
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
// new states
taskID: "yyyy",
selectedTask: &asynq.TaskInfo{ID: "yyyy", Type: "bar"},
},
},
{
desc: "Esc closes task info modal",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyEscape, ' ', tcell.ModNone), // Esc
},
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "", // this field should be unset
},
},
{
desc: "Arrow keys are disabled while task info modal is open",
state: &State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
events: []*tcell.EventKey{
tcell.NewEventKey(tcell.KeyLeft, ' ', tcell.ModNone),
},
// no change
wantState: State{
view: viewTypeQueueDetails,
queues: []*asynq.QueueInfo{
{Queue: "default", Size: 500, Active: 10, Pending: 40},
},
queueTableRowIdx: 1,
selectedQueue: &asynq.QueueInfo{Queue: "default", Size: 50, Active: 10, Pending: 40},
taskState: asynq.TaskStatePending,
pageNum: 1,
tasks: []*asynq.TaskInfo{
{ID: "xxxx", Type: "foo"},
{ID: "yyyy", Type: "bar"},
{ID: "zzzz", Type: "baz"},
},
taskTableRowIdx: 2,
taskID: "yyyy", // presence of this field opens the modal
},
},
// TODO: Add more tests
}
for _, tc := range tests {
t.Run(tc.desc, func(t *testing.T) {
h := makeKeyEventHandler(t, tc.state)
for _, e := range tc.events {
h.HandleKeyEvent(e)
}
if diff := cmp.Diff(tc.wantState, *tc.state, cmp.AllowUnexported(State{})); diff != "" {
t.Errorf("after state was %+v, want %+v: (-want,+got)\n%s", *tc.state, tc.wantState, diff)
}
})
}
}
/*** fake implementation for tests ***/
type fakeFetcher struct{}
func (f *fakeFetcher) Fetch(s *State) {}
type fakeDrawer struct{}
func (d *fakeDrawer) Draw(s *State) {}


@@ -0,0 +1,100 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"strings"
"github.com/gdamore/tcell/v2"
"github.com/mattn/go-runewidth"
)
/*** Screen Drawer ***/
// ScreenDrawer is used to draw contents on screen.
//
// Usage example:
// d := NewScreenDrawer(s)
// d.Println("Hello world", mystyle)
// d.NL() // adds newline
// d.Print("foo", mystyle.Bold(true))
// d.Print("bar", mystyle.Italic(true))
type ScreenDrawer struct {
l *LineDrawer
}
func NewScreenDrawer(s tcell.Screen) *ScreenDrawer {
return &ScreenDrawer{l: NewLineDrawer(0, s)}
}
func (d *ScreenDrawer) Print(s string, style tcell.Style) {
d.l.Draw(s, style)
}
func (d *ScreenDrawer) Println(s string, style tcell.Style) {
d.Print(s, style)
d.NL()
}
// FillLine prints the given rune until the end of the current line
// and adds a newline.
func (d *ScreenDrawer) FillLine(r rune, style tcell.Style) {
w, _ := d.Screen().Size()
if w-d.l.col < 0 {
d.NL()
return
}
s := strings.Repeat(string(r), w-d.l.col)
d.Print(s, style)
d.NL()
}
func (d *ScreenDrawer) FillUntil(r rune, style tcell.Style, limit int) {
if d.l.col > limit {
return // already passed the limit
}
s := strings.Repeat(string(r), limit-d.l.col)
d.Print(s, style)
}
// NL adds a newline (i.e., moves to the next line).
func (d *ScreenDrawer) NL() {
d.l.row++
d.l.col = 0
}
func (d *ScreenDrawer) Screen() tcell.Screen {
return d.l.s
}
// Goto moves the ScreenDrawer to the specified cell.
func (d *ScreenDrawer) Goto(x, y int) {
d.l.row = y
d.l.col = x
}
// Go to the bottom of the screen.
func (d *ScreenDrawer) GoToBottom() {
_, h := d.Screen().Size()
d.l.row = h - 1
d.l.col = 0
}
type LineDrawer struct {
s tcell.Screen
row int
col int
}
func NewLineDrawer(row int, s tcell.Screen) *LineDrawer {
return &LineDrawer{row: row, col: 0, s: s}
}
func (d *LineDrawer) Draw(s string, style tcell.Style) {
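// Advance the column by each rune's display width so col tracks terminal
// cells rather than bytes or runes; wide (e.g. CJK) runes occupy two cells.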
for _, r := range s {
d.s.SetContent(d.col, d.row, r, nil, style)
d.col += runewidth.RuneWidth(r)
}
}


@@ -0,0 +1,70 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package dash
import (
"github.com/gdamore/tcell/v2"
"github.com/mattn/go-runewidth"
)
type columnAlignment int
const (
alignRight columnAlignment = iota
alignLeft
)
type columnConfig[V any] struct {
name string
alignment columnAlignment
displayFn func(v V) string
}
type column[V any] struct {
*columnConfig[V]
width int
}
// Helper to draw a table.
func drawTable[V any](d *ScreenDrawer, style tcell.Style, configs []*columnConfig[V], data []V, highlightRowIdx int) {
const colBuffer = " " // extra buffer between columns
cols := make([]*column[V], len(configs))
for i, cfg := range configs {
cols[i] = &column[V]{cfg, runewidth.StringWidth(cfg.name)}
}
// adjust the column width to accommodate the widest value.
for _, v := range data {
for _, col := range cols {
if w := runewidth.StringWidth(col.displayFn(v)); col.width < w {
col.width = w
}
}
}
// print header
headerStyle := style.Background(tcell.ColorDimGray).Foreground(tcell.ColorWhite)
for _, col := range cols {
if col.alignment == alignLeft {
d.Print(rpad(col.name, col.width)+colBuffer, headerStyle)
} else {
d.Print(lpad(col.name, col.width)+colBuffer, headerStyle)
}
}
d.FillLine(' ', headerStyle)
// print body
for i, v := range data {
rowStyle := style
if highlightRowIdx == i {
rowStyle = style.Background(tcell.ColorDarkOliveGreen)
}
for _, col := range cols {
if col.alignment == alignLeft {
d.Print(rpad(col.displayFn(v), col.width)+colBuffer, rowStyle)
} else {
d.Print(lpad(col.displayFn(v), col.width)+colBuffer, rowStyle)
}
}
d.FillLine(' ', rowStyle)
}
}
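// Illustration: drawTable is instantiated once per row type, e.g.
//   drawTable(d, baseStyle, queueColumnConfigs, state.queues, state.queueTableRowIdx-1)
// as in drawQueueTable above; a highlightRowIdx of -1 highlights no row.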

tools/asynq/cmd/group.go

@@ -0,0 +1,52 @@
// Copyright 2022 Kentaro Hibino. All rights reserved.
// Use of this source code is governed by a MIT license
// that can be found in the LICENSE file.
package cmd
import (
"fmt"
"os"
"github.com/MakeNowJust/heredoc/v2"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(groupCmd)
groupCmd.AddCommand(groupListCmd)
groupListCmd.Flags().StringP("queue", "q", "", "queue to inspect")
groupListCmd.MarkFlagRequired("queue")
}
var groupCmd = &cobra.Command{
Use: "group <command> [flags]",
Short: "Manage groups",
Example: heredoc.Doc(`
$ asynq group list --queue=myqueue`),
}
var groupListCmd = &cobra.Command{
Use: "list",
Aliases: []string{"ls"},
Short: "List groups",
Args: cobra.NoArgs,
Run: groupLists,
}
func groupLists(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
inspector := createInspector()
groups, err := inspector.Groups(qname)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(groups) == 0 {
fmt.Printf("No groups found in queue %q\n", qname)
return
}
for _, g := range groups {
fmt.Println(g.Group)
}
}


@@ -9,6 +9,7 @@ import (
"io"
"os"
"github.com/MakeNowJust/heredoc/v2"
"github.com/fatih/color"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/errors"
@@ -31,51 +32,75 @@ func init() {
}
var queueCmd = &cobra.Command{
-Use: "queue",
Use: "queue <command> [flags]",
Short: "Manage queues",
Example: heredoc.Doc(`
$ asynq queue ls
$ asynq queue inspect myqueue
$ asynq queue pause myqueue`),
}
var queueListCmd = &cobra.Command{
-Use: "ls",
Use: "list",
Short: "List queues",
Aliases: []string{"ls"},
// TODO: Use RunE instead?
Run: queueList,
}
var queueInspectCmd = &cobra.Command{
-Use: "inspect QUEUE [QUEUE...]",
Use: "inspect <queue> [<queue>...]",
Short: "Display detailed information on one or more queues",
Args: cobra.MinimumNArgs(1),
// TODO: Use RunE instead?
Run: queueInspect,
Example: heredoc.Doc(`
$ asynq queue inspect myqueue
$ asynq queue inspect queue1 queue2 queue3`),
}
var queueHistoryCmd = &cobra.Command{
-Use: "history QUEUE [QUEUE...]",
Use: "history <queue> [<queue>...]",
Short: "Display historical aggregate data from one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queueHistory,
Example: heredoc.Doc(`
$ asynq queue history myqueue
$ asynq queue history queue1 queue2 queue3
$ asynq queue history myqueue --days=90`),
}
var queuePauseCmd = &cobra.Command{
-Use: "pause QUEUE [QUEUE...]",
Use: "pause <queue> [<queue>...]",
Short: "Pause one or more queues",
Args: cobra.MinimumNArgs(1),
Run: queuePause,
Example: heredoc.Doc(`
$ asynq queue pause myqueue
$ asynq queue pause queue1 queue2 queue3`),
}
var queueUnpauseCmd = &cobra.Command{
-Use: "unpause QUEUE [QUEUE...]",
Use: "resume <queue> [<queue>...]",
-Short: "Unpause one or more queues",
Short: "Resume (unpause) one or more queues",
Args: cobra.MinimumNArgs(1),
Aliases: []string{"unpause"},
Run: queueUnpause,
Example: heredoc.Doc(`
$ asynq queue resume myqueue
$ asynq queue resume queue1 queue2 queue3`),
}
var queueRemoveCmd = &cobra.Command{
-Use: "rm QUEUE [QUEUE...]",
Use: "remove <queue> [<queue>...]",
Short: "Remove one or more queues",
Aliases: []string{"rm", "delete"},
Args: cobra.MinimumNArgs(1),
Run: queueRemove,
Example: heredoc.Doc(`
$ asynq queue rm myqueue
$ asynq queue rm queue1 queue2 queue3
$ asynq queue rm myqueue --force`),
}
func queueList(cmd *cobra.Command, args []string) {
@@ -145,12 +170,13 @@ func printQueueInfo(info *asynq.QueueInfo) {
bold.Println("Queue Info")
fmt.Printf("Name: %s\n", info.Queue)
fmt.Printf("Size: %d\n", info.Size)
fmt.Printf("Groups: %d\n", info.Groups)
fmt.Printf("Paused: %t\n\n", info.Paused)
bold.Println("Task Count by State")
printTable(
-[]string{"active", "pending", "scheduled", "retry", "archived", "completed"},
[]string{"active", "pending", "aggregating", "scheduled", "retry", "archived", "completed"},
func(w io.Writer, tmpl string) {
-fmt.Fprintf(w, tmpl, info.Active, info.Pending, info.Scheduled, info.Retry, info.Archived, info.Completed)
fmt.Fprintf(w, tmpl, info.Active, info.Pending, info.Aggregating, info.Scheduled, info.Retry, info.Archived, info.Completed)
},
)
fmt.Println()


@@ -11,14 +11,19 @@ import (
"os"
"strings"
"text/tabwriter"
"time"
"unicode"
"unicode/utf8"
-"github.com/go-redis/redis/v8"
"github.com/MakeNowJust/heredoc/v2"
"github.com/fatih/color"
"github.com/hibiken/asynq"
"github.com/hibiken/asynq/internal/base"
"github.com/hibiken/asynq/internal/rdb"
"github.com/redis/go-redis/v9"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"golang.org/x/exp/utf8string"
homedir "github.com/mitchellh/go-homedir"
"github.com/spf13/viper"
@@ -39,10 +44,22 @@ var (
// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
-Use: "asynq",
-Short: "A monitoring tool for asynq queues",
-Long: `Asynq is a montoring CLI to inspect tasks and queues managed by asynq.`,
Use: "asynq <command> <subcommand> [flags]",
Short: "Asynq CLI",
Long: `Command line tool to inspect tasks and queues managed by Asynq`,
Version: base.Version,
SilenceUsage: true,
SilenceErrors: true,
Example: heredoc.Doc(`
$ asynq stats
$ asynq queue pause myqueue
$ asynq task list --queue=myqueue --state=archived`),
Annotations: map[string]string{
"help:feedback": heredoc.Doc(`
Open an issue at https://github.com/hibiken/asynq/issues/new/choose`),
},
}
var versionOutput = fmt.Sprintf("asynq version %s\n", base.Version)
@@ -64,22 +81,239 @@ func Execute() {
}
}
func isRootCmd(cmd *cobra.Command) bool {
return cmd != nil && !cmd.HasParent()
}
// displayLine represents a line displayed in the output as '<name> <desc>',
// where pad is used to pad the name from desc.
type displayLine struct {
name string
desc string
pad int // number of rpad
}
func (l *displayLine) String() string {
return rpad(l.name, l.pad) + l.desc
}
type displayLines []*displayLine
func (dls displayLines) String() string {
var lines []string
for _, dl := range dls {
lines = append(lines, dl.String())
}
return strings.Join(lines, "\n")
}
// Capitalize the first word in the given string.
func capitalize(s string) string {
str := utf8string.NewString(s)
if str.RuneCount() == 0 {
return ""
}
var b strings.Builder
b.WriteString(strings.ToUpper(string(str.At(0))))
b.WriteString(str.Slice(1, str.RuneCount()))
return b.String()
}
func rootHelpFunc(cmd *cobra.Command, args []string) {
// Display helpful error message when user mistypes a subcommand (e.g. 'asynq queue lst').
if isRootCmd(cmd.Parent()) && len(args) >= 2 && args[1] != "--help" && args[1] != "-h" {
printSubcommandSuggestions(cmd, args[1])
return
}
var lines []*displayLine
var commands []*displayLine
for _, c := range cmd.Commands() {
if c.Hidden || c.Short == "" || c.Name() == "help" {
continue
}
l := &displayLine{name: c.Name() + ":", desc: capitalize(c.Short)}
commands = append(commands, l)
lines = append(lines, l)
}
var localFlags []*displayLine
cmd.LocalFlags().VisitAll(func(f *pflag.Flag) {
l := &displayLine{name: "--" + f.Name, desc: capitalize(f.Usage)}
localFlags = append(localFlags, l)
lines = append(lines, l)
})
var inheritedFlags []*displayLine
cmd.InheritedFlags().VisitAll(func(f *pflag.Flag) {
l := &displayLine{name: "--" + f.Name, desc: capitalize(f.Usage)}
inheritedFlags = append(inheritedFlags, l)
lines = append(lines, l)
})
adjustPadding(lines...)
type helpEntry struct {
Title string
Body string
}
var helpEntries []*helpEntry
desc := cmd.Long
if desc == "" {
desc = cmd.Short
}
if desc != "" {
helpEntries = append(helpEntries, &helpEntry{"", desc})
}
helpEntries = append(helpEntries, &helpEntry{"USAGE", cmd.UseLine()})
if len(commands) > 0 {
helpEntries = append(helpEntries, &helpEntry{"COMMANDS", displayLines(commands).String()})
}
if cmd.LocalFlags().HasFlags() {
helpEntries = append(helpEntries, &helpEntry{"FLAGS", displayLines(localFlags).String()})
}
if cmd.InheritedFlags().HasFlags() {
helpEntries = append(helpEntries, &helpEntry{"INHERITED FLAGS", displayLines(inheritedFlags).String()})
}
if cmd.Example != "" {
helpEntries = append(helpEntries, &helpEntry{"EXAMPLES", cmd.Example})
}
helpEntries = append(helpEntries, &helpEntry{"LEARN MORE", heredoc.Doc(`
Use 'asynq <command> <subcommand> --help' for more information about a command.`)})
if s, ok := cmd.Annotations["help:feedback"]; ok {
helpEntries = append(helpEntries, &helpEntry{"FEEDBACK", s})
}
out := cmd.OutOrStdout()
bold := color.New(color.Bold)
for _, e := range helpEntries {
if e.Title != "" {
// If there is a title, add indentation to each line in the body
bold.Fprintln(out, e.Title)
fmt.Fprintln(out, indent(e.Body, 2 /* spaces */))
} else {
// If there is no title, print the body as is
fmt.Fprintln(out, e.Body)
}
fmt.Fprintln(out)
}
}
func rootUsageFunc(cmd *cobra.Command) error {
out := cmd.OutOrStdout()
fmt.Fprintf(out, "Usage: %s", cmd.UseLine())
if subcmds := cmd.Commands(); len(subcmds) > 0 {
fmt.Fprint(out, "\n\nAvailable commands:\n")
for _, c := range subcmds {
if c.Hidden {
continue
}
fmt.Fprintf(out, " %s\n", c.Name())
}
}
var localFlags []*displayLine
cmd.LocalFlags().VisitAll(func(f *pflag.Flag) {
localFlags = append(localFlags, &displayLine{name: "--" + f.Name, desc: capitalize(f.Usage)})
})
adjustPadding(localFlags...)
if len(localFlags) > 0 {
fmt.Fprint(out, "\n\nFlags:\n")
for _, l := range localFlags {
fmt.Fprintf(out, " %s\n", l.String())
}
}
return nil
}
func printSubcommandSuggestions(cmd *cobra.Command, arg string) {
out := cmd.OutOrStdout()
fmt.Fprintf(out, "unknown command %q for %q\n", arg, cmd.CommandPath())
if cmd.SuggestionsMinimumDistance <= 0 {
cmd.SuggestionsMinimumDistance = 2
}
candidates := cmd.SuggestionsFor(arg)
if len(candidates) > 0 {
fmt.Fprint(out, "\nDid you mean this?\n")
for _, c := range candidates {
fmt.Fprintf(out, "\t%s\n", c)
}
}
fmt.Fprintln(out)
rootUsageFunc(cmd)
}
func adjustPadding(lines ...*displayLine) {
// find the maximum width of the name
max := 0
for _, l := range lines {
if n := utf8.RuneCountInString(l.name); n > max {
max = n
}
}
for _, l := range lines {
l.pad = max
}
}
// rpad adds padding to the right of a string.
func rpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%-%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
// lpad adds padding to the left of a string.
func lpad(s string, padding int) string {
tmpl := fmt.Sprintf("%%%ds ", padding)
return fmt.Sprintf(tmpl, s)
}
// indent indents the given text by given spaces.
func indent(text string, space int) string {
if len(text) == 0 {
return ""
}
var b strings.Builder
indentation := strings.Repeat(" ", space)
lastRune := '\n'
for _, r := range text {
if lastRune == '\n' {
b.WriteString(indentation)
}
b.WriteRune(r)
lastRune = r
}
return b.String()
}
// dedent removes any indentation from the given text.
func dedent(text string) string {
lines := strings.Split(text, "\n")
var b strings.Builder
for _, l := range lines {
b.WriteString(strings.TrimLeftFunc(l, unicode.IsSpace))
b.WriteRune('\n')
}
return b.String()
}
func init() {
cobra.OnInitialize(initConfig)
rootCmd.SetHelpFunc(rootHelpFunc)
rootCmd.SetUsageFunc(rootUsageFunc)
rootCmd.AddCommand(versionCmd)
rootCmd.SetVersionTemplate(versionOutput)
-rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file to set flag defaut values (default is $HOME/.asynq.yaml)")
rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "Config file to set flag defaut values (default is $HOME/.asynq.yaml)")
-rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "redis server URI")
rootCmd.PersistentFlags().StringVarP(&uri, "uri", "u", "127.0.0.1:6379", "Redis server URI")
-rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "redis database number (default is 0)")
rootCmd.PersistentFlags().IntVarP(&db, "db", "n", 0, "Redis database number (default is 0)")
-rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "password to use when connecting to redis server")
rootCmd.PersistentFlags().StringVarP(&password, "password", "p", "", "Password to use when connecting to redis server")
-rootCmd.PersistentFlags().BoolVar(&useRedisCluster, "cluster", false, "connect to redis cluster")
rootCmd.PersistentFlags().BoolVar(&useRedisCluster, "cluster", false, "Connect to redis cluster")
rootCmd.PersistentFlags().StringVar(&clusterAddrs, "cluster_addrs",
"127.0.0.1:7000,127.0.0.1:7001,127.0.0.1:7002,127.0.0.1:7003,127.0.0.1:7004,127.0.0.1:7005",
-"list of comma-separated redis server addresses")
"List of comma-separated redis server addresses")
rootCmd.PersistentFlags().StringVar(&tlsServerName, "tls_server",
-"", "server name for TLS validation")
"", "Server name for TLS validation")
// Bind flags with config.
viper.BindPFlag("uri", rootCmd.PersistentFlags().Lookup("uri"))
viper.BindPFlag("db", rootCmd.PersistentFlags().Lookup("db"))
@@ -118,7 +352,7 @@ func initConfig() {
// createRDB creates a RDB instance using flag values and returns it.
func createRDB() *rdb.RDB {
var c redis.UniversalClient
-if useRedisCluster {
if viper.GetBool("cluster") {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
c = redis.NewClusterClient(&redis.ClusterOptions{
Addrs: addrs,
@@ -136,13 +370,18 @@ func createRDB() *rdb.RDB {
return rdb.NewRDB(c)
}
-// createRDB creates a Inspector instance using flag values and returns it.
// createClient creates a Client instance using flag values and returns it.
func createClient() *asynq.Client {
return asynq.NewClient(getRedisConnOpt())
}
// createInspector creates a Inspector instance using flag values and returns it.
func createInspector() *asynq.Inspector {
return asynq.NewInspector(getRedisConnOpt())
}
func getRedisConnOpt() asynq.RedisConnOpt {
-if useRedisCluster {
if viper.GetBool("cluster") {
addrs := strings.Split(viper.GetString("cluster_addrs"), ",")
return asynq.RedisClusterClientOpt{
Addrs: addrs,
@@ -223,3 +462,37 @@ func isPrintable(data []byte) bool {
}
return !isAllSpace
}
// Helper to turn a command line flag into a duration
func getDuration(cmd *cobra.Command, arg string) time.Duration {
durationStr, err := cmd.Flags().GetString(arg)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
duration, err := time.ParseDuration(durationStr)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
return duration
}
// Helper to turn a command line flag into a time
func getTime(cmd *cobra.Command, arg string) time.Time {
timeStr, err := cmd.Flags().GetString(arg)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
timeVal, err := time.Parse(time.RFC3339, timeStr)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
return timeVal
}


@@ -12,6 +12,7 @@ import (
"strings"
"time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/spf13/cobra"
)
@@ -21,13 +22,16 @@ func init() {
}
var serverCmd = &cobra.Command{
-Use: "server",
Use: "server <command> [flags]",
Short: "Manage servers",
Example: heredoc.Doc(`
$ asynq server list`),
}
var serverListCmd = &cobra.Command{
-Use: "ls",
-Short: "List servers",
Use: "list",
Aliases: []string{"ls"},
Short: "List servers",
Long: `Server list (asynq server ls) shows all running worker servers
pulling tasks from the given redis instance.


@@ -5,6 +5,7 @@
package cmd package cmd
import ( import (
"encoding/json"
"fmt" "fmt"
"io" "io"
"math" "math"
@@ -15,6 +16,7 @@ import (
"time" "time"
"unicode/utf8" "unicode/utf8"
"github.com/MakeNowJust/heredoc/v2"
"github.com/fatih/color" "github.com/fatih/color"
"github.com/hibiken/asynq/internal/rdb" "github.com/hibiken/asynq/internal/rdb"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@@ -23,25 +25,24 @@ import (
// statsCmd represents the stats command // statsCmd represents the stats command
var statsCmd = &cobra.Command{ var statsCmd = &cobra.Command{
Use: "stats", Use: "stats",
Short: "Shows current state of the tasks and queues", Short: "View current state",
Long: `Stats (asynq stats) will show the overview of tasks and queues at that instant. Long: heredoc.Doc(`
Stats shows the overview of tasks and queues at that instant.
Specifically, the command shows the following: The command shows the following:
* Number of tasks in each state * Number of tasks in each state
* Number of tasks in each queue * Number of tasks in each queue
* Aggregate data for the current day * Aggregate data for the current day
* Basic information about the running redis instance * Basic information about the running redis instance`),
To monitor the tasks continuously, it's recommended that you run this
command in conjunction with the watch command.
Example: watch -n 3 asynq stats -> Shows current state of tasks every three seconds`,
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: stats, Run: stats,
} }
var jsonFlag bool
func init() { func init() {
rootCmd.AddCommand(statsCmd) rootCmd.AddCommand(statsCmd)
statsCmd.Flags().BoolVar(&jsonFlag, "json", false, "Output stats in JSON format.")
// Here you will define your flags and configuration settings. // Here you will define your flags and configuration settings.
@@ -55,15 +56,22 @@ func init() {
} }
type AggregateStats struct { type AggregateStats struct {
Active int Active int `json:"active"`
Pending int Pending int `json:"pending"`
Scheduled int Aggregating int `json:"aggregating"`
Retry int Scheduled int `json:"scheduled"`
Archived int Retry int `json:"retry"`
Completed int Archived int `json:"archived"`
Processed int Completed int `json:"completed"`
Failed int Processed int `json:"processed"`
Timestamp time.Time Failed int `json:"failed"`
Timestamp time.Time `json:"timestamp"`
}
type FullStats struct {
Aggregate AggregateStats `json:"aggregate"`
QueueStats []*rdb.Stats `json:"queues"`
RedisInfo map[string]string `json:"redis"`
} }
func stats(cmd *cobra.Command, args []string) { func stats(cmd *cobra.Command, args []string) {
@@ -85,6 +93,7 @@ func stats(cmd *cobra.Command, args []string) {
} }
aggStats.Active += s.Active aggStats.Active += s.Active
aggStats.Pending += s.Pending aggStats.Pending += s.Pending
aggStats.Aggregating += s.Aggregating
aggStats.Scheduled += s.Scheduled aggStats.Scheduled += s.Scheduled
aggStats.Retry += s.Retry aggStats.Retry += s.Retry
aggStats.Archived += s.Archived aggStats.Archived += s.Archived
@@ -104,6 +113,23 @@ func stats(cmd *cobra.Command, args []string) {
fmt.Println(err) fmt.Println(err)
os.Exit(1) os.Exit(1)
} }
if jsonFlag {
statsJSON, err := json.Marshal(FullStats{
Aggregate: aggStats,
QueueStats: stats,
RedisInfo: info,
})
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Println(string(statsJSON))
return
}
bold := color.New(color.Bold) bold := color.New(color.Bold)
bold.Println("Task Count by State") bold.Println("Task Count by State")
printStatsByState(&aggStats) printStatsByState(&aggStats)
@@ -128,13 +154,13 @@ func stats(cmd *cobra.Command, args []string) {
} }
func printStatsByState(s *AggregateStats) { func printStatsByState(s *AggregateStats) {
format := strings.Repeat("%v\t", 6) + "\n" format := strings.Repeat("%v\t", 7) + "\n"
tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0) tw := new(tabwriter.Writer).Init(os.Stdout, 0, 8, 2, ' ', 0)
fmt.Fprintf(tw, format, "active", "pending", "scheduled", "retry", "archived", "completed") fmt.Fprintf(tw, format, "active", "pending", "aggregating", "scheduled", "retry", "archived", "completed")
width := maxInt(9 /* defaultWidth */, maxWidthOf(s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived, s.Completed)) // length of widest column width := maxInt(9 /* defaultWidth */, maxWidthOf(s.Active, s.Pending, s.Aggregating, s.Scheduled, s.Retry, s.Archived, s.Completed)) // length of widest column
sep := strings.Repeat("-", width) sep := strings.Repeat("-", width)
fmt.Fprintf(tw, format, sep, sep, sep, sep, sep, sep) fmt.Fprintf(tw, format, sep, sep, sep, sep, sep, sep, sep)
fmt.Fprintf(tw, format, s.Active, s.Pending, s.Scheduled, s.Retry, s.Archived, s.Completed) fmt.Fprintf(tw, format, s.Active, s.Pending, s.Aggregating, s.Scheduled, s.Retry, s.Archived, s.Completed)
tw.Flush() tw.Flush()
} }
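With the new --json flag, stats emits the FullStats document on stdout, which makes the command scriptable. A minimal consumer sketch, assuming the asynq binary is installed and on PATH; only a few of the aggregate fields are decoded:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run `asynq stats --json` and decode a subset of the output.
	// Assumes the asynq CLI is on PATH; field names mirror the
	// struct tags on AggregateStats/FullStats above.
	out, err := exec.Command("asynq", "stats", "--json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var stats struct {
		Aggregate struct {
			Active  int `json:"active"`
			Pending int `json:"pending"`
			Failed  int `json:"failed"`
		} `json:"aggregate"`
	}
	if err := json.Unmarshal(out, &stats); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("active=%d pending=%d failed=%d\n",
		stats.Aggregate.Active, stats.Aggregate.Pending, stats.Aggregate.Failed)
}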


@@ -10,6 +10,7 @@ import (
"os" "os"
"time" "time"
"github.com/MakeNowJust/heredoc/v2"
"github.com/fatih/color" "github.com/fatih/color"
"github.com/hibiken/asynq" "github.com/hibiken/asynq"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@@ -18,143 +19,194 @@ import (
func init() { func init() {
rootCmd.AddCommand(taskCmd) rootCmd.AddCommand(taskCmd)
taskCmd.AddCommand(taskListCmd) taskCmd.AddCommand(taskListCmd)
taskListCmd.Flags().StringP("queue", "q", "", "queue to inspect") taskListCmd.Flags().StringP("queue", "q", "", "queue to inspect (required)")
taskListCmd.Flags().StringP("state", "s", "", "state of the tasks to inspect") taskListCmd.Flags().StringP("state", "s", "", "state of the tasks; one of { active | pending | aggregating | scheduled | retry | archived | completed } (required)")
taskListCmd.Flags().Int("page", 1, "page number") taskListCmd.Flags().Int("page", 1, "page number")
taskListCmd.Flags().Int("size", 30, "page size") taskListCmd.Flags().Int("size", 30, "page size")
taskListCmd.Flags().StringP("group", "g", "", "group to inspect (required for listing aggregating tasks)")
taskListCmd.MarkFlagRequired("queue") taskListCmd.MarkFlagRequired("queue")
taskListCmd.MarkFlagRequired("state") taskListCmd.MarkFlagRequired("state")
taskCmd.AddCommand(taskCancelCmd) taskCmd.AddCommand(taskCancelCmd)
taskCmd.AddCommand(taskInspectCmd) taskCmd.AddCommand(taskInspectCmd)
taskInspectCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs") taskInspectCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs (required)")
taskInspectCmd.Flags().StringP("id", "i", "", "id of the task") taskInspectCmd.Flags().StringP("id", "i", "", "id of the task (required)")
taskInspectCmd.MarkFlagRequired("queue") taskInspectCmd.MarkFlagRequired("queue")
taskInspectCmd.MarkFlagRequired("id") taskInspectCmd.MarkFlagRequired("id")
taskCmd.AddCommand(taskArchiveCmd) taskCmd.AddCommand(taskArchiveCmd)
taskArchiveCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs") taskArchiveCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs (required)")
taskArchiveCmd.Flags().StringP("id", "i", "", "id of the task") taskArchiveCmd.Flags().StringP("id", "i", "", "id of the task (required)")
taskArchiveCmd.MarkFlagRequired("queue") taskArchiveCmd.MarkFlagRequired("queue")
taskArchiveCmd.MarkFlagRequired("id") taskArchiveCmd.MarkFlagRequired("id")
taskCmd.AddCommand(taskDeleteCmd) taskCmd.AddCommand(taskDeleteCmd)
taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs") taskDeleteCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs (required)")
taskDeleteCmd.Flags().StringP("id", "i", "", "id of the task") taskDeleteCmd.Flags().StringP("id", "i", "", "id of the task (required)")
taskDeleteCmd.MarkFlagRequired("queue") taskDeleteCmd.MarkFlagRequired("queue")
taskDeleteCmd.MarkFlagRequired("id") taskDeleteCmd.MarkFlagRequired("id")
taskCmd.AddCommand(taskRunCmd) taskCmd.AddCommand(taskRunCmd)
taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs") taskRunCmd.Flags().StringP("queue", "q", "", "queue to which the task belongs (required)")
taskRunCmd.Flags().StringP("id", "i", "", "id of the task") taskRunCmd.Flags().StringP("id", "i", "", "id of the task (required)")
taskRunCmd.MarkFlagRequired("queue") taskRunCmd.MarkFlagRequired("queue")
taskRunCmd.MarkFlagRequired("id") taskRunCmd.MarkFlagRequired("id")
taskCmd.AddCommand(taskEnqueueCmd)
taskEnqueueCmd.Flags().StringP("type_name", "t", "", "type name to enqueue the task as (required)")
taskEnqueueCmd.Flags().StringP("payload", "l", "", "payload to enqueue (required)")
// The following are the various OptionTypes; if not specified we won't pass them so that composeOptions()
// can apply its own defaults
taskEnqueueCmd.Flags().Int("retry", 0, "maximum retries")
taskEnqueueCmd.Flags().String("queue", "", "queue to enqueue the task to")
taskEnqueueCmd.Flags().String("id", "", "id to enqueue the task as")
taskEnqueueCmd.Flags().String("timeout", "", "timeout for the task (how long it can run); must be parseable as a time.Duration")
taskEnqueueCmd.Flags().String("deadline", "", "deadline for the task; must be in RFC3339 format")
taskEnqueueCmd.Flags().String("unique", "", "unique period for the task (duration within which it is guaranteed to be unique); must be parseable as a time.Duration")
taskEnqueueCmd.Flags().String("process_at", "", "process at time for the task; must be in RFC3339 format")
taskEnqueueCmd.Flags().String("process_in", "", "process in window for the task; must be parseable as a time.Duration")
taskEnqueueCmd.Flags().String("retention", "", "retention window for the task; must be parseable as a time.Duration")
taskEnqueueCmd.Flags().String("group", "", "group for the task")
taskEnqueueCmd.MarkFlagRequired("type_name")
taskEnqueueCmd.MarkFlagRequired("payload")
taskCmd.AddCommand(taskArchiveAllCmd) taskCmd.AddCommand(taskArchiveAllCmd)
taskArchiveAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong") taskArchiveAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong (required)")
taskArchiveAllCmd.Flags().StringP("state", "s", "", "state of the tasks") taskArchiveAllCmd.Flags().StringP("state", "s", "", "state of the tasks; one of { pending | aggregating | scheduled | retry } (required)")
taskArchiveAllCmd.MarkFlagRequired("queue") taskArchiveAllCmd.MarkFlagRequired("queue")
taskArchiveAllCmd.MarkFlagRequired("state") taskArchiveAllCmd.MarkFlagRequired("state")
taskArchiveAllCmd.Flags().StringP("group", "g", "", "group to which the tasks belong (required for archiving aggregating tasks)")
taskCmd.AddCommand(taskDeleteAllCmd) taskCmd.AddCommand(taskDeleteAllCmd)
taskDeleteAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong") taskDeleteAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong (required)")
taskDeleteAllCmd.Flags().StringP("state", "s", "", "state of the tasks") taskDeleteAllCmd.Flags().StringP("state", "s", "", "state of the tasks; one of { pending | aggregating | scheduled | retry | archived | completed } (required)")
taskDeleteAllCmd.MarkFlagRequired("queue") taskDeleteAllCmd.MarkFlagRequired("queue")
taskDeleteAllCmd.MarkFlagRequired("state") taskDeleteAllCmd.MarkFlagRequired("state")
taskDeleteAllCmd.Flags().StringP("group", "g", "", "group to which the tasks belong (required for deleting aggregating tasks)")
taskCmd.AddCommand(taskRunAllCmd) taskCmd.AddCommand(taskRunAllCmd)
taskRunAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong") taskRunAllCmd.Flags().StringP("queue", "q", "", "queue to which the tasks belong (required)")
taskRunAllCmd.Flags().StringP("state", "s", "", "state of the tasks") taskRunAllCmd.Flags().StringP("state", "s", "", "state of the tasks; one of { scheduled | retry | archived } (required)")
taskRunAllCmd.MarkFlagRequired("queue") taskRunAllCmd.MarkFlagRequired("queue")
taskRunAllCmd.MarkFlagRequired("state") taskRunAllCmd.MarkFlagRequired("state")
taskRunAllCmd.Flags().StringP("group", "g", "", "group to which the tasks belong (required for running aggregating tasks)")
} }
var taskCmd = &cobra.Command{ var taskCmd = &cobra.Command{
Use: "task", Use: "task <command> [flags]",
Short: "Manage tasks", Short: "Manage tasks",
Example: heredoc.Doc(`
$ asynq task list --queue=myqueue --state=scheduled
$ asynq task inspect --queue=myqueue --id=7837f142-6337-4217-9276-8f27281b67d1
$ asynq task delete --queue=myqueue --id=7837f142-6337-4217-9276-8f27281b67d1
$ asynq task deleteall --queue=myqueue --state=archived`),
} }
var taskListCmd = &cobra.Command{ var taskListCmd = &cobra.Command{
Use: "ls --queue=QUEUE --state=STATE", Use: "list --queue=<queue> --state=<state> [flags]",
Short: "List tasks", Aliases: []string{"ls"},
Long: `List tasks of the given state from the specified queue. Short: "List tasks",
Long: heredoc.Doc(`
List tasks of the given state from the specified queue.
The value for the state flag should be one of: The --queue and --state flags are required.
- active
- pending
- scheduled
- retry
- archived
- completed
List operation paginates the result set. Note: For aggregating tasks, an additional --group flag is required.
By default, the command fetches the first 30 tasks.
Use --page and --size flags to specify the page number and size.
Example: List operation paginates the result set. By default, the command fetches the first 30 tasks.
To list pending tasks from "default" queue, run Use --page and --size flags to specify the page number and size.`),
asynq task ls --queue=default --state=pending Example: heredoc.Doc(`
$ asynq task list --queue=myqueue --state=pending
To list the tasks from the second page, run $ asynq task list --queue=myqueue --state=aggregating --group=mygroup
asynq task ls --queue=default --state=pending --page=1`, $ asynq task list --queue=myqueue --state=scheduled --page=2`),
Run: taskList, Run: taskList,
} }
var taskInspectCmd = &cobra.Command{ var taskInspectCmd = &cobra.Command{
Use: "inspect --queue=QUEUE --id=TASK_ID", Use: "inspect --queue=<queue> --id=<task_id>",
Short: "Display detailed information on the specified task", Short: "Display detailed information on the specified task",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskInspect, Run: taskInspect,
Example: heredoc.Doc(`
$ asynq task inspect --queue=myqueue --id=f1720682-f5a6-4db1-8953-4f48ae541d0f`),
} }
var taskCancelCmd = &cobra.Command{ var taskCancelCmd = &cobra.Command{
Use: "cancel TASK_ID [TASK_ID...]", Use: "cancel <task_id> [<task_id>...]",
Short: "Cancel one or more active tasks", Short: "Cancel one or more active tasks",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
Run: taskCancel, Run: taskCancel,
Example: heredoc.Doc(`
$ asynq task cancel f1720682-f5a6-4db1-8953-4f48ae541d0f`),
} }
var taskArchiveCmd = &cobra.Command{ var taskArchiveCmd = &cobra.Command{
Use: "archive --queue=QUEUE --id=TASK_ID", Use: "archive --queue=<queue> --id=<task_id>",
Short: "Archive a task with the given id", Short: "Archive a task with the given id",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskArchive, Run: taskArchive,
Example: heredoc.Doc(`
$ asynq task archive --queue=myqueue --id=f1720682-f5a6-4db1-8953-4f48ae541d0f`),
} }
var taskDeleteCmd = &cobra.Command{ var taskDeleteCmd = &cobra.Command{
Use: "delete --queue=QUEUE --id=TASK_ID", Use: "delete --queue=<queue> --id=<task_id>",
Short: "Delete a task with the given id", Aliases: []string{"remove", "rm"},
Args: cobra.NoArgs, Short: "Delete a task with the given id",
Run: taskDelete, Args: cobra.NoArgs,
Run: taskDelete,
Example: heredoc.Doc(`
$ asynq task delete --queue=myqueue --id=f1720682-f5a6-4db1-8953-4f48ae541d0f`),
} }
var taskRunCmd = &cobra.Command{ var taskRunCmd = &cobra.Command{
Use: "run --queue=QUEUE --id=TASK_ID", Use: "run --queue=<queue> --id=<task_id>",
Short: "Run a task with the given id", Short: "Run a task with the given id",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskRun, Run: taskRun,
Example: heredoc.Doc(`
$ asynq task run --queue=myqueue --id=f1720682-f5a6-4db1-8953-4f48ae541d0f`),
}
var taskEnqueueCmd = &cobra.Command{
Use: "enqueue --type_name=footype --payload=barpayload",
Short: "Enqueue a task",
Args: cobra.NoArgs,
Run: taskEnqueue,
Example: heredoc.Doc(`
$ asynq task enqueue -t footype -l barpayload
$ asynq task enqueue -t footask -l barpayload --retry 3 --id f1720682-f5a6-4db1-8953-4f48ae541d0f --queue bazqueue --timeout 100s --deadline 2024-12-14T01:23:45Z --unique 100s --process_at 2024-12-14T01:22:05Z --process_in 100s --retention 5h --group baygroup`),
} }
var taskArchiveAllCmd = &cobra.Command{ var taskArchiveAllCmd = &cobra.Command{
Use: "archiveall --queue=QUEUE --state=STATE", Use: "archiveall --queue=<queue> --state=<state>",
Short: "Archive all tasks in the given state", Short: "Archive all tasks in the given state",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskArchiveAll, Run: taskArchiveAll,
Example: heredoc.Doc(`
$ asynq task archiveall --queue=myqueue --state=retry
$ asynq task archiveall --queue=myqueue --state=aggregating --group=mygroup`),
} }
var taskDeleteAllCmd = &cobra.Command{ var taskDeleteAllCmd = &cobra.Command{
Use: "deleteall --queue=QUEUE --state=STATE", Use: "deleteall --queue=<queue> --state=<state>",
Short: "Delete all tasks in the given state", Short: "Delete all tasks in the given state",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskDeleteAll, Run: taskDeleteAll,
Example: heredoc.Doc(`
$ asynq task deleteall --queue=myqueue --state=archived
$ asynq task deleteall --queue=myqueue --state=aggregating --group=mygroup`),
} }
var taskRunAllCmd = &cobra.Command{ var taskRunAllCmd = &cobra.Command{
Use: "runall --queue=QUEUE --state=STATE", Use: "runall --queue=<queue> --state=<state>",
Short: "Run all tasks in the given state", Short: "Run all tasks in the given state",
Args: cobra.NoArgs, Args: cobra.NoArgs,
Run: taskRunAll, Run: taskRunAll,
Example: heredoc.Doc(`
$ asynq task runall --queue=myqueue --state=retry
$ asynq task runall --queue=myqueue --state=aggregating --group=mygroup`),
} }
func taskList(cmd *cobra.Command, args []string) { func taskList(cmd *cobra.Command, args []string) {
@@ -192,6 +244,17 @@ func taskList(cmd *cobra.Command, args []string) {
listArchivedTasks(qname, pageNum, pageSize) listArchivedTasks(qname, pageNum, pageSize)
case "completed": case "completed":
listCompletedTasks(qname, pageNum, pageSize) listCompletedTasks(qname, pageNum, pageSize)
case "aggregating":
group, err := cmd.Flags().GetString("group")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if group == "" {
fmt.Println("Flag --group is required for listing aggregating tasks")
os.Exit(1)
}
listAggregatingTasks(qname, group, pageNum, pageSize)
default: default:
fmt.Printf("error: state=%q is not supported\n", state) fmt.Printf("error: state=%q is not supported\n", state)
os.Exit(1) os.Exit(1)
@@ -334,6 +397,27 @@ func listCompletedTasks(qname string, pageNum, pageSize int) {
}) })
} }
func listAggregatingTasks(qname, group string, pageNum, pageSize int) {
i := createInspector()
tasks, err := i.ListAggregatingTasks(qname, group, asynq.PageSize(pageSize), asynq.Page(pageNum))
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if len(tasks) == 0 {
fmt.Printf("No aggregating tasks in group %q \n", group)
return
}
printTable(
[]string{"ID", "Type", "Payload", "Group"},
func(w io.Writer, tmpl string) {
for _, t := range tasks {
fmt.Fprintf(w, tmpl, t.ID, t.Type, sprintBytes(t.Payload), t.Group)
}
},
)
}
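The same listing is available programmatically via Inspector.ListAggregatingTasks, which this subcommand wraps. A minimal sketch; the Redis address, queue, and group names are illustrative:
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// List aggregating tasks in a group directly through the Inspector API.
	// Redis address, queue, and group names are illustrative.
	insp := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer insp.Close()
	tasks, err := insp.ListAggregatingTasks("myqueue", "mygroup",
		asynq.PageSize(30), asynq.Page(1))
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tasks {
		fmt.Printf("%s  %s  group=%s\n", t.ID, t.Type, t.Group)
	}
}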
func taskCancel(cmd *cobra.Command, args []string) { func taskCancel(cmd *cobra.Command, args []string) {
i := createInspector() i := createInspector()
for _, id := range args { for _, id := range args {
@@ -465,6 +549,95 @@ func taskRun(cmd *cobra.Command, args []string) {
fmt.Println("task is now pending") fmt.Println("task is now pending")
} }
func taskEnqueue(cmd *cobra.Command, args []string) {
typeName, err := cmd.Flags().GetString("type_name")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
payload, err := cmd.Flags().GetString("payload")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
// For all of the optional flags, we need to explicitly check whether they were set or
// not; for consistency we want to use the defaults set in composeOptions() rather than
// the ones in the flag definitions.
opts := []asynq.Option{}
if cmd.Flags().Changed("retry") {
retry, err := cmd.Flags().GetInt("retry")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
opts = append(opts, asynq.MaxRetry(retry))
}
if cmd.Flags().Changed("queue") {
queue, err := cmd.Flags().GetString("queue")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
opts = append(opts, asynq.Queue(queue))
}
if cmd.Flags().Changed("id") {
id, err := cmd.Flags().GetString("id")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
opts = append(opts, asynq.TaskID(id))
}
if cmd.Flags().Changed("timeout") {
opts = append(opts, asynq.Timeout(getDuration(cmd, "timeout")))
}
if cmd.Flags().Changed("deadline") {
opts = append(opts, asynq.Deadline(getTime(cmd, "deadline")))
}
if cmd.Flags().Changed("unique") {
opts = append(opts, asynq.Unique(getDuration(cmd, "unique")))
}
if cmd.Flags().Changed("process_at") {
opts = append(opts, asynq.ProcessAt(getTime(cmd, "process_at")))
}
if cmd.Flags().Changed("process_in") {
opts = append(opts, asynq.ProcessIn(getDuration(cmd, "process_in")))
}
if cmd.Flags().Changed("retention") {
opts = append(opts, asynq.Retention(getDuration(cmd, "retention")))
}
if cmd.Flags().Changed("group") {
group, err := cmd.Flags().GetString("group")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
opts = append(opts, asynq.Group(group))
}
c := createClient()
task := asynq.NewTask(typeName, []byte(payload), opts...)
taskInfo, err := c.Enqueue(task)
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
fmt.Printf("Enqueued task %s to queue %s\n", taskInfo.ID, taskInfo.Queue)
}
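taskEnqueue is essentially a CLI front end for Client.Enqueue. A minimal programmatic equivalent, with illustrative Redis address, task type, payload, and option values:
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	// Programmatic equivalent of `asynq task enqueue`.
	// Redis address, type name, payload, and option values are illustrative.
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	task := asynq.NewTask("footype", []byte("barpayload"))
	info, err := client.Enqueue(task,
		asynq.Queue("bazqueue"),
		asynq.MaxRetry(3),
		asynq.Timeout(100*time.Second),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Enqueued task %s to queue %s\n", info.ID, info.Queue)
}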
func taskArchiveAll(cmd *cobra.Command, args []string) { func taskArchiveAll(cmd *cobra.Command, args []string) {
qname, err := cmd.Flags().GetString("queue") qname, err := cmd.Flags().GetString("queue")
if err != nil { if err != nil {
@@ -486,6 +659,17 @@ func taskArchiveAll(cmd *cobra.Command, args []string) {
n, err = i.ArchiveAllScheduledTasks(qname) n, err = i.ArchiveAllScheduledTasks(qname)
case "retry": case "retry":
n, err = i.ArchiveAllRetryTasks(qname) n, err = i.ArchiveAllRetryTasks(qname)
case "aggregating":
group, err := cmd.Flags().GetString("group")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
if group == "" {
fmt.Println("error: Flag --group is required for aggregating tasks")
os.Exit(1)
}
n, err = i.ArchiveAllAggregatingTasks(qname, group)
default: default:
fmt.Printf("error: unsupported state %q\n", state) fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1) os.Exit(1)
@@ -522,6 +706,17 @@ func taskDeleteAll(cmd *cobra.Command, args []string) {
n, err = i.DeleteAllArchivedTasks(qname) n, err = i.DeleteAllArchivedTasks(qname)
case "completed": case "completed":
n, err = i.DeleteAllCompletedTasks(qname) n, err = i.DeleteAllCompletedTasks(qname)
case "aggregating":
group, err := cmd.Flags().GetString("group")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
if group == "" {
fmt.Println("error: Flag --group is required for aggregating tasks")
os.Exit(1)
}
n, err = i.DeleteAllAggregatingTasks(qname, group)
default: default:
fmt.Printf("error: unsupported state %q\n", state) fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1) os.Exit(1)
@@ -554,6 +749,17 @@ func taskRunAll(cmd *cobra.Command, args []string) {
n, err = i.RunAllRetryTasks(qname) n, err = i.RunAllRetryTasks(qname)
case "archived": case "archived":
n, err = i.RunAllArchivedTasks(qname) n, err = i.RunAllArchivedTasks(qname)
case "aggregating":
group, err := cmd.Flags().GetString("group")
if err != nil {
fmt.Printf("error: %v\n", err)
os.Exit(1)
}
if group == "" {
fmt.Println("error: Flag --group is required for aggregating tasks")
os.Exit(1)
}
n, err = i.RunAllAggregatingTasks(qname, group)
default: default:
fmt.Printf("error: unsupported state %q\n", state) fmt.Printf("error: unsupported state %q\n", state)
os.Exit(1) os.Exit(1)
@@ -564,3 +770,4 @@ func taskRunAll(cmd *cobra.Command, args []string) {
} }
fmt.Printf("%d tasks are now pending\n", n) fmt.Printf("%d tasks are now pending\n", n)
} }


@@ -1,15 +1,55 @@
module github.com/hibiken/asynq/tools module github.com/hibiken/asynq/tools
go 1.13 go 1.22
require ( require (
github.com/fatih/color v1.9.0 github.com/MakeNowJust/heredoc/v2 v2.0.1
github.com/go-redis/redis/v8 v8.11.2 github.com/fatih/color v1.18.0
github.com/google/uuid v1.2.0 github.com/gdamore/tcell/v2 v2.5.1
github.com/hibiken/asynq v0.17.1 github.com/google/go-cmp v0.6.0
github.com/hibiken/asynq v0.25.0
github.com/hibiken/asynq/x v0.0.0-20220131170841-349f4c50fb1d
github.com/mattn/go-runewidth v0.0.16
github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/go-homedir v1.1.0
github.com/prometheus/client_golang v1.11.1
github.com/redis/go-redis/v9 v9.7.0
github.com/spf13/cobra v1.1.1 github.com/spf13/cobra v1.1.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.7.0 github.com/spf13/viper v1.7.0
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136
) )
replace github.com/hibiken/asynq => ./.. require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/gdamore/encoding v1.0.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.1.2 // indirect
github.com/pelletier/go-toml v1.2.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/spf13/afero v1.1.2 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/jwalterweatherman v1.0.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf // indirect
golang.org/x/text v0.3.8 // indirect
golang.org/x/time v0.7.0 // indirect
google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)


@@ -14,21 +14,33 @@ dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/MakeNowJust/heredoc/v2 v2.0.1 h1:rlCHh70XXXv7toz95ajQWOWQnN4WNLt0TdpZYIR/J6A=
github.com/MakeNowJust/heredoc/v2 v2.0.1/go.mod h1:6/2Abh5s+hc3g9nbWLe9ObDIOhaRrqsyY9MWy+4JdRM=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84= github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -46,19 +58,29 @@ github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s= github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/gdamore/encoding v1.0.0 h1:+7OoQ1Bc6eTm5niUzBa0Ctsh6JbMW6Ra+YNuAtDBdko=
github.com/gdamore/encoding v1.0.0/go.mod h1:alR0ol34c49FCSBLjhosxzcPHQbf2trDkoo5dl+VrEg=
github.com/gdamore/tcell/v2 v2.5.1 h1:zc3LPdpK184lBW7syF2a5C6MV827KmErk9jGVnmsl/I=
github.com/gdamore/tcell/v2 v2.5.1/go.mod h1:wSkrPaXoiIWZqW/g7Px4xc79di6FTcpB8tvaKJ6uGBo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-redis/redis/v8 v8.11.2 h1:WqlSpAwz8mxDSMCvbyz1Mkiqe0LE5OY4j3lgkvu1Ts0= github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-redis/redis/v8 v8.11.2/go.mod h1:DLomh7y2e3ggQXQLd1YgmvIfecPJoFl7WU5SOQ/r06M= github.com/go-redis/redis/v8 v8.11.2/go.mod h1:DLomh7y2e3ggQXQLd1YgmvIfecPJoFl7WU5SOQ/r06M=
github.com/go-redis/redis/v8 v8.11.4/go.mod h1:2Z2wHZXdQpCDXEGzqMockDpNyYvi2l4Pxt6RJr792+w=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -75,8 +97,12 @@ github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrU
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@@ -84,14 +110,20 @@ github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
@@ -121,33 +153,51 @@ github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ= github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I= github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc= github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hibiken/asynq v0.19.0/go.mod h1:tyc63ojaW8SJ5SBm8mvI4DDONsguP5HE85EEl4Qr5Ig=
github.com/hibiken/asynq v0.25.0 h1:VCPyRRrrjFChsTSI8x5OCPu51MlEz6Rk+1p0kHKnZug=
github.com/hibiken/asynq v0.25.0/go.mod h1:DYQ1etBEl2Y+uSkqFElGYbk3M0ujLVwCfWE+TlvxtEk=
github.com/hibiken/asynq/x v0.0.0-20220131170841-349f4c50fb1d h1:Er+U+9PmnyRHRDQjSjRQ24HoWvOY7w9Pk7bUPYM3Ags=
github.com/hibiken/asynq/x v0.0.0-20220131170841-349f4c50fb1d/go.mod h1:VmxwMfMKyb6gyv8xG0oOBMXIhquWKPx+zPtbVBd2Q1s=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM= github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo= github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU= github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4= github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU= github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA= github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.11 h1:FxPOTFNqGkuDUGi3H/qkUbQO4ZiBa2brKq5r0l8TGeM= github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE= github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc= github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
@@ -160,47 +210,74 @@ github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0Qu
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE= github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78= github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.15.0 h1:1V1NfVQR87RtWAgp1lv9JZJ5Jap+XFGKPi00andXGi4=
github.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg= github.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.5 h1:7n6FEkpFmfCoo2t+YYqXH0evK+a9ICQz0xcAy9dYcaQ=
github.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48= github.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48=
github.com/onsi/gomega v1.16.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.11.1 h1:+4eQaD7vAZ6DsfsxB15hbE0odUjGI5ARs9yskGu1v4s=
github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0 h1:iMAkS2TDoNWnKM+Kopnx/8tnEStIfpYA0ur0xQzzhMQ=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
@@ -210,8 +287,9 @@ github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasO
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.1.1 h1:KfztREH0tPxJJ+geloSLaAkaPkr4ki2Er5quFV1TDo4=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
@@ -225,6 +303,8 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
@@ -236,8 +316,9 @@ go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v0.10.0 h1:G3eWbSNIskeRqtsN/1uI5B+eP73y3JUuBsv9AZjehb4=
go.uber.org/goleak v0.10.0/go.mod h1:VCZuO8V8mFPlL0F5J5GK1rtHV3DrFcQ1R8ryq7FK0aI=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -251,6 +332,7 @@ golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136 h1:A1gGSx58LAGVHUUsOf7IiR0u8Xb6W51gRwfDBhkdcaw=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
@@ -280,11 +362,13 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb h1:eBmm0M9fYhWpKZLjQUUKka/LtIxf46G4fxeEz5KJr9U=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -293,7 +377,9 @@ golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -302,29 +388,46 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091 h1:DMyOG0U+gKfu8JZzg2UQe9MeaC1X+xQWlAKcRnjxjCw=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220318055525-2edf467146b5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8 h1:nAL+RVCQ9uMn3vJZbV+MRnydTJFPf8qqY42YiA6MrqY=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -348,7 +451,6 @@ golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
@@ -382,8 +484,11 @@ google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzi
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -394,14 +499,16 @@ gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMy
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=


@@ -0,0 +1,56 @@
package main

import (
    "flag"
    "fmt"
    "log"
    "net/http"

    "github.com/hibiken/asynq"
    "github.com/hibiken/asynq/x/metrics"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/collectors"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// Declare command-line flags.
// These variables are bound to flags in init().
var (
    flagRedisAddr     string
    flagRedisDB       int
    flagRedisPassword string
    flagRedisUsername string
    flagPort          int
)

func init() {
    flag.StringVar(&flagRedisAddr, "redis-addr", "127.0.0.1:6379", "host:port of redis server to connect to")
    flag.IntVar(&flagRedisDB, "redis-db", 0, "redis DB number to use")
    flag.StringVar(&flagRedisPassword, "redis-password", "", "password used to connect to redis server")
    flag.StringVar(&flagRedisUsername, "redis-username", "", "username used to connect to redis server")
    flag.IntVar(&flagPort, "port", 9876, "port to use for the HTTP server")
}

func main() {
    flag.Parse()

    // Using NewPedanticRegistry here to test the implementation of Collectors and Metrics.
    reg := prometheus.NewPedanticRegistry()

    inspector := asynq.NewInspector(asynq.RedisClientOpt{
        Addr:     flagRedisAddr,
        DB:       flagRedisDB,
        Password: flagRedisPassword,
        Username: flagRedisUsername,
    })

    reg.MustRegister(
        metrics.NewQueueMetricsCollector(inspector),
        // Add the standard process and go metrics to the registry.
        collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
        collectors.NewGoCollector(),
    )

    http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
    log.Printf("exporter server is listening on port: %d\n", flagPort)
    log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", flagPort), nil))
}
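The exporter above serves queue metrics in the Prometheus text exposition format. As a quick smoke test, a minimal client can fetch the endpoint; this is only a sketch, and it assumes the exporter is running locally on its default port 9876:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Fetch the metrics page served by the exporter (default port 9876).
    resp, err := http.Get("http://localhost:9876/metrics")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    // Prints series such as asynq_queue_size{queue="default"} in text exposition format.
    fmt.Println(string(body))
}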

x/go.mod Normal file

@@ -0,0 +1,25 @@
module github.com/hibiken/asynq/x

go 1.22

require (
    github.com/google/uuid v1.6.0
    github.com/hibiken/asynq v0.25.0
    github.com/prometheus/client_golang v1.20.5
    github.com/redis/go-redis/v9 v9.7.0
)

require (
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/cespare/xxhash/v2 v2.3.0 // indirect
    github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
    github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
    github.com/prometheus/client_model v0.6.1 // indirect
    github.com/prometheus/common v0.55.0 // indirect
    github.com/prometheus/procfs v0.15.1 // indirect
    github.com/robfig/cron/v3 v3.0.1 // indirect
    github.com/spf13/cast v1.7.0 // indirect
    golang.org/x/sys v0.26.0 // indirect
    golang.org/x/time v0.7.0 // indirect
    google.golang.org/protobuf v1.35.1 // indirect
)

x/go.sum Normal file

@@ -0,0 +1,50 @@
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hibiken/asynq v0.25.0 h1:VCPyRRrrjFChsTSI8x5OCPu51MlEz6Rk+1p0kHKnZug=
github.com/hibiken/asynq v0.25.0/go.mod h1:DYQ1etBEl2Y+uSkqFElGYbk3M0ujLVwCfWE+TlvxtEk=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/redis/go-redis/v9 v9.7.0 h1:HhLSs+B6O021gwzl+locl0zEDnyNkxMtf/Z3NNBMa9E=
github.com/redis/go-redis/v9 v9.7.0/go.mod h1:f6zhXITC7JUJIlPEiBOTXxJgPLdZcA93GewI7inzyWw=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=

x/metrics/metrics.go Normal file

@@ -0,0 +1,190 @@
// Package metrics provides implementations of prometheus.Collector to collect Asynq queue metrics.
package metrics

import (
    "fmt"
    "log"

    "github.com/hibiken/asynq"
    "github.com/prometheus/client_golang/prometheus"
)

// Namespace used in fully-qualified metrics names.
const namespace = "asynq"

// QueueMetricsCollector gathers queue metrics.
// It implements the prometheus.Collector interface.
//
// All metrics exported from this collector have the prefix "asynq".
type QueueMetricsCollector struct {
    inspector *asynq.Inspector
}

// collectQueueInfo gathers QueueInfo of all queues.
// Since this operation is expensive, it should be called at most once per collection.
func (qmc *QueueMetricsCollector) collectQueueInfo() ([]*asynq.QueueInfo, error) {
    qnames, err := qmc.inspector.Queues()
    if err != nil {
        return nil, fmt.Errorf("failed to get queue names: %v", err)
    }
    infos := make([]*asynq.QueueInfo, len(qnames))
    for i, qname := range qnames {
        qinfo, err := qmc.inspector.GetQueueInfo(qname)
        if err != nil {
            return nil, fmt.Errorf("failed to get queue info: %v", err)
        }
        infos[i] = qinfo
    }
    return infos, nil
}

// Descriptors used by QueueMetricsCollector.
var (
    tasksQueuedDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "tasks_enqueued_total"),
        "Number of tasks enqueued; broken down by queue and state.",
        []string{"queue", "state"}, nil,
    )
    queueSizeDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "queue_size"),
        "Number of tasks in a queue",
        []string{"queue"}, nil,
    )
    queueLatencyDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "queue_latency_seconds"),
        "Number of seconds the oldest pending task is waiting in pending state to be processed.",
        []string{"queue"}, nil,
    )
    queueMemUsgDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "queue_memory_usage_approx_bytes"),
        "Amount of memory used by a given queue, in bytes (approximate value obtained by sampling).",
        []string{"queue"}, nil,
    )
    tasksProcessedTotalDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "tasks_processed_total"),
        "Number of tasks processed (both succeeded and failed); broken down by queue",
        []string{"queue"}, nil,
    )
    tasksFailedTotalDesc = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "tasks_failed_total"),
        "Number of tasks failed; broken down by queue",
        []string{"queue"}, nil,
    )
    pausedQueues = prometheus.NewDesc(
        prometheus.BuildFQName(namespace, "", "queue_paused_total"),
        "1 if the queue is paused, 0 otherwise",
        []string{"queue"}, nil,
    )
)

func (qmc *QueueMetricsCollector) Describe(ch chan<- *prometheus.Desc) {
    prometheus.DescribeByCollect(qmc, ch)
}

func (qmc *QueueMetricsCollector) Collect(ch chan<- prometheus.Metric) {
    queueInfos, err := qmc.collectQueueInfo()
    if err != nil {
        log.Printf("Failed to collect metrics data: %v", err)
    }
    for _, info := range queueInfos {
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Active),
            info.Queue,
            "active",
        )
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Pending),
            info.Queue,
            "pending",
        )
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Scheduled),
            info.Queue,
            "scheduled",
        )
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Retry),
            info.Queue,
            "retry",
        )
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Archived),
            info.Queue,
            "archived",
        )
        ch <- prometheus.MustNewConstMetric(
            tasksQueuedDesc,
            prometheus.GaugeValue,
            float64(info.Completed),
            info.Queue,
            "completed",
        )
        ch <- prometheus.MustNewConstMetric(
            queueSizeDesc,
            prometheus.GaugeValue,
            float64(info.Size),
            info.Queue,
        )
        ch <- prometheus.MustNewConstMetric(
            queueLatencyDesc,
            prometheus.GaugeValue,
            info.Latency.Seconds(),
            info.Queue,
        )
        ch <- prometheus.MustNewConstMetric(
            queueMemUsgDesc,
            prometheus.GaugeValue,
            float64(info.MemoryUsage),
            info.Queue,
        )
        ch <- prometheus.MustNewConstMetric(
            tasksProcessedTotalDesc,
            prometheus.CounterValue,
            float64(info.ProcessedTotal),
            info.Queue,
        )
        ch <- prometheus.MustNewConstMetric(
            tasksFailedTotalDesc,
            prometheus.CounterValue,
            float64(info.FailedTotal),
            info.Queue,
        )

        pausedValue := 0 // zero to indicate "not paused"
        if info.Paused {
            pausedValue = 1
        }
        ch <- prometheus.MustNewConstMetric(
            pausedQueues,
            prometheus.GaugeValue,
            float64(pausedValue),
            info.Queue,
        )
    }
}

// NewQueueMetricsCollector returns a collector that exports metrics about Asynq queues.
func NewQueueMetricsCollector(inspector *asynq.Inspector) *QueueMetricsCollector {
    return &QueueMetricsCollector{inspector: inspector}
}
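Since QueueMetricsCollector satisfies prometheus.Collector, it can be exercised with the testutil helpers from prometheus/client_golang. Below is a hypothetical test sketch (not part of the package) that lints the collector's metric descriptors; it assumes a Redis instance reachable at 127.0.0.1:6379, since Collect talks to Redis through the Inspector:

package metrics_test

import (
    "testing"

    "github.com/hibiken/asynq"
    "github.com/hibiken/asynq/x/metrics"
    "github.com/prometheus/client_golang/prometheus/testutil"
)

func TestQueueMetricsCollectorLint(t *testing.T) {
    // Hypothetical test wiring; assumes a local Redis at 127.0.0.1:6379.
    inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "127.0.0.1:6379"})
    c := metrics.NewQueueMetricsCollector(inspector)

    // CollectAndLint gathers the collector's output and runs promlint over it.
    problems, err := testutil.CollectAndLint(c)
    if err != nil {
        t.Fatal(err)
    }
    for _, p := range problems {
        t.Errorf("metric %q: %s", p.Metric, p.Text)
    }
}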


@@ -7,9 +7,9 @@ import (
     "strings"
     "time"

-    "github.com/go-redis/redis/v8"
     "github.com/hibiken/asynq"
     asynqcontext "github.com/hibiken/asynq/internal/context"
+    "github.com/redis/go-redis/v9"
 )

 // NewSemaphore creates a counting Semaphore for the given scope with the given number of tokens.
@@ -110,5 +110,5 @@ func (s *Semaphore) Close() error {
 }

 func semaphoreKey(scope string) string {
-    return fmt.Sprintf("asynq:sema:%s", scope)
+    return "asynq:sema:" + scope
 }


@@ -8,11 +8,11 @@ import (
     "testing"
     "time"

-    "github.com/go-redis/redis/v8"
     "github.com/google/uuid"
     "github.com/hibiken/asynq"
     "github.com/hibiken/asynq/internal/base"
     asynqcontext "github.com/hibiken/asynq/internal/context"
+    "github.com/redis/go-redis/v9"
 )

 var (
@@ -91,7 +91,7 @@ func TestNewSemaphore_Acquire(t *testing.T) {
     maxConcurrency: 3,
     taskIDs:        []string{uuid.NewString(), uuid.NewString()},
     ctxFunc: func(id string) (context.Context, context.CancelFunc) {
-        return asynqcontext.New(&base.TaskMessage{
+        return asynqcontext.New(context.Background(), &base.TaskMessage{
             ID:    id,
             Queue: "task-1",
         }, time.Now().Add(time.Second))
@@ -104,7 +104,7 @@ func TestNewSemaphore_Acquire(t *testing.T) {
     maxConcurrency: 3,
     taskIDs:        []string{uuid.NewString(), uuid.NewString(), uuid.NewString(), uuid.NewString()},
     ctxFunc: func(id string) (context.Context, context.CancelFunc) {
-        return asynqcontext.New(&base.TaskMessage{
+        return asynqcontext.New(context.Background(), &base.TaskMessage{
             ID:    id,
             Queue: "task-2",
         }, time.Now().Add(time.Second))
@@ -211,7 +211,7 @@ func TestNewSemaphore_Acquire_StaleToken(t *testing.T) {
     // adding a set member to mimic the case where token is acquired but the goroutine crashed,
     // in which case, the token will not be explicitly removed and should be present already
-    rc.ZAdd(context.Background(), semaphoreKey("stale-token"), &redis.Z{
+    rc.ZAdd(context.Background(), semaphoreKey("stale-token"), redis.Z{
         Score:  float64(time.Now().Add(-10 * time.Second).Unix()),
         Member: taskID,
     })
@@ -219,7 +219,7 @@ func TestNewSemaphore_Acquire_StaleToken(t *testing.T) {
     sema := NewSemaphore(opt, "stale-token", 1)
     defer sema.Close()

-    ctx, cancel := asynqcontext.New(&base.TaskMessage{
+    ctx, cancel := asynqcontext.New(context.Background(), &base.TaskMessage{
         ID:    taskID,
         Queue: "task-1",
     }, time.Now().Add(time.Second))
@@ -248,7 +248,7 @@ func TestNewSemaphore_Release(t *testing.T) {
     name:    "task-5",
     taskIDs: []string{uuid.NewString()},
     ctxFunc: func(id string) (context.Context, context.CancelFunc) {
-        return asynqcontext.New(&base.TaskMessage{
+        return asynqcontext.New(context.Background(), &base.TaskMessage{
             ID:    id,
             Queue: "task-3",
         }, time.Now().Add(time.Second))
@@ -259,7 +259,7 @@ func TestNewSemaphore_Release(t *testing.T) {
     name:    "task-6",
     taskIDs: []string{uuid.NewString(), uuid.NewString()},
     ctxFunc: func(id string) (context.Context, context.CancelFunc) {
-        return asynqcontext.New(&base.TaskMessage{
+        return asynqcontext.New(context.Background(), &base.TaskMessage{
             ID:    id,
             Queue: "task-4",
         }, time.Now().Add(time.Second))
@@ -277,9 +277,9 @@ func TestNewSemaphore_Release(t *testing.T) {
         t.Errorf("%s;\nredis.UniversalClient.Del() got error %v", tt.desc, err)
     }

-    var members []*redis.Z
+    var members []redis.Z
     for i := 0; i < len(tt.taskIDs); i++ {
-        members = append(members, &redis.Z{
+        members = append(members, redis.Z{
             Score:  float64(time.Now().Add(time.Duration(i) * time.Second).Unix()),
             Member: tt.taskIDs[i],
         })
@@ -337,7 +337,7 @@ func TestNewSemaphore_Release_Error(t *testing.T) {
     name:    "task-8",
     taskIDs: []string{uuid.NewString()},
     ctxFunc: func(_ string) (context.Context, context.CancelFunc) {
-        return asynqcontext.New(&base.TaskMessage{
+        return asynqcontext.New(context.Background(), &base.TaskMessage{
             ID:    testID,
             Queue: "task-4",
         }, time.Now().Add(time.Second))
@@ -356,9 +356,9 @@ func TestNewSemaphore_Release_Error(t *testing.T) {
         t.Errorf("%s;\nredis.UniversalClient.Del() got error %v", tt.desc, err)
     }

-    var members []*redis.Z
+    var members []redis.Z
     for i := 0; i < len(tt.taskIDs); i++ {
-        members = append(members, &redis.Z{
+        members = append(members, redis.Z{
             Score:  float64(time.Now().Add(time.Duration(i) * time.Second).Unix()),
             Member: tt.taskIDs[i],
         })
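The diff above tracks the go-redis v8 to v9 migration: the import path moves to github.com/redis/go-redis/v9, ZAdd takes redis.Z members by value instead of by pointer, and the internal asynqcontext.New helper gains a parent context.Context as its first argument. A minimal standalone sketch of the new ZAdd call shape (assuming a local Redis at 127.0.0.1:6379):

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

func main() {
    rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
    defer rdb.Close()

    ctx := context.Background()
    // v8: rdb.ZAdd(ctx, "myzset", &redis.Z{Score: 1, Member: "a"})
    // v9: members are passed by value.
    if err := rdb.ZAdd(ctx, "myzset", redis.Z{Score: 1, Member: "a"}).Err(); err != nil {
        fmt.Println("zadd:", err)
    }
}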