dcron: a lightweight distributed job scheduler library
MIT License
A lightweight distributed job scheduler library based on Redis or etcd.
Dcron uses Redis or etcd to synchronize the service list and the state of each service, and uses consistent hashing to select the node that executes each task.

If it were implemented with a distributed lock instead, it would depend on the system time of each node, which causes problems when clocks are not synchronized:

- If a task finishes faster than the clock skew between nodes, the task is executed again: one node releases the lock after execution, and the lock is then acquired by another node that has only just reached the scheduled time.
- Even with only a small clock skew, the node with the fastest clock always acquires the lock first, so every task ends up being executed only by that node.
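To make the node-selection idea concrete, here is a minimal, self-contained sketch of a consistent-hash ring. This is not dcron's actual implementation; the node names, replica count, and FNV hash are illustrative choices. The point is that every node can independently compute the same task-to-node mapping without any locks or clock agreement.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
)

// Ring is a minimal consistent-hash ring (illustrative only).
type Ring struct {
	keys  []uint32          // sorted virtual-node hashes
	nodes map[uint32]string // hash -> node name
}

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing places `replicas` virtual nodes per real node on the ring.
func NewRing(nodes []string, replicas int) *Ring {
	r := &Ring{nodes: map[uint32]string{}}
	for _, n := range nodes {
		for i := 0; i < replicas; i++ {
			k := hash32(n + "#" + strconv.Itoa(i))
			r.keys = append(r.keys, k)
			r.nodes[k] = n
		}
	}
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
	return r
}

// Pick returns the node responsible for the given task name: the first
// virtual node clockwise from the task's hash.
func (r *Ring) Pick(task string) string {
	h := hash32(task)
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.keys[i]]
}

func main() {
	ring := NewRing([]string{"node-a", "node-b", "node-c"}, 50)
	for _, task := range []string{"test1", "test2", "test3"} {
		fmt.Printf("%s -> %s\n", task, ring.Pick(task))
	}
}
```

Because the mapping is a pure function of the synced node list and the task name, adding or removing a node only remaps the tasks that hashed to that node, leaving the rest untouched.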
Set the ServiceName and initialize dcron. Nodes that share the same ServiceName form the same task unit.

```go
redisCli := redis.NewClient(&redis.Options{
	Addr: DefaultRedisAddr,
})
drv := redisdriver.NewDriver(redisCli)
dcron := NewDcron("server1", drv)
```
Add a task with its TaskName; the TaskName is the primary key of each task.

```go
dcron.AddFunc("test1", "*/3 * * * *", func() {
	fmt.Println("execute test1 task", time.Now().Format("15:04:05"))
})
```
```go
// You can use Start() or Run() to start dcron.
// Non-blocking start.
dcron.Start()
// Blocking start.
dcron.Run()
```
Since v0.6.0, Dcron's drivers, such as etcddriver and redisdriver, have been split out of the main repo and are maintained in independent repos. For details, please refer to dcron-contrib.
Dcron is based on https://github.com/robfig/cron. Use NewDcron to initialize Dcron; any arguments after the second one are passed through to cron. For example, to enable second-level cron expressions:

```go
dcron := NewDcron("server1", drv, cron.WithSeconds())
```
Alternatively, you can use NewDcronWithOption to initialize Dcron and set the logger or other options. The available options are listed at: https://github.com/libi/dcron/blob/master/option.go
The ServiceName defines a set of tasks and can be understood as the boundary of task allocation and scheduling. Multiple nodes using the same ServiceName are treated as one task group: tasks in the group are distributed evenly across its nodes and will not be executed repeatedly.
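The "distributed evenly, never duplicated" property follows from every node agreeing on a single owner per task. The sketch below illustrates this with rendezvous (highest-random-weight) hashing, a deliberately simpler stand-in for the consistent hashing dcron actually uses; the node and task names are made up. Each node evaluates every task at trigger time but only runs the ones it owns, so each task fires on exactly one node.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// owner picks the node responsible for a task via rendezvous hashing:
// the node whose combined hash with the task name is highest wins.
// Every node computes the same answer from the shared node list.
func owner(task string, nodes []string) string {
	var best string
	var bestScore uint32
	for _, n := range nodes {
		h := fnv.New32a()
		h.Write([]byte(n + "/" + task))
		if s := h.Sum32(); best == "" || s > bestScore {
			best, bestScore = n, s
		}
	}
	return best
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	tasks := []string{"task1", "task2", "task3", "task4", "task5", "task6"}
	// Each node checks ownership and runs only its own tasks,
	// so every task is executed exactly once across the group.
	for _, n := range nodes {
		for _, t := range tasks {
			if owner(t, nodes) == n {
				fmt.Printf("%s runs %s\n", n, t)
			}
		}
	}
}
```

If a node joins or leaves, the synced node list changes and ownership is recomputed the same way on every surviving node, which is what keeps the group's schedule consistent without any coordination at trigger time.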