1. Kernel timers:
The kernel has a built-in timer facility: once a set time is reached, an operation is carried out, much like a timer interrupt on a microcontroller.
The header to include is linux/timer.h, which defines struct timer_list:
```c
struct timer_list {
	/*
	 * All fields that change during normal runtime grouped to the
	 * same cacheline
	 */
	struct list_head entry;
	unsigned long expires;
	struct tvec_base *base;

	void (*function)(unsigned long);
	unsigned long data;

	int slack;

#ifdef CONFIG_TIMER_STATS
	int start_pid;
	void *start_site;
	char start_comm[16];
#endif
#ifdef CONFIG_LOCKDEP
	struct lockdep_map lockdep_map;
#endif
};
```
void (*function)(unsigned long); is the callback that actually runs;
unsigned long data; is passed to function as its argument.
Usage:
(1) Define a timer_list object, timer.
(2) Initialize timer's data, function and expires fields.
(3) Call init_timer(&timer).
Helper macros can also do this. As an example, on an amlogic platform, the key_pad driver uses a timer to keep polling which key is pressed:
```c
setup_timer(&kp->timer, kp_timer_sr, (unsigned long)kp);
mod_timer(&kp->timer, jiffies + msecs_to_jiffies(100));
```

(The data field is an unsigned long, so the cast here should be (unsigned long), not (unsigned int) as in the original driver.)
where setup_timer is defined as:
```c
#define setup_timer(timer, fn, data) \
	__setup_timer((timer), (fn), (data), 0)
```
```c
#define __setup_timer(_timer, _fn, _data, _flags)	\
	do {						\
		__init_timer((_timer), (_flags));	\
		(_timer)->function = (_fn);		\
		(_timer)->data = (_data);		\
	} while (0)
```
mod_timer(&kp->timer, jiffies + msecs_to_jiffies(100)) then (re)arms the timer; it is implemented as:
```c
int mod_timer(struct timer_list *timer, unsigned long expires)
{
	expires = apply_slack(timer, expires);

	/*
	 * This is a common optimization triggered by the
	 * networking code - if the timer is re-modified
	 * to be the same thing then just return:
	 */
	if (timer_pending(timer) && timer->expires == expires)
		return 1;

	return __mod_timer(timer, expires, false, TIMER_NOT_PINNED);
}
```
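Putting the pieces above together, here is a minimal sketch of a key_pad-style polling driver built on this legacy timer API (pre-4.15 kernels). The names my_dev, my_poll and POLL_MS are hypothetical, made up for illustration:

```c
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

#define POLL_MS 100  /* hypothetical polling period */

struct my_dev {
	struct timer_list timer;
	/* ... device state ... */
};

static struct my_dev g_dev;

static void my_poll(unsigned long data)
{
	struct my_dev *dev = (struct my_dev *)data;

	/* poll the hardware here; timer callbacks run in softirq
	 * context, so no sleeping is allowed */

	/* re-arm the timer to make the polling periodic */
	mod_timer(&dev->timer, jiffies + msecs_to_jiffies(POLL_MS));
}

static int __init my_init(void)
{
	setup_timer(&g_dev.timer, my_poll, (unsigned long)&g_dev);
	mod_timer(&g_dev.timer, jiffies + msecs_to_jiffies(POLL_MS));
	return 0;
}

static void __exit my_exit(void)
{
	/* del_timer_sync() also waits for a running callback to finish */
	del_timer_sync(&g_dev.timer);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```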
2. The small-task mechanism: tasklets
```c
/* Tasklets --- multithreaded analogue of BHs.

   Main feature differing them of generic softirqs: tasklet
   is running only on one CPU simultaneously.

   Main feature differing them of BHs: different tasklets
   may be run simultaneously on different CPUs.

   Properties:
   * If tasklet_schedule() is called, then tasklet is guaranteed
     to be executed on some cpu at least once after this.
   * If the tasklet is already scheduled, but its execution is still
     not started, it will be executed only once.
   * If this tasklet is already running on another CPU (or schedule is called
     from tasklet itself), it is rescheduled for later.
   * Tasklet is strictly serialized wrt itself, but not
     wrt another tasklets. If client needs some intertask synchronization,
     he makes it with spinlocks.
 */

struct tasklet_struct
{
	struct tasklet_struct *next;
	unsigned long state;
	atomic_t count;
	void (*func)(unsigned long);
	unsigned long data;
};
```
Looking at the fields of tasklet_struct, the important ones are data and the function pointer func. Usage:
```c
#define DECLARE_TASKLET(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(0), func, data }

#define DECLARE_TASKLET_DISABLED(name, func, data) \
struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data }
```
name is the name we give our tasklet, func is the function that actually runs, and data is handed to func as its argument. Usage is:
DECLARE_TASKLET(my_tasklet, my_func, 0);
or
DECLARE_TASKLET_DISABLED(my_tasklet, my_func, 0);
The only difference between the two macros is the initial value of the reference count count: one initializes it to 0, the other to 1. When count is 0 the tasklet is enabled; when it is 1 it is disabled. The same count is used by:
```c
static inline void tasklet_disable(struct tasklet_struct *t)
{
	tasklet_disable_nosync(t);
	tasklet_unlock_wait(t);
	smp_mb();
}
```
```c
static inline void tasklet_enable(struct tasklet_struct *t)
{
	smp_mb__before_atomic_dec();
	atomic_dec(&t->count);
}
```
tasklet_enable and tasklet_disable likewise work on count: tasklet_enable decrements it, and tasklet_disable increments it (inside tasklet_disable_nosync). Calls to enable and disable must therefore come in matching pairs, or the count gets out of balance.
```c
static inline void tasklet_schedule(struct tasklet_struct *t)
{
	if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state))
		__tasklet_schedule(t);
}
```
Three more functions remain:
```c
extern void tasklet_kill(struct tasklet_struct *t);
extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
extern void tasklet_init(struct tasklet_struct *t,
			 void (*func)(unsigned long), unsigned long data);
```
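As a usage sketch, a driver typically schedules its tasklet from the interrupt handler and kills it on unload. The names my_tasklet_func and my_irq_handler below are hypothetical:

```c
#include <linux/interrupt.h>

/* deferred half: runs in softirq context, so it must not sleep */
static void my_tasklet_func(unsigned long data)
{
	pr_info("tasklet ran, data=%lu\n", data);
}

static DECLARE_TASKLET(my_tasklet, my_tasklet_func, 0);

/* hard-irq half: do the minimum, defer the rest to the tasklet */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
	tasklet_schedule(&my_tasklet);
	return IRQ_HANDLED;
}

/* on module exit, wait for any pending run and remove it: */
static void my_cleanup(void)
{
	tasklet_kill(&my_tasklet);
}
```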
There are further variants that schedule at higher priority, but in my view they are rarely needed; even the in-kernel comment says as much:
```c
/*
 * This version avoids touching any other tasklets. Needed for kmemcheck
 * in order not to take any page faults while enqueueing this tasklet;
 * consider VERY carefully whether you really need this or
 * tasklet_hi_schedule()...
 */
```
3. Workqueues:
Queues exist in many systems - Windows, uC/OS, FreeRTOS and Linux all have them. The idea is like a manager handing you all your tasks at once, laying them on your desk, and you working through them one by one.
A workqueue feels similar to a tasklet: you submit a task and the system schedules it. But there are several notable differences:
!: Execution context. A tasklet runs in softirq context and is atomic - its handler must not sleep. Workqueue items run in kernel process context with no such restriction, so a work function may sleep; this is the workqueue's biggest advantage.
!!: A tasklet runs as soon as possible, while queued work may be subject to considerably more latency.
!!!: Kernel code can request that a work item run only after a specified delay.
(1) Usage method 1:
!: create_workqueue(char *name), where name is a string of your choosing for the workqueue:

```c
#define create_workqueue(name) \
	alloc_workqueue((name), WQ_MEM_RECLAIM, 1)
```
Or call create_singlethread_workqueue(name) to create a single-threaded workqueue:

```c
#define create_singlethread_workqueue(name) \
	alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
```
!!: Submit a task to the workqueue.
At compile time, use DECLARE_WORK(n, f), where n is the name of the work item and f is the function actually called; f's prototype takes a struct work_struct *work. For example:

```c
static DECLARE_WORK(binder_deferred_work, binder_deferred_func);
```

The macro is defined as:

```c
#define DECLARE_WORK(n, f) \
	struct work_struct n = __WORK_INITIALIZER(n, f)
```
At run time, use INIT_WORK(struct work_struct *work, void (*func)(struct work_struct *work)):

```c
#define INIT_WORK(_work, _func) \
	do { \
		__INIT_WORK((_work), (_func), 0); \
	} while (0)
```
(Older kernels also provide PREPARE_WORK(work, func) to re-bind the function.)
!!!: queue_work(struct workqueue_struct *wq, struct work_struct *work), where wq is the workqueue you created and work is the pointer to the initialized work_struct:
```c
/**
 * queue_work - queue work on a workqueue
 * @wq: workqueue to use
 * @work: work to queue
 *
 * Returns %false if @work was already on a queue, %true otherwise.
 *
 * We queue the work to the CPU on which it was submitted, but if the CPU dies
 * it can be processed by another CPU.
 */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
```
queue_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay), where wq is the workqueue pointer, dwork is the delayed work item, and delay (in jiffies) is the minimum time to wait before the work runs:
```c
/**
 * queue_delayed_work - queue work on a workqueue after delay
 * @wq: workqueue to use
 * @dwork: delayable work to queue
 * @delay: number of jiffies to wait before queueing
 *
 * Equivalent to queue_delayed_work_on() but tries to use the local CPU.
 */
static inline bool queue_delayed_work(struct workqueue_struct *wq,
				      struct delayed_work *dwork,
				      unsigned long delay)
{
	return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
}
```
!!!!: Cancel work:
cancel_work_sync(struct work_struct *work);
bool cancel_delayed_work(struct delayed_work *dwork);
cancel_delayed_work does not guarantee that the handler is no longer running; to wait for it to finish, follow up with:
flush_work(struct work_struct *work);
bool flush_delayed_work(struct delayed_work *dwork);
!!!!!: Destroy the workqueue:

```c
void destroy_workqueue(struct workqueue_struct *wq);
```
A real example from the kernel sources:

```c
struct workqueue_struct *stk_acc_wq;
struct work_struct stk_acc_work;
void stk_acc_poll_work_func(struct work_struct *work);

stk->stk_acc_wq = create_singlethread_workqueue("stk_acc_wq");
INIT_WORK(&stk->stk_acc_work, stk_acc_poll_work_func);

queue_work(stk->stk_acc_wq, &stk->stk_acc_work);
```
4. The shared workqueue:
Some tasks are not needed all the time; only occasionally is there work to do. The touch-panel and key drivers I often deal with are like this: the interrupt comes in, then the deferred task gets scheduled. Usage:
```c
INIT_WORK(&(kp->work_update), update_work_func);
schedule_work(&(kp->work_update));
```
The first call is INIT_WORK(struct work_struct *, void (*func)(struct work_struct *)); the second is bool schedule_work(struct work_struct *work). Curiously, no workqueue is mentioned here - the code below explains why:
```c
/**
 * schedule_work - put work task in global workqueue
 * @work: job to be done
 *
 * Returns %false if @work was already on the kernel-global workqueue and
 * %true otherwise.
 *
 * This puts a job in the kernel-global workqueue if it was not already
 * queued and leaves it in the same position on the kernel-global
 * workqueue otherwise.
 */
static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}
```
You can also call bool schedule_delayed_work(struct delayed_work *dwork, unsigned long delay); the rest works much the same - read the source files a few times and it all falls into place.
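The shared-queue pattern described here can be sketched as follows, with hypothetical names (a touch-panel interrupt deferring its bus transfers to the kernel-global workqueue):

```c
#include <linux/workqueue.h>
#include <linux/interrupt.h>

static struct work_struct work_update;

static void update_work_func(struct work_struct *work)
{
	/* runs in a worker thread of the kernel-global workqueue
	 * (system_wq), so it may sleep, e.g. for I2C transfers */
}

static irqreturn_t tp_irq_handler(int irq, void *dev_id)
{
	/* hard-irq context: just hand the real work to system_wq */
	schedule_work(&work_update);
	return IRQ_HANDLED;
}

static int tp_probe(void)
{
	INIT_WORK(&work_update, update_work_func);
	/* ... request_irq(irq, tp_irq_handler, ...) ... */
	return 0;
}
```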