1 <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook V3.1//EN"[]>
3 <book id="LKLockingGuide">
5 <title>Unreliable Guide To Locking</title>
9 <firstname>Rusty</firstname>
10 <surname>Russell</surname>
13 <email>rusty@rustcorp.com.au</email>
21 <holder>Rusty Russell</holder>
26 This documentation is free software; you can redistribute
27 it and/or modify it under the terms of the GNU General Public
28 License as published by the Free Software Foundation; either
29 version 2 of the License, or (at your option) any later
34 This program is distributed in the hope that it will be
35 useful, but WITHOUT ANY WARRANTY; without even the implied
36 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
37 See the GNU General Public License for more details.
41 You should have received a copy of the GNU General Public
42 License along with this program; if not, write to the Free
43 Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
48 For more details see the file COPYING in the source
49 distribution of Linux.
56 <title>Introduction</title>
58 Welcome to Rusty's Remarkably Unreliable Guide to Kernel
59 Locking issues. This document describes the locking systems in
60 the Linux Kernel in 2.6.
63 With the wide availability of HyperThreading and <firstterm
64 linkend="gloss-preemption">preemption</firstterm> in the Linux
65 Kernel, everyone hacking on the kernel needs to know the
66 fundamentals of concurrency and locking for
67 <firstterm linkend="gloss-smp"><acronym>SMP</acronym></firstterm>.
72 <title>The Problem With Concurrency</title>
74 (Skip this if you know what a Race Condition is).
77 In a normal program, you can increment a counter like so:
80 very_important_count++;
84 If two instances run this at the same time, this is what you would expect to happen:
88 <title>Expected Results</title>
90 <tgroup cols="2" align="left">
94 <entry>Instance 1</entry>
95 <entry>Instance 2</entry>
101 <entry>read very_important_count (5)</entry>
105 <entry>add 1 (6)</entry>
109 <entry>write very_important_count (6)</entry>
114 <entry>read very_important_count (6)</entry>
118 <entry>add 1 (7)</entry>
122 <entry>write very_important_count (7)</entry>
130 This is what might happen:
134 <title>Possible Results</title>
136 <tgroup cols="2" align="left">
139 <entry>Instance 1</entry>
140 <entry>Instance 2</entry>
146 <entry>read very_important_count (5)</entry>
151 <entry>read very_important_count (5)</entry>
154 <entry>add 1 (6)</entry>
159 <entry>add 1 (6)</entry>
162 <entry>write very_important_count (6)</entry>
167 <entry>write very_important_count (6)</entry>
173 <sect1 id="race-condition">
174 <title>Race Conditions and Critical Regions</title>
176 This overlap, where the result depends on the
177 relative timing of multiple tasks, is called a <firstterm>race condition</firstterm>.
178 The piece of code containing the concurrency issue is called a
179 <firstterm>critical region</firstterm>. And especially since Linux started running
180 on SMP machines, they became one of the major issues in kernel
181 design and implementation.
184 Preemption can have the same effect, even if there is only one
185 CPU: by preempting one task during the critical region, we have
186 exactly the same race condition. In this case the thread which
187 preempts might run the critical region itself.
190 The solution is to recognize when these simultaneous accesses
191 occur, and use locks to make sure that only one instance can
192 enter the critical region at any time. There are many
193 friendly primitives in the Linux kernel to help you do this.
194 And then there are the unfriendly primitives, but I'll pretend they don't exist.
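<para>
To give the flavour, here is a rough sketch of the fix, using the spinlock primitives described in the next chapter (the lock name is invented for illustration):
</para>
<programlisting>
static spinlock_t very_important_lock = SPIN_LOCK_UNLOCKED;

        spin_lock(&very_important_lock);
        very_important_count++;
        spin_unlock(&very_important_lock);
</programlisting>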
201 <title>Locking in the Linux Kernel</title>
204 If I could give you one piece of advice: never sleep with anyone
205 crazier than yourself. But if I had to give you advice on
206 locking: <emphasis>keep it simple</emphasis>.
210 Be reluctant to introduce new locks.
214 Strangely enough, this last one is the exact reverse of my advice when
215 you <emphasis>have</emphasis> slept with someone crazier than yourself.
216 And you should think about getting a big dog.
219 <sect1 id="lock-intro">
220 <title>Two Main Types of Kernel Locks: Spinlocks and Semaphores</title>
223 There are two main types of kernel locks. The fundamental type is the spinlock
225 (<filename class="headerfile">include/asm/spinlock.h</filename>),
226 which is a very simple single-holder lock: if you can't get the
227 spinlock, you keep trying (spinning) until you can. Spinlocks are
228 very small and fast, and can be used anywhere.
231 The second type is a semaphore
232 (<filename class="headerfile">include/asm/semaphore.h</filename>): it
233 can have more than one holder at any time (the number decided at
234 initialization time), although it is most commonly used as a
235 single-holder lock (a mutex). If you can't get a semaphore,
236 your task will put itself on the queue, and be woken up when the
237 semaphore is released. This means the CPU will do something
238 else while you are waiting, but there are many cases when you
239 simply can't sleep (see <xref linkend="sleeping-things">), and so
240 have to use a spinlock instead.
243 Neither type of lock is recursive: see
244 <xref linkend="deadlock">.
248 <sect1 id="uniprocessor">
249 <title>Locks and Uniprocessor Kernels</title>
252 For kernels compiled without <symbol>CONFIG_SMP</symbol>, and
253 without <symbol>CONFIG_PREEMPT</symbol>, spinlocks do not exist at
254 all. This is an excellent design decision: when no-one else can
255 run at the same time, there is no reason to have a lock.
259 If the kernel is compiled without <symbol>CONFIG_SMP</symbol>,
260 but <symbol>CONFIG_PREEMPT</symbol> is set, then spinlocks
261 simply disable preemption, which is sufficient to prevent any
262 races. For most purposes, we can think of preemption as
263 equivalent to SMP, and not worry about it separately.
267 You should always test your locking code with <symbol>CONFIG_SMP</symbol>
268 and <symbol>CONFIG_PREEMPT</symbol> enabled, even if you don't have an SMP test box, because it
269 will still catch some kinds of locking bugs.
273 Semaphores still exist, because they are required for
274 synchronization between <firstterm linkend="gloss-usercontext">user
275 contexts</firstterm>, as we will see below.
279 <sect1 id="usercontextlocking">
280 <title>Locking Only In User Context</title>
283 If you have a data structure which is only ever accessed from
284 user context, then you can use a simple semaphore
285 (<filename class="headerfile">include/asm/semaphore.h</filename>) to protect it. This
286 is the most trivial case: you initialize the semaphore to the number
287 of resources available (usually 1), and call
288 <function>down_interruptible()</function> to grab the semaphore, and
289 <function>up()</function> to release it. There is also a
290 <function>down()</function>, which should be avoided: unlike
291 <function>down_interruptible()</function>, it will not return early if a signal is received.
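<para>
As a minimal sketch (the structure and function names here are invented), protecting something that is only ever touched from system calls might look like this:
</para>
<programlisting>
static DECLARE_MUTEX(config_sem);       /* semaphore, initialized to 1 */
static struct my_config config;         /* protected by config_sem */

int set_config(const struct my_config *new)
{
        if (down_interruptible(&config_sem))
                return -EINTR;          /* a signal interrupted the wait */
        config = *new;
        up(&config_sem);
        return 0;
}
</programlisting>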
295 Example: <filename>linux/net/core/netfilter.c</filename> allows
296 registration of new <function>setsockopt()</function> and
297 <function>getsockopt()</function> calls, with
298 <function>nf_register_sockopt()</function>. Registration and
299 de-registration are only done on module load and unload (and boot
300 time, where there is no concurrency), and the list of registrations
301 is only consulted for an unknown <function>setsockopt()</function>
302 or <function>getsockopt()</function> system call. The
303 <varname>nf_sockopt_mutex</varname> is perfect to protect this,
304 especially since the setsockopt and getsockopt calls may well sleep.
309 <sect1 id="lock-user-bh">
310 <title>Locking Between User Context and Softirqs</title>
313 If a <firstterm linkend="gloss-softirq">softirq</firstterm> shares
314 data with user context, you have two problems. Firstly, the current
315 user context can be interrupted by a softirq, and secondly, the
316 critical region could be entered from another CPU. This is where
317 <function>spin_lock_bh()</function>
318 (<filename class="headerfile">include/linux/spinlock.h</filename>) is
319 used. It disables softirqs on that CPU, then grabs the lock.
320 <function>spin_unlock_bh()</function> does the reverse. (The
321 '_bh' suffix is a historical reference to "Bottom Halves", the
322 old name for software interrupts. It should really be
323 called 'spin_lock_softirq()' in a perfect world).
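<para>
Here is a rough sketch (the names are invented) of a counter shared between user context and a softirq:
</para>
<programlisting>
static spinlock_t stats_lock = SPIN_LOCK_UNLOCKED;
static unsigned long packet_count;      /* shared with the softirq */

/* User context (eg. a syscall): block local softirqs and take the lock. */
void reset_count(void)
{
        spin_lock_bh(&stats_lock);
        packet_count = 0;
        spin_unlock_bh(&stats_lock);
}

/* The softirq itself: plain spin_lock() is enough, since another softirq
   cannot interrupt us on this CPU. */
void count_packet(void)
{
        spin_lock(&stats_lock);
        packet_count++;
        spin_unlock(&stats_lock);
}
</programlisting>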
327 Note that you can also use <function>spin_lock_irq()</function>
328 or <function>spin_lock_irqsave()</function> here, which stop
329 hardware interrupts as well: see <xref linkend="hardirq-context">.
333 This works perfectly for <firstterm linkend="gloss-up"><acronym>UP
334 </acronym></firstterm> as well: the spin lock vanishes, and this macro
335 simply becomes <function>local_bh_disable()</function>
336 (<filename class="headerfile">include/linux/interrupt.h</filename>), which
337 protects you from the softirq being run.
341 <sect1 id="lock-user-tasklet">
342 <title>Locking Between User Context and Tasklets</title>
345 This is exactly the same as above, because <firstterm
346 linkend="gloss-tasklet">tasklets</firstterm> are actually run from a softirq.
351 <sect1 id="lock-user-timers">
352 <title>Locking Between User Context and Timers</title>
355 This, too, is exactly the same as above, because <firstterm
356 linkend="gloss-timers">timers</firstterm> are actually run from
357 a softirq. From a locking point of view, tasklets and timers are identical.
362 <sect1 id="lock-tasklets">
363 <title>Locking Between Tasklets/Timers</title>
366 Sometimes a tasklet or timer might want to share data with
367 another tasklet or timer.
370 <sect2 id="lock-tasklets-same">
371 <title>The Same Tasklet/Timer</title>
373 Since a tasklet is never run on two CPUs at once, you don't
374 need to worry about your tasklet being reentrant (running
375 twice at once), even on SMP.
379 <sect2 id="lock-tasklets-different">
380 <title>Different Tasklets/Timers</title>
382 If another tasklet/timer wants
383 to share data with your tasklet or timer, you will both need to use
384 <function>spin_lock()</function> and
385 <function>spin_unlock()</function> calls.
386 <function>spin_lock_bh()</function> is
387 unnecessary here, as you are already in a tasklet, and
388 none will be run on the same CPU.
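<para>
For example (names invented), two different tasklets touching the same list would each simply do:
</para>
<programlisting>
static spinlock_t shared_lock = SPIN_LOCK_UNLOCKED;
static LIST_HEAD(shared_list);          /* shared by tasklet A and tasklet B */

static void tasklet_a_func(unsigned long data)
{
        spin_lock(&shared_lock);        /* the other tasklet may be running on another CPU */
        /* ... manipulate shared_list ... */
        spin_unlock(&shared_lock);
}
</programlisting>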
393 <sect1 id="lock-softirqs">
394 <title>Locking Between Softirqs</title>
397 Often a softirq might
398 want to share data with itself or a tasklet/timer.
401 <sect2 id="lock-softirqs-same">
402 <title>The Same Softirq</title>
405 The same softirq can run on the other CPUs: you can use a
406 per-CPU array (see <xref linkend="per-cpu">) for better
407 performance. If you're going so far as to use a softirq,
408 you probably care about scalable performance enough
409 to justify the extra complexity.
413 You'll need to use <function>spin_lock()</function> and
414 <function>spin_unlock()</function> for shared data.
418 <sect2 id="lock-softirqs-different">
419 <title>Different Softirqs</title>
422 You'll need to use <function>spin_lock()</function> and
423 <function>spin_unlock()</function> for shared data, whether it
424 be a timer, a tasklet, or the same or a different
425 softirq: any of them could be running on a different CPU.
431 <chapter id="hardirq-context">
432 <title>Hard IRQ Context</title>
435 Hardware interrupts usually communicate with a
436 tasklet or softirq. Frequently this involves putting work in a
437 queue, which the softirq will take out.
440 <sect1 id="hardirq-softirq">
441 <title>Locking Between Hard IRQ and Softirqs/Tasklets</title>
444 If a hardware irq handler shares data with a softirq, you have
445 two concerns. Firstly, the softirq processing can be
446 interrupted by a hardware interrupt, and secondly, the
447 critical region could be entered by a hardware interrupt on
448 another CPU. This is where <function>spin_lock_irq()</function> is
449 used. It is defined to disable interrupts on that cpu, then grab
450 the lock. <function>spin_unlock_irq()</function> does the reverse.
454 The irq handler does not need to use
455 <function>spin_lock_irq()</function>, because the softirq cannot
456 run while the irq handler is running: it can use
457 <function>spin_lock()</function>, which is slightly faster. The
458 only exception would be if a different hardware irq handler uses
459 the same lock: <function>spin_lock_irq()</function> will stop
460 that from interrupting us.
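<para>
As a sketch (the handler names are invented, and registration is omitted):
</para>
<programlisting>
static spinlock_t queue_lock = SPIN_LOCK_UNLOCKED;
static LIST_HEAD(work_queue);

/* Hard irq handler: the softirq cannot run now, so spin_lock() suffices. */
static irqreturn_t my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        spin_lock(&queue_lock);
        /* ... add new work to work_queue ... */
        spin_unlock(&queue_lock);
        return IRQ_HANDLED;
}

/* Softirq: must keep the hard irq handler out while holding the lock. */
static void my_softirq_action(struct softirq_action *h)
{
        spin_lock_irq(&queue_lock);
        /* ... pull work off work_queue and process it ... */
        spin_unlock_irq(&queue_lock);
}
</programlisting>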
464 This works perfectly for UP as well: the spin lock vanishes,
465 and this macro simply becomes <function>local_irq_disable()</function>
466 (<filename class="headerfile">include/asm/smp.h</filename>), which
467 protects you from the softirq/tasklet/BH being run.
471 <function>spin_lock_irqsave()</function>
472 (<filename>include/linux/spinlock.h</filename>) is a variant
473 which saves whether interrupts were on or off in a flags word,
474 which is passed to <function>spin_unlock_irqrestore()</function>. This
475 means that the same code can be used inside a hard irq handler (where
476 interrupts are already off) and in softirqs (where the irq
477 disabling is required).
481 Note that softirqs (and hence tasklets and timers) are run on
482 return from hardware interrupts, so
483 <function>spin_lock_irq()</function> also stops these. In that
484 sense, <function>spin_lock_irqsave()</function> is the most
485 general and powerful locking function.
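<para>
For example, a helper which may be called with interrupts on or off might look like this (the names are invented):
</para>
<programlisting>
static spinlock_t log_lock = SPIN_LOCK_UNLOCKED;
static LIST_HEAD(log_entries);

/* Callable from the hard irq handler (irqs already off), from a softirq,
   or from user context: irqsave remembers the previous interrupt state. */
static void log_event(struct log_entry *e)
{
        unsigned long flags;

        spin_lock_irqsave(&log_lock, flags);
        list_add_tail(&e->list, &log_entries);
        spin_unlock_irqrestore(&log_lock, flags);
}
</programlisting>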
489 <sect1 id="hardirq-hardirq">
490 <title>Locking Between Two Hard IRQ Handlers</title>
492 It is rare to have to share data between two IRQ handlers, but
493 if you do, <function>spin_lock_irqsave()</function> should be
494 used: it is architecture-specific whether all interrupts are
495 disabled inside irq handlers themselves.
501 <chapter id="cheatsheet">
502 <title>Cheat Sheet For Locking</title>
504 Pete Zaitcev gives the following summary:
509 If you are in a process context (any syscall) and want to
510 lock other processes out, use a semaphore. You can take a semaphore
511 and sleep (<function>copy_from_user*()</function> or
512 <function>kmalloc(x,GFP_KERNEL)</function>).
517 Otherwise (== data can be touched in an interrupt), use
518 <function>spin_lock_irqsave()</function> and
519 <function>spin_unlock_irqrestore()</function>.
524 Avoid holding a spinlock for more than 5 lines of code and
525 across any function call (except accessors like
526 <function>readb</function>).
531 <sect1 id="minimum-lock-reqirements">
532 <title>Table of Minimum Requirements</title>
534 <para> The following table lists the <emphasis>minimum</emphasis>
535 locking requirements between various contexts. In some cases,
536 the same context can only be running on one CPU at a time, so
537 no locking is required for that context (eg. a particular
538 thread can only run on one CPU at a time, but if it needs
539 to share data with another thread, locking is required).
542 Remember the advice above: you can always use
543 <function>spin_lock_irqsave()</function>, which is a superset
544 of all other spinlock primitives.
547 <title>Table of Locking Requirements</title>
552 <ENTRY>IRQ Handler A</ENTRY>
553 <ENTRY>IRQ Handler B</ENTRY>
554 <ENTRY>Softirq A</ENTRY>
555 <ENTRY>Softirq B</ENTRY>
556 <ENTRY>Tasklet A</ENTRY>
557 <ENTRY>Tasklet B</ENTRY>
558 <ENTRY>Timer A</ENTRY>
559 <ENTRY>Timer B</ENTRY>
560 <ENTRY>User Context A</ENTRY>
561 <ENTRY>User Context B</ENTRY>
565 <ENTRY>IRQ Handler A</ENTRY>
570 <ENTRY>IRQ Handler B</ENTRY>
571 <ENTRY>spin_lock_irqsave</ENTRY>
576 <ENTRY>Softirq A</ENTRY>
577 <ENTRY>spin_lock_irq</ENTRY>
578 <ENTRY>spin_lock_irq</ENTRY>
579 <ENTRY>spin_lock</ENTRY>
583 <ENTRY>Softirq B</ENTRY>
584 <ENTRY>spin_lock_irq</ENTRY>
585 <ENTRY>spin_lock_irq</ENTRY>
586 <ENTRY>spin_lock</ENTRY>
587 <ENTRY>spin_lock</ENTRY>
591 <ENTRY>Tasklet A</ENTRY>
592 <ENTRY>spin_lock_irq</ENTRY>
593 <ENTRY>spin_lock_irq</ENTRY>
594 <ENTRY>spin_lock</ENTRY>
595 <ENTRY>spin_lock</ENTRY>
600 <ENTRY>Tasklet B</ENTRY>
601 <ENTRY>spin_lock_irq</ENTRY>
602 <ENTRY>spin_lock_irq</ENTRY>
603 <ENTRY>spin_lock</ENTRY>
604 <ENTRY>spin_lock</ENTRY>
605 <ENTRY>spin_lock</ENTRY>
610 <ENTRY>Timer A</ENTRY>
611 <ENTRY>spin_lock_irq</ENTRY>
612 <ENTRY>spin_lock_irq</ENTRY>
613 <ENTRY>spin_lock</ENTRY>
614 <ENTRY>spin_lock</ENTRY>
615 <ENTRY>spin_lock</ENTRY>
616 <ENTRY>spin_lock</ENTRY>
621 <ENTRY>Timer B</ENTRY>
622 <ENTRY>spin_lock_irq</ENTRY>
623 <ENTRY>spin_lock_irq</ENTRY>
624 <ENTRY>spin_lock</ENTRY>
625 <ENTRY>spin_lock</ENTRY>
626 <ENTRY>spin_lock</ENTRY>
627 <ENTRY>spin_lock</ENTRY>
628 <ENTRY>spin_lock</ENTRY>
633 <ENTRY>User Context A</ENTRY>
634 <ENTRY>spin_lock_irq</ENTRY>
635 <ENTRY>spin_lock_irq</ENTRY>
636 <ENTRY>spin_lock_bh</ENTRY>
637 <ENTRY>spin_lock_bh</ENTRY>
638 <ENTRY>spin_lock_bh</ENTRY>
639 <ENTRY>spin_lock_bh</ENTRY>
640 <ENTRY>spin_lock_bh</ENTRY>
641 <ENTRY>spin_lock_bh</ENTRY>
646 <ENTRY>User Context B</ENTRY>
647 <ENTRY>spin_lock_irq</ENTRY>
648 <ENTRY>spin_lock_irq</ENTRY>
649 <ENTRY>spin_lock_bh</ENTRY>
650 <ENTRY>spin_lock_bh</ENTRY>
651 <ENTRY>spin_lock_bh</ENTRY>
652 <ENTRY>spin_lock_bh</ENTRY>
653 <ENTRY>spin_lock_bh</ENTRY>
654 <ENTRY>spin_lock_bh</ENTRY>
655 <ENTRY>down_interruptible</ENTRY>
665 <chapter id="Examples">
666 <title>Common Examples</title>
668 Let's step through a simple example: a cache of number to name
669 mappings. The cache keeps a count of how often each of the objects is
670 used, and when it gets full, throws out the least used one.
674 <sect1 id="examples-usercontext">
675 <title>All In User Context</title>
677 For our first example, we assume that all operations are in user
678 context (ie. from system calls), so we can sleep. This means we can
679 use a semaphore to protect the cache and all the objects within
684 #include <linux/list.h>
685 #include <linux/slab.h>
686 #include <linux/string.h>
687 #include <asm/semaphore.h>
688 #include <asm/errno.h>
692 struct list_head list;
698 /* Protects the cache, cache_num, and the objects within it */
699 static DECLARE_MUTEX(cache_lock);
700 static LIST_HEAD(cache);
701 static unsigned int cache_num = 0;
702 #define MAX_CACHE_SIZE 10
704 /* Must be holding cache_lock */
705 static struct object *__cache_find(int id)
709 list_for_each_entry(i, &cache, list)
710 if (i->id == id) {
717 /* Must be holding cache_lock */
718 static void __cache_delete(struct object *obj)
721 list_del(&obj->list);
726 /* Must be holding cache_lock */
727 static void __cache_add(struct object *obj)
729 list_add(&obj->list, &cache);
730 if (++cache_num > MAX_CACHE_SIZE) {
731 struct object *i, *outcast = NULL;
732 list_for_each_entry(i, &cache, list) {
733 if (!outcast || i->popularity < outcast->popularity)
736 __cache_delete(outcast);
740 int cache_add(int id, const char *name)
744 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
747 strlcpy(obj->name, name, sizeof(obj->name));
749 obj->popularity = 0;
751 down(&cache_lock);
757 void cache_delete(int id)
759 down(&cache_lock);
760 __cache_delete(__cache_find(id));
764 int cache_find(int id, char *name)
769 down(&cache_lock);
770 obj = __cache_find(id);
773 strcpy(name, obj->name);
781 Note that we always make sure we have the cache_lock when we add,
782 delete, or look up the cache: both the cache infrastructure itself and
783 the contents of the objects are protected by the lock. In this case
784 it's easy, since we copy the data for the user, and never let them
785 access the objects directly.
788 There is a slight (and common) optimization here: in
789 <function>cache_add</function> we set up the fields of the object
790 before grabbing the lock. This is safe, as no-one else can access it
791 until we put it in cache.
795 <sect1 id="examples-interrupt">
796 <title>Accessing From Interrupt Context</title>
798 Now consider the case where <function>cache_find</function> can be
799 called from interrupt context: either a hardware interrupt or a
800 softirq. An example would be a timer which deletes objects from the cache.
804 The change is shown below, in standard patch format: the
805 <symbol>-</symbol> are lines which are taken away, and the
806 <symbol>+</symbol> are lines which are added.
809 --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
810 +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
815 -static DECLARE_MUTEX(cache_lock);
816 +static spinlock_t cache_lock = SPIN_LOCK_UNLOCKED;
817 static LIST_HEAD(cache);
818 static unsigned int cache_num = 0;
819 #define MAX_CACHE_SIZE 10
821 int cache_add(int id, const char *name)
824 + unsigned long flags;
826 if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
830 obj->popularity = 0;
832 - down(&cache_lock);
833 + spin_lock_irqsave(&cache_lock, flags);
835 - up(&cache_lock);
836 + spin_unlock_irqrestore(&cache_lock, flags);
840 void cache_delete(int id)
842 - down(&cache_lock);
843 + unsigned long flags;
845 + spin_lock_irqsave(&cache_lock, flags);
846 __cache_delete(__cache_find(id));
847 - up(&cache_lock);
848 + spin_unlock_irqrestore(&cache_lock, flags);
851 int cache_find(int id, char *name)
855 + unsigned long flags;
857 - down(&cache_lock);
858 + spin_lock_irqsave(&cache_lock, flags);
859 obj = __cache_find(id);
862 strcpy(name, obj->name);
864 - up(&cache_lock);
865 + spin_unlock_irqrestore(&cache_lock, flags);
871 Note that the <function>spin_lock_irqsave</function> will turn off
872 interrupts if they are on; otherwise it does nothing (if we are already
873 in an interrupt handler). Hence these functions are safe to call from any context.
877 Unfortunately, <function>cache_add</function> calls
878 <function>kmalloc</function> with the <symbol>GFP_KERNEL</symbol>
879 flag, which is only legal in user context. I have assumed that
880 <function>cache_add</function> is still only called in user context,
881 otherwise this should become a parameter to
882 <function>cache_add</function>.
885 <sect1 id="examples-refcnt">
886 <title>Exposing Objects Outside This File</title>
888 If our objects contained more information, it might not be sufficient
889 to copy the information in and out: other parts of the code might want
890 to keep pointers to these objects, for example, rather than looking up
891 the id every time. This produces two problems.
894 The first problem is that we use the <symbol>cache_lock</symbol> to
895 protect objects: we'd need to make this non-static so the rest of the
896 code can use it. This makes locking trickier, as it is no longer all in one place.
900 The second problem is the lifetime problem: if another structure keeps
901 a pointer to an object, it presumably expects that pointer to remain
902 valid. Unfortunately, this is only guaranteed while you hold the
903 lock, otherwise someone might call <function>cache_delete</function>
904 and, even worse, add another object, re-using the same address.
907 As there is only one lock, you can't hold it forever: no-one else would get any work done.
911 The solution to this problem is to use a reference count: everyone who
912 has a pointer to the object increases it when they first get the
913 object, and drops the reference count when they're finished with it.
914 Whoever drops it to zero knows it is unused, and can actually delete it.
921 --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
922 +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
926 struct list_head list;
927 + unsigned int refcnt;
932 static unsigned int cache_num = 0;
933 #define MAX_CACHE_SIZE 10
935 +static void __object_put(struct object *obj)
937 + if (--obj->refcnt == 0)
941 +static void __object_get(struct object *obj)
946 +void object_put(struct object *obj)
948 + unsigned long flags;
950 + spin_lock_irqsave(&cache_lock, flags);
952 + spin_unlock_irqrestore(&cache_lock, flags);
955 +void object_get(struct object *obj)
957 + unsigned long flags;
959 + spin_lock_irqsave(&cache_lock, flags);
961 + spin_unlock_irqrestore(&cache_lock, flags);
964 /* Must be holding cache_lock */
965 static struct object *__cache_find(int id)
970 list_del(&obj->list);
976 strlcpy(obj->name, name, sizeof(obj->name));
978 obj->popularity = 0;
979 + obj->refcnt = 1; /* The cache holds a reference */
981 spin_lock_irqsave(&cache_lock, flags);
984 spin_unlock_irqrestore(&cache_lock, flags);
987 -int cache_find(int id, char *name)
988 +struct object *cache_find(int id)
994 spin_lock_irqsave(&cache_lock, flags);
995 obj = __cache_find(id);
998 - strcpy(name, obj->name);
1001 + __object_get(obj);
1002 spin_unlock_irqrestore(&cache_lock, flags);
1009 We encapsulate the reference counting in the standard 'get' and 'put'
1010 functions. Now we can return the object itself from
1011 <function>cache_find</function> which has the advantage that the user
1012 can now sleep holding the object (eg. to
1013 <function>copy_to_user()</function> the name to userspace).
1016 The other point to note is that I said a reference should be held for
1017 every pointer to the object: thus the reference count is 1 when first
1018 inserted into the cache. In some versions the framework does not hold
1019 a reference count, but they are more complicated.
1022 <sect2 id="examples-refcnt-atomic">
1023 <title>Using Atomic Operations For The Reference Count</title>
1025 In practice, <type>atomic_t</type> would usually be used for
1026 <structfield>refcnt</structfield>. There are a number of atomic
1027 operations defined in
1029 <filename class="headerfile">include/asm/atomic.h</filename>: these are
1030 guaranteed to be seen atomically from all CPUs in the system, so no
1031 lock is required. In this case, it is simpler than using spinlocks,
1032 although for anything non-trivial using spinlocks is clearer. The
1033 <function>atomic_inc</function> and
1034 <function>atomic_dec_and_test</function> are used instead of the
1035 standard increment and decrement operators, and the lock is no longer
1036 used to protect the reference count itself.
1040 --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
1041 +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
1045 struct list_head list;
1046 - unsigned int refcnt;
1052 static unsigned int cache_num = 0;
1053 #define MAX_CACHE_SIZE 10
1055 -static void __object_put(struct object *obj)
1057 - if (--obj->refcnt == 0)
1061 -static void __object_get(struct object *obj)
1066 void object_put(struct object *obj)
1068 - unsigned long flags;
1070 - spin_lock_irqsave(&cache_lock, flags);
1071 - __object_put(obj);
1072 - spin_unlock_irqrestore(&cache_lock, flags);
1073 + if (atomic_dec_and_test(&obj->refcnt))
1077 void object_get(struct object *obj)
1079 - unsigned long flags;
1081 - spin_lock_irqsave(&cache_lock, flags);
1082 - __object_get(obj);
1083 - spin_unlock_irqrestore(&cache_lock, flags);
1084 + atomic_inc(&obj->refcnt);
1087 /* Must be holding cache_lock */
1091 list_del(&obj->list);
1092 - __object_put(obj);
1098 strlcpy(obj->name, name, sizeof(obj->name));
1100 obj->popularity = 0;
1101 - obj->refcnt = 1; /* The cache holds a reference */
1102 + atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
1104 spin_lock_irqsave(&cache_lock, flags);
1107 spin_lock_irqsave(&cache_lock, flags);
1108 obj = __cache_find(id);
1110 - __object_get(obj);
1112 spin_unlock_irqrestore(&cache_lock, flags);
1119 <sect1 id="examples-lock-per-obj">
1120 <title>Protecting The Objects Themselves</title>
1122 In these examples, we assumed that the objects (except the reference
1123 counts) never change once they are created. If we wanted to allow
1124 the name to change, there are three possibilities:
1129 You can make <symbol>cache_lock</symbol> non-static, and tell people
1130 to grab that lock before changing the name in any object.
1135 You can provide a <function>cache_obj_rename</function> which grabs
1136 this lock and changes the name for the caller, and tell everyone to use that.
1142 You can make the <symbol>cache_lock</symbol> protect only the cache
1143 itself, and use another lock to protect the name.
1149 Theoretically, you can make the locks as fine-grained as one lock for
1150 every field, for every object. In practice, the most common variants are:
1156 One lock which protects the infrastructure (the <symbol>cache</symbol>
1157 list in this example) and all the objects. This is what we have done so far.
1163 One lock which protects the infrastructure (including the list
1164 pointers inside the objects), and one lock inside the object which
1165 protects the rest of that object.
1170 Multiple locks to protect the infrastructure (eg. one lock per hash
1171 chain), possibly with a separate per-object lock.
1177 Here is the "lock-per-object" implementation:
1180 --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
1181 +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1186 + /* These two protected by cache_lock. */
1187 struct list_head list;
1192 + /* Doesn't change once created. */
1195 + spinlock_t lock; /* Protects the name */
1200 static spinlock_t cache_lock = SPIN_LOCK_UNLOCKED;
1203 obj->popularity = 0;
1204 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
1205 + spin_lock_init(&obj->lock);
1207 spin_lock_irqsave(&cache_lock, flags);
1212 Note that I decided that the <structfield>popularity</structfield>
1213 count should be protected by the <symbol>cache_lock</symbol> rather
1214 than the per-object lock: this is because it (like the
1215 <structname>struct list_head</structname> inside the object) is
1216 logically part of the infrastructure. This way, I don't need to grab
1217 the lock of every object in <function>__cache_add</function> when
1218 seeking the least popular.
1222 I also decided that the <structfield>id</structfield> member is
1223 unchangeable, so I don't need to grab each object lock in
1224 <function>__cache_find()</function> to examine the
1225 <structfield>id</structfield>: the object lock is only used by a
1226 caller who wants to read or write the <structfield>name</structfield> field.
1231 Note also that I added a comment describing what data was protected by
1232 which locks. This is extremely important, as it describes the runtime
1233 behavior of the code, and can be hard to deduce just from reading. And
1234 as Alan Cox says, <quote>Lock data, not code</quote>.
1239 <chapter id="common-problems">
1240 <title>Common Problems</title>
1241 <sect1 id="deadlock">
1242 <title>Deadlock: Simple and Advanced</title>
1245 There is a coding bug where a piece of code tries to grab a
1246 spinlock twice: it will spin forever, waiting for the lock to
1247 be released (spinlocks, rwlocks and semaphores are not
1248 recursive in Linux). This is trivial to diagnose: not a
1249 stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
1254 For a slightly more complex case, imagine you have a region
1255 shared by a softirq and user context. If you use a
1256 <function>spin_lock()</function> call to protect it, it is
1257 possible that the user context will be interrupted by the softirq
1258 while it holds the lock, and the softirq will then spin
1259 forever trying to get the same lock.
1263 Both of these are called deadlock, and as shown above, they can
1264 occur even with a single CPU (although not on UP compiles,
1265 since spinlocks vanish on kernel compiles with
1266 <symbol>CONFIG_SMP</symbol>=n. You'll still get data corruption
1267 in the second example).
1271 This complete lockup is easy to diagnose: on SMP boxes the
1272 watchdog timer or compiling with <symbol>DEBUG_SPINLOCKS</symbol> set
1273 (<filename>include/linux/spinlock.h</filename>) will show this up
1274 immediately when it happens.
1278 A more complex problem is the so-called 'deadly embrace',
1279 involving two or more locks. Say you have a hash table: each
1280 entry in the table is a spinlock, and a chain of hashed
1281 objects. Inside a softirq handler, you sometimes want to
1282 alter an object from one place in the hash to another: you
1283 grab the spinlock of the old hash chain and the spinlock of
1284 the new hash chain, and delete the object from the old one,
1285 and insert it in the new one.
1289 There are two problems here. First, if your code ever
1290 tries to move the object to the same chain, it will deadlock
1291 with itself as it tries to lock it twice. Secondly, if the
1292 same softirq on another CPU is trying to move another object
1293 in the reverse direction, the following could happen:
1297 <title>Consequences</title>
1299 <tgroup cols="2" align="left">
1303 <entry>CPU 1</entry>
1304 <entry>CPU 2</entry>
1310 <entry>Grab lock A -> OK</entry>
1311 <entry>Grab lock B -> OK</entry>
1314 <entry>Grab lock B -> spin</entry>
1315 <entry>Grab lock A -> spin</entry>
1322 The two CPUs will spin forever, waiting for the other to give up
1323 their lock. It will look, smell, and feel like a crash.
1327 <sect1 id="techs-deadlock-prevent">
1328 <title>Preventing Deadlock</title>
1331 Textbooks will tell you that if you always lock in the same
1332 order, you will never get this kind of deadlock. Practice
1333 will tell you that this approach doesn't scale: when I
1334 create a new lock, I don't understand enough of the kernel
1335 to figure out where in the 5000 lock hierarchy it will fit.
1339 The best locks are encapsulated: they never get exposed in
1340 headers, and are never held around calls to non-trivial
1341 functions outside the same file. You can read through this
1342 code and see that it will never deadlock, because it never
1343 tries to grab another lock while it has that one. People
1344 using your code don't even need to know you are using a lock.
1349 A classic problem here is when you provide callbacks or
1350 hooks: if you call these with the lock held, you risk simple
1351 deadlock, or a deadly embrace (who knows what the callback
1352 will do?). Remember, the other programmers are out to get
1353 you, so don't do this.
1356 <sect2 id="techs-deadlock-overprevent">
1357 <title>Overzealous Prevention Of Deadlocks</title>
1360 Deadlocks are problematic, but not as bad as data
1361 corruption. Code which grabs a read lock, searches a list,
1362 fails to find what it wants, drops the read lock, grabs a
1363 write lock and inserts the object has a race condition.
1367 If you don't see why, please stay the fuck away from my code.
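<para>
For the record, the racy pattern looks like this (the helpers are invented): another CPU can insert the same object in the window between dropping the read lock and grabbing the write lock, so you end up with two copies.
</para>
<programlisting>
static rwlock_t list_lock = RW_LOCK_UNLOCKED;

        read_lock(&list_lock);
        obj = __find_object(id);
        read_unlock(&list_lock);

        if (!obj) {
                write_lock(&list_lock);
                /* RACY: must re-check for 'id' here before inserting! */
                __insert_object(new_obj);
                write_unlock(&list_lock);
        }
</programlisting>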
1372 <sect1 id="racing-timers">
1373 <title>Racing Timers: A Kernel Pastime</title>
1376 Timers can produce their own special problems with races.
1377 Consider a collection of objects (list, hash, etc) where each
1378 object has a timer which is due to destroy it.
1382 If you want to destroy the entire collection (say on module
1383 removal), you might do the following:
1387 /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
1388 HUNGARIAN NOTATION */
1389         spin_lock_bh(&list_lock);
1391         while (list) {
1392                 struct foo *next = list->next;
1393                 del_timer(&list->timer);
1394                 kfree(list);
1395                 list = next;
1396         }
1398         spin_unlock_bh(&list_lock);
1402 Sooner or later, this will crash on SMP, because a timer can
1403 have just gone off before the <function>spin_lock_bh()</function>,
1404 and it will only get the lock after we
1405 <function>spin_unlock_bh()</function>, and then try to free
1406 the element (which has already been freed!).
1410 This can be avoided by checking the result of
1411 <function>del_timer()</function>: if it returns
1412 <returnvalue>1</returnvalue>, the timer has been deleted.
1413 If <returnvalue>0</returnvalue>, it means (in this
1414 case) that it is currently running, so we can do:
1418  retry:
1419         spin_lock_bh(&list_lock);
1421         while (list) {
1422                 struct foo *next = list->next;
1423                 if (!del_timer(&list->timer)) {
1424                         /* Give timer a chance to delete this */
1425                         spin_unlock_bh(&list_lock);
1426                         goto retry;
1427                 }
1428                 kfree(list);
1429                 list = next;
1430         }
1432         spin_unlock_bh(&list_lock);
1436 Another common problem is deleting timers which restart
1437 themselves (by calling <function>add_timer()</function> at the end
1438 of their timer function). Because this is a fairly common case
1439 which is prone to races, you should use <function>del_timer_sync()</function>
1440 (<filename class="headerfile">include/linux/timer.h</filename>)
1441 to handle this case. It returns the number of times the timer
1442 had to be deleted before we finally stopped it from adding itself back in.
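<para>
A sketch of a safe teardown (the names are invented):
</para>
<programlisting>
/* Safe even if the timer function re-adds the timer: del_timer_sync()
   keeps deleting it, and waits for a handler running on another CPU. */
static void foo_destroy(struct foo *f)
{
        del_timer_sync(&f->timer);      /* must not hold locks the timer function takes */
        kfree(f);
}
</programlisting>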
1449 <chapter id="Efficiency">
1450 <title>Locking Speed</title>
1453 There are three main things to worry about when considering the speed of
1454 some code which does locking. First is concurrency: how many things
1455 are going to be waiting while someone else is holding a lock. Second
1456 is the time taken to actually acquire and release an uncontended lock.
1457 Third is using fewer, or smarter locks. I'm assuming that the lock is
1458 used fairly often: otherwise, you wouldn't be concerned about efficiency.
1462 Concurrency depends on how long the lock is usually held: you should
1463 hold the lock for as long as needed, but no longer. In the cache
1464 example, we always create the object without the lock held, and then
1465 grab the lock only when we are ready to insert it in the list.
1468 Acquisition times depend on how much damage the lock operations do to
1469 the pipeline (pipeline stalls) and how likely it is that this CPU was
1470 the last one to grab the lock (ie. is the lock cache-hot for this
1471 CPU): on a machine with more CPUs, this likelihood drops fast.
1472 Consider a 700MHz Intel Pentium III: an instruction takes about 0.7ns,
1473 an atomic increment takes about 58ns, a lock which is cache-hot on
1474 this CPU takes 160ns, and a cacheline transfer from another CPU takes
1475 an additional 170 to 360ns. (These figures from Paul McKenney's
1476 <ulink url="http://www.linuxjournal.com/article.php?sid=6993"> Linux
1477 Journal RCU article</ulink>).
1480 These two aims conflict: holding a lock for a short time might be done
1481 by splitting locks into parts (such as in our final per-object-lock
1482 example), but this increases the number of lock acquisitions, and the
1483 results are often slower than having a single lock. This is another
1484 reason to advocate locking simplicity.
1487 The third concern is addressed below: there are some methods to reduce
1488 the amount of locking which needs to be done.
1491 <sect1 id="efficiency-rwlocks">
1492 <title>Read/Write Lock Variants</title>
1495 Both spinlocks and semaphores have read/write variants:
1496 <type>rwlock_t</type> and <structname>struct rw_semaphore</structname>.
1497 These divide users into two classes: the readers and the writers. If
1498 you are only reading the data, you can get a read lock, but to write to
1499 the data you need the write lock. Many people can hold a read lock,
1500 but a writer must be the sole holder.
1504 If your code divides neatly along reader/writer lines (as our
1505 cache code does), and the lock is held by readers for
1506 significant lengths of time, using these locks can help. They
1507 are slightly slower than the normal locks though, so in practice
1508 <type>rwlock_t</type> is not usually worthwhile.
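<para>
If your situation does fit, usage looks like this (a sketch; the lock is invented, and if the data were also touched from interrupt context you would need the _irq/_bh variants):
</para>
<programlisting>
static rwlock_t map_lock = RW_LOCK_UNLOCKED;

        /* Readers: many can hold the lock at once. */
        read_lock(&map_lock);
        /* ... search the mapping ... */
        read_unlock(&map_lock);

        /* Writers: exclude both readers and other writers. */
        write_lock(&map_lock);
        /* ... update the mapping ... */
        write_unlock(&map_lock);
</programlisting>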
1512 <sect1 id="efficiency-read-copy-update">
1513 <title>Avoiding Locks: Read Copy Update</title>
1516 There is a special method of read/write locking called Read Copy
1517 Update. Using RCU, the readers can avoid taking a lock
1518 altogether: as we expect our cache to be read more often than
1519 updated (otherwise the cache is a waste of time), it is a
1520 candidate for this optimization.
1524 How do we get rid of read locks? Getting rid of read locks
1525 means that writers may be changing the list underneath the
1526 readers. That is actually quite simple: we can read a linked
1527 list while an element is being added if the writer adds the
1528 element very carefully. For example, adding
1529 <symbol>new</symbol> to a singly-linked list called
1530 <symbol>list</symbol>:
1534 new->next = list->next;
1535 wmb();
1536 list->next = new;
1540 The <function>wmb()</function> is a write memory barrier. It
1541 ensures that the first operation (setting the new element's
1542 <symbol>next</symbol> pointer) is complete and will be seen by
1543 all CPUs, before the second operation (putting the new
1544 element into the list) is performed. This is important, since modern
1545 compilers and modern CPUs can both reorder instructions unless
1546 told otherwise: we want a reader to either not see the new
1547 element at all, or see the new element with the
1548 <symbol>next</symbol> pointer correctly pointing at the rest of the list.
1552 Fortunately, there is a function to do this for standard
1553 <structname>struct list_head</structname> lists:
1554 <function>list_add_rcu()</function>
1555 (<filename>include/linux/list.h</filename>).
1558 Removing an element from the list is even simpler: we replace
1559 the pointer to the old element with a pointer to its successor,
1560 and readers will either see it, or skip over it.
1563 list->next = old->next;
1566 There is <function>list_del_rcu()</function>
1567 (<filename>include/linux/list.h</filename>) which does this (the
1568 normal version poisons the old object, which we don't want).
1571 The reader must also be careful: some CPUs can look through the
1572 <symbol>next</symbol> pointer to start reading the contents of
1573 the next element early, but don't realize that the pre-fetched
1574 contents are wrong when the <symbol>next</symbol> pointer changes
1575 underneath them. Once again, there is a
1576 <function>list_for_each_entry_rcu()</function>
1577 (<filename>include/linux/list.h</filename>) to help you. Of
1578 course, writers can just use
1579 <function>list_for_each_entry()</function>, since there cannot
1580 be two simultaneous writers.
1583 Our final dilemma is this: when can we actually destroy the
1584 removed element? Remember, a reader might be stepping through
1585 this element in the list right now: if we free this element and
1586 the <symbol>next</symbol> pointer changes, the reader will jump
1587 off into garbage and crash. We need to wait until we know that
1588 all the readers who were traversing the list when we deleted the
1589 element are finished. We use <function>call_rcu()</function> to
1590 register a callback which will actually destroy the object once
1591 the readers are finished.
1594 But how does Read Copy Update know when the readers are
1595 finished? The method is this: firstly, the readers always
1596 traverse the list inside
1597 <function>rcu_read_lock()</function>/<function>rcu_read_unlock()</function>
1598 pairs: these simply disable preemption so the reader won't go to
1599 sleep while reading the list.
1602 RCU then waits until every other CPU has slept at least once:
1603 since readers cannot sleep, we know that any readers which were
1604 traversing the list during the deletion are finished, and the
1605 callback is triggered. The real Read Copy Update code is a
1606 little more optimized than this, but this is the fundamental idea.
1611 --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
1612 +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
1614 #include <linux/list.h>
1615 #include <linux/slab.h>
1616 #include <linux/string.h>
1617 +#include <linux/rcupdate.h>
1618 #include <asm/semaphore.h>
1619 #include <asm/errno.h>
1623 - /* These two protected by cache_lock. */
1624 + /* This is protected by RCU */
1625 struct list_head list;
1628 + struct rcu_head rcu;
1632 /* Doesn't change once created. */
1637 - list_for_each_entry(i, &cache, list) {
1638 + list_for_each_entry_rcu(i, &cache, list) {
1639 if (i->id == id) {
1646 +/* Final discard done once we know no readers are looking. */
1647 +static void cache_delete_rcu(void *arg)
1652 /* Must be holding cache_lock */
1653 static void __cache_delete(struct object *obj)
1656 - list_del(&obj->list);
1658 + list_del_rcu(&obj->list);
1660 + call_rcu(&obj->rcu, cache_delete_rcu, obj);
1663 /* Must be holding cache_lock */
1664 static void __cache_add(struct object *obj)
1666 - list_add(&obj->list, &cache);
1667 + list_add_rcu(&obj->list, &cache);
1668 if (++cache_num > MAX_CACHE_SIZE) {
1669 struct object *i, *outcast = NULL;
1670 list_for_each_entry(i, &cache, list) {
1672 obj->popularity = 0;
1673 atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
1674 spin_lock_init(&obj->lock);
1675 + INIT_RCU_HEAD(&obj->rcu);
1677 spin_lock_irqsave(&cache_lock, flags);
1679 @@ -104,12 +114,11 @@
1680 struct object *cache_find(int id)
1683 - unsigned long flags;
1685 - spin_lock_irqsave(&cache_lock, flags);
1687 obj = __cache_find(id);
1690 - spin_unlock_irqrestore(&cache_lock, flags);
1691 + rcu_read_unlock();
1697 Note that the reader will alter the
1698 <structfield>popularity</structfield> member in
1699 <function>__cache_find()</function>, and now it doesn't hold a lock.
1700 One solution would be to make it an <type>atomic_t</type>, but for
1701 this usage, we don't really care about races: an approximate result is
1702 good enough, so I didn't change it.
1706 The result is that <function>cache_find()</function> requires no
1707 synchronization with any other functions, so is almost as fast on SMP
1708 as it would be on UP.
1712 There is a further optimization possible here: remember our original
1713 cache code, where there were no reference counts and the caller simply
1714 held the lock whenever using the object? This is still possible: if
1715 you hold the lock, no-one can delete the object, so you don't need to
1716 get and put the reference count.
1720 Now, because the 'read lock' in RCU is simply disabling preemption, a
1721 caller which always has preemption disabled between calling
1722 <function>cache_find()</function> and
1723 <function>object_put()</function> does not need to actually get and
1724 put the reference count: we could expose
1725 <function>__cache_find()</function> by making it non-static, and
1726 such callers could simply call that.
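<para>
Such a caller might look like this (a sketch only: it assumes <function>__cache_find()</function> really has been made non-static, and uses <function>rcu_read_lock()</function> to make the preemption disabling explicit):
</para>
<programlisting>
        char name[32];
        struct object *obj;

        rcu_read_lock();                /* disables preemption */
        obj = __cache_find(id);
        if (obj)
                strlcpy(name, obj->name, sizeof(name));
        rcu_read_unlock();              /* obj must not be touched after this */
</programlisting>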
1729 The benefit here is that the reference count is not written to: the
1730 object is not altered in any way, which is much faster on SMP
1731 machines due to caching.
1735 <sect1 id="per-cpu">
1736 <title>Per-CPU Data</title>
1739 Another technique for avoiding locking which is used fairly
1740 widely is to duplicate information for each CPU. For example,
1741 if you wanted to keep a count of a common condition, you could
1742 use a spin lock and a single counter. Nice and simple.
1746 If that was too slow (it's usually not, but if you've got a
1747 really big machine to test on and can show that it is), you
1748 could instead use a counter for each CPU, then none of them need
1749 an exclusive lock. See <function>DEFINE_PER_CPU()</function>,
1750 <function>get_cpu_var()</function> and
1751 <function>put_cpu_var()</function>
1752 (<filename class="headerfile">include/linux/percpu.h</filename>).
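<para>
A sketch of such a per-CPU counter (the name is invented):
</para>
<programlisting>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, hit_count);

static void count_hit(void)
{
        /* get_cpu_var() disables preemption and gives us this CPU's copy */
        get_cpu_var(hit_count)++;
        put_cpu_var(hit_count);         /* re-enables preemption */
}
</programlisting>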
1756 Of particular use for simple per-cpu counters is the
1757 <type>local_t</type> type, and the
1758 <function>cpu_local_inc()</function> and related functions,
1759 which are more efficient than simple code on some architectures
1760 (<filename class="headerfile">include/asm/local.h</filename>).
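<para>
The same counter using <type>local_t</type> might look like this (a sketch; on some architectures <function>cpu_local_inc()</function> becomes a single instruction):
</para>
<programlisting>
#include <asm/local.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(local_t, hit_count);

static void count_hit(void)
{
        cpu_local_inc(hit_count);       /* atomic only with respect to this CPU */
}
</programlisting>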
1764 Note that there is no simple, reliable way of getting an exact
1765 value of such a counter, without introducing more locks. This
1766 is not a problem for some uses.
1770 <sect1 id="mostly-hardirq">
1771 <title>Data Which Is Mostly Used By An IRQ Handler</title>
1774 If data is always accessed from within the same IRQ handler, you
1775 don't need a lock at all: the kernel already guarantees that the
1776 irq handler will not run simultaneously on multiple CPUs.
1779 Manfred Spraul points out that you can still do this, even if
1780 the data is very occasionally accessed in user context or
1781 softirqs/tasklets. The irq handler doesn't use a lock, and
1782 all other accesses are done like so:
1786 spin_lock(&lock);
1787 disable_irq(irq);
1788 ...
1789 enable_irq(irq);
1790 spin_unlock(&lock);
1793 The <function>disable_irq()</function> prevents the irq handler
1794 from running (and waits for it to finish if it's currently
1795 running on other CPUs). The spinlock prevents any other
1796 accesses happening at the same time. Naturally, this is slower
1797 than just a <function>spin_lock_irq()</function> call, so it
1798 only makes sense if this type of access happens extremely rarely.
1804 <chapter id="sleeping-things">
1805 <title>What Functions Are Safe To Call From Interrupts?</title>
1808 Many functions in the kernel sleep (ie. call schedule())
1809 directly or indirectly: you can never call them while holding a
1810 spinlock, or with preemption disabled. This also means you need
1811 to be in user context: calling them from an interrupt is illegal.
1814 <sect1 id="sleeping">
1815 <title>Some Functions Which Sleep</title>
1818 The most common ones are listed below, but you usually have to
1819 read the code to find out if other calls are safe. If everyone
1820 else who calls it can sleep, you probably need to be able to
1821 sleep, too. In particular, registration and deregistration
1822 functions usually expect to be called from user context, and can sleep.
1830 <firstterm linkend="gloss-userspace">userspace</firstterm>:
1835 <function>copy_from_user()</function>
1840 <function>copy_to_user()</function>
1845 <function>get_user()</function>
1850 <function>put_user()</function>
1858 <function>kmalloc(GFP_KERNEL)</function>
1864 <function>down_interruptible()</function> and
1865 <function>down()</function>
1868 There is a <function>down_trylock()</function> which can be
1869 used inside interrupt context, as it will not sleep.
1870 <function>up()</function> will also never sleep.
1876 <sect1 id="dont-sleep">
1877 <title>Some Functions Which Don't Sleep</title>
1880 Some functions are safe to call from any context, or holding almost any lock.
1887 <function>printk()</function>
1892 <function>kfree()</function>
1897 <function>add_timer()</function> and <function>del_timer()</function>
1904 <chapter id="references">
1905 <title>Further reading</title>
1910 <filename>Documentation/spinlocks.txt</filename>:
1911 Linus Torvalds' spinlocking tutorial in the kernel sources.
1917 Unix Systems for Modern Architectures: Symmetric
1918 Multiprocessing and Caching for Kernel Programmers:
1922 Curt Schimmel's very good introduction to kernel level
1923 locking (not written for Linux, but nearly everything
1924 applies). The book is expensive, but really worth every
1925 penny to understand SMP locking. [ISBN: 0201633388]
1931 <chapter id="thanks">
1932 <title>Thanks</title>
1935 Thanks to Telsa Gwynne for DocBooking, neatening and adding
1940 Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul
1941 Mackerras, Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim
1942 Waugh, Pete Zaitcev, James Morris, Robert Love, Paul McKenney,
1943 John Ashby for proofreading, correcting, flaming, commenting.
1947 Thanks to the cabal for having no influence on this document.
1951 <glossary id="glossary">
1952 <title>Glossary</title>
1954 <glossentry id="gloss-preemption">
1955 <glossterm>preemption</glossterm>
1958 Prior to 2.5, or when <symbol>CONFIG_PREEMPT</symbol> is
1959 unset, processes in user context inside the kernel would not
1960 preempt each other (ie. you had that CPU until you gave it up,
1961 except for interrupts). With the addition of
1962 <symbol>CONFIG_PREEMPT</symbol> in 2.5.4, this changed: when
1963 in user context, higher priority tasks can "cut in": spinlocks
1964 were changed to disable preemption, even on UP.
1969 <glossentry id="gloss-bh">
1970 <glossterm>bh</glossterm>
1973 Bottom Half: for historical reasons, functions with
1974 '_bh' in them often now refer to any software interrupt, e.g.
1975 <function>spin_lock_bh()</function> blocks any software interrupt
1976 on the current CPU. Bottom halves are deprecated, and will
1977 eventually be replaced by tasklets. Only one bottom half will be
1978 running at any time.
1983 <glossentry id="gloss-hwinterrupt">
1984 <glossterm>Hardware Interrupt / Hardware IRQ</glossterm>
1987 Hardware interrupt request. <function>in_irq()</function> returns
1988 <returnvalue>true</returnvalue> in a hardware interrupt handler.
1993 <glossentry id="gloss-interruptcontext">
1994 <glossterm>Interrupt Context</glossterm>
1997 Not user context: processing a hardware irq or software irq.
1998 Indicated by the <function>in_interrupt()</function> macro
1999 returning <returnvalue>true</returnvalue>.
2004 <glossentry id="gloss-smp">
2005 <glossterm><acronym>SMP</acronym></glossterm>
2008 Symmetric Multi-Processor: kernels compiled for multiple-CPU
2009 machines. (CONFIG_SMP=y).
2014 <glossentry id="gloss-softirq">
2015 <glossterm>Software Interrupt / softirq</glossterm>
2018 Software interrupt handler. <function>in_irq()</function> returns
2019 <returnvalue>false</returnvalue>; <function>in_softirq()</function>
2020 returns <returnvalue>true</returnvalue>. Tasklets and softirqs
2021 both fall into the category of 'software interrupts'.
2024 Strictly speaking a softirq is one of up to 32 enumerated software
2025 interrupts which can run on multiple CPUs at once.
2026 Sometimes used to refer to tasklets as
2027 well (ie. all software interrupts).
2032 <glossentry id="gloss-tasklet">
2033 <glossterm>tasklet</glossterm>
2036 A dynamically-registrable software interrupt,
2037 which is guaranteed to only run on one CPU at a time.
2042 <glossentry id="gloss-timers">
2043 <glossterm>timer</glossterm>
2046 A dynamically-registrable software interrupt, which is run at
2047 (or close to) a given time. When running, it is just like a
2048 tasklet (in fact, they are called from the TIMER_SOFTIRQ).
2053 <glossentry id="gloss-up">
2054 <glossterm><acronym>UP</acronym></glossterm>
2057 Uni-Processor: Non-SMP. (CONFIG_SMP=n).
2062 <glossentry id="gloss-usercontext">
2063 <glossterm>User Context</glossterm>
2066 The kernel executing on behalf of a particular process (ie. a
2067 system call or trap) or kernel thread. You can tell which
2068 process with the <symbol>current</symbol> macro. Not to
2069 be confused with userspace. Can be interrupted by software or
2070 hardware interrupts.
2075 <glossentry id="gloss-userspace">
2076 <glossterm>Userspace</glossterm>
2079 A process executing its own code outside the kernel.