4 * Provide support for fcntl()'s F_GETLK, F_SETLK, and F_SETLKW calls.
5 * Doug Evans (dje@spiff.uucp), August 07, 1992
7 * Deadlock detection added.
8 * FIXME: one thing isn't handled yet:
9 * - mandatory locks (requires lots of changes elsewhere)
10 * Kelly Carmichael (kelly@[142.24.8.65]), September 17, 1994.
12 * Miscellaneous edits, and a total rewrite of posix_lock_file() code.
13 * Kai Petzke (wpp@marie.physik.tu-berlin.de), 1994
15 * Converted file_lock_table to a linked list from an array, which eliminates
16 * the limits on how many active file locks are open.
17 * Chad Page (pageone@netcom.com), November 27, 1994
19 * Removed dependency on file descriptors. dup()'ed file descriptors now
20 * get the same locks as the original file descriptors, and a close() on
21 * any file descriptor removes ALL the locks on the file for the current
22 * process. Since locks still depend on the process id, locks are inherited
23 * after an exec() but not after a fork(). This agrees with POSIX, and both
24 * BSD and SVR4 practice.
25 * Andy Walker (andy@lysaker.kvaerner.no), February 14, 1995
27 * Scrapped free list which is redundant now that we allocate locks
28 * dynamically with kmalloc()/kfree().
29 * Andy Walker (andy@lysaker.kvaerner.no), February 21, 1995
31 * Implemented two lock personalities - FL_FLOCK and FL_POSIX.
33 * FL_POSIX locks are created with calls to fcntl() and lockf() through the
34 * fcntl() system call. They have the semantics described above.
36 * FL_FLOCK locks are created with calls to flock(), through the flock()
37 * system call, which is new. Old C libraries implement flock() via fcntl()
38 * and will continue to use the old, broken implementation.
40 * FL_FLOCK locks follow the 4.4 BSD flock() semantics. They are associated
41 * with a file pointer (filp). As a result they can be shared by a parent
42 * process and its children after a fork(). They are removed when the last
43 * file descriptor referring to the file pointer is closed (unless explicitly
46 * FL_FLOCK locks never deadlock; an existing lock is always removed before
47 * upgrading from shared to exclusive (or vice versa). When this happens
48 * any processes blocked by the current lock are woken up and allowed to
49 * run before the new lock is applied.
50 * Andy Walker (andy@lysaker.kvaerner.no), June 09, 1995
52 * Removed some race conditions in flock_lock_file(), marked other possible
53 * races. Just grep for FIXME to see them.
54 * Dmitry Gorodchanin (pgmdsg@ibi.com), February 09, 1996.
56 * Addressed Dmitry's concerns. Deadlock checking no longer recursive.
57 * Lock allocation changed to GFP_ATOMIC as we can't afford to sleep
58 * once we've checked for blocking and deadlocking.
59 * Andy Walker (andy@lysaker.kvaerner.no), April 03, 1996.
61 * Initial implementation of mandatory locks. SunOS turned out to be
62 * a rotten model, so I implemented the "obvious" semantics.
63 * See 'Documentation/mandatory.txt' for details.
64 * Andy Walker (andy@lysaker.kvaerner.no), April 06, 1996.
66 * Don't allow mandatory locks on mmap()'ed files. Added simple functions to
67 * check if a file has mandatory locks, used by mmap(), open() and creat() to
68 * see if the system call should be rejected. Ref. HP-UX/SunOS/Solaris Reference
70 * Andy Walker (andy@lysaker.kvaerner.no), April 09, 1996.
72 * Tidied up block list handling. Added '/proc/locks' interface.
73 * Andy Walker (andy@lysaker.kvaerner.no), April 24, 1996.
75 * Fixed deadlock condition for pathological code that mixes calls to
76 * flock() and fcntl().
77 * Andy Walker (andy@lysaker.kvaerner.no), April 29, 1996.
79 * Allow only one type of locking scheme (FL_POSIX or FL_FLOCK) to be in use
80 * for a given file at a time. Changed the CONFIG_LOCK_MANDATORY scheme to
81 * guarantee sensible behaviour in the case where file system modules might
82 * be compiled with different options than the kernel itself.
83 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
85 * Added a couple of missing wake_up() calls. Thanks to Thomas Meckel
86 * (Thomas.Meckel@mni.fh-giessen.de) for spotting this.
87 * Andy Walker (andy@lysaker.kvaerner.no), May 15, 1996.
89 * Changed FL_POSIX locks to use the block list in the same way as FL_FLOCK
90 * locks. Changed process synchronisation to avoid dereferencing locks that
91 * have already been freed.
92 * Andy Walker (andy@lysaker.kvaerner.no), Sep 21, 1996.
94 * Made the block list a circular list to minimise searching in the list.
95 * Andy Walker (andy@lysaker.kvaerner.no), Sep 25, 1996.
97 * Made mandatory locking a mount option. Default is not to allow mandatory
99 * Andy Walker (andy@lysaker.kvaerner.no), Oct 04, 1996.
101 * Some adaptations for NFS support.
102 * Olaf Kirch (okir@monad.swb.de), Dec 1996,
104 * Fixed /proc/locks interface so that we can't overrun the buffer we are handed.
105 * Andy Walker (andy@lysaker.kvaerner.no), May 12, 1997.
107 * Use slab allocator instead of kmalloc/kfree.
108 * Use generic list implementation from <linux/list.h>.
109 * Sped up posix_locks_deadlock by only considering blocked locks.
110 * Matthew Wilcox <willy@debian.org>, March, 2000.
112 * Leases and LOCK_MAND
113 * Matthew Wilcox <willy@debian.org>, June, 2000.
114 * Stephen Rothwell <sfr@canb.auug.org.au>, June, 2000.
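 *
 * A minimal userspace sketch (not part of this file, purely illustrative;
 * the filename and byte range are made-up values, error handling omitted)
 * contrasting the two lock personalities described above:
 *
 *	#include <fcntl.h>
 *	#include <sys/file.h>
 *	#include <unistd.h>
 *
 *	int main(void)
 *	{
 *		int fd = open("demo.dat", O_RDWR | O_CREAT, 0644);
 *
 *		// FL_POSIX: byte-range lock via fcntl(), owned by the process
 *		struct flock pl = {
 *			.l_type   = F_WRLCK,
 *			.l_whence = SEEK_SET,
 *			.l_start  = 0,
 *			.l_len    = 100,	// bytes 0..99
 *		};
 *		fcntl(fd, F_SETLK, &pl);
 *
 *		// FL_FLOCK: whole-file lock via flock(), tied to the open
 *		// file description, so a child shares it after fork()
 *		flock(fd, LOCK_EX);
 *
 *		close(fd);	// drops both locks for this process
 *		return 0;
 *	}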
117 #include <linux/capability.h>
118 #include <linux/file.h>
119 #include <linux/fs.h>
120 #include <linux/init.h>
121 #include <linux/module.h>
122 #include <linux/security.h>
123 #include <linux/slab.h>
124 #include <linux/smp_lock.h>
125 #include <linux/syscalls.h>
126 #include <linux/time.h>
127 #include <linux/rcupdate.h>
128 #include <linux/vs_base.h>
129 #include <linux/vs_limit.h>
131 #include <asm/semaphore.h>
132 #include <asm/uaccess.h>
134 #define IS_POSIX(fl) (fl->fl_flags & FL_POSIX)
135 #define IS_FLOCK(fl) (fl->fl_flags & FL_FLOCK)
136 #define IS_LEASE(fl) (fl->fl_flags & FL_LEASE)
138 int leases_enable = 1;
139 int lease_break_time = 45;
141 #define for_each_lock(inode, lockp) \
142 for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)
144 static LIST_HEAD(file_lock_list);
145 static LIST_HEAD(blocked_list);
147 static struct kmem_cache *filelock_cache __read_mostly;
149 /* Allocate an empty lock structure. */
150 static struct file_lock *locks_alloc_lock(void)
152 if (!vx_locks_avail(1))
154 return kmem_cache_alloc(filelock_cache, GFP_KERNEL);
157 static void locks_release_private(struct file_lock *fl)
160 if (fl->fl_ops->fl_release_private)
161 fl->fl_ops->fl_release_private(fl);
165 if (fl->fl_lmops->fl_release_private)
166 fl->fl_lmops->fl_release_private(fl);
172 /* Free a lock which is not in use. */
173 static void locks_free_lock(struct file_lock *fl)
175 BUG_ON(waitqueue_active(&fl->fl_wait));
176 BUG_ON(!list_empty(&fl->fl_block));
177 BUG_ON(!list_empty(&fl->fl_link));
180 locks_release_private(fl);
181 kmem_cache_free(filelock_cache, fl);
184 void locks_init_lock(struct file_lock *fl)
186 INIT_LIST_HEAD(&fl->fl_link);
187 INIT_LIST_HEAD(&fl->fl_block);
188 init_waitqueue_head(&fl->fl_wait);
190 fl->fl_fasync = NULL;
196 fl->fl_start = fl->fl_end = 0;
202 EXPORT_SYMBOL(locks_init_lock);
205 * Initialises the fields of the file lock which are invariant for
208 static void init_once(void *foo, struct kmem_cache *cache, unsigned long flags)
210 struct file_lock *lock = (struct file_lock *) foo;
212 if ((flags & (SLAB_CTOR_VERIFY|SLAB_CTOR_CONSTRUCTOR)) !=
213 SLAB_CTOR_CONSTRUCTOR)
216 locks_init_lock(lock);
219 static void locks_copy_private(struct file_lock *new, struct file_lock *fl)
222 if (fl->fl_ops->fl_copy_lock)
223 fl->fl_ops->fl_copy_lock(new, fl);
224 new->fl_ops = fl->fl_ops;
227 if (fl->fl_lmops->fl_copy_lock)
228 fl->fl_lmops->fl_copy_lock(new, fl);
229 new->fl_lmops = fl->fl_lmops;
234 * Initialize a new lock from an existing file_lock structure.
236 static void __locks_copy_lock(struct file_lock *new, const struct file_lock *fl)
238 new->fl_owner = fl->fl_owner;
239 new->fl_pid = fl->fl_pid;
241 new->fl_flags = fl->fl_flags;
242 new->fl_type = fl->fl_type;
243 new->fl_start = fl->fl_start;
244 new->fl_end = fl->fl_end;
246 new->fl_lmops = NULL;
249 void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
251 locks_release_private(new);
253 __locks_copy_lock(new, fl);
254 new->fl_file = fl->fl_file;
255 new->fl_ops = fl->fl_ops;
256 new->fl_lmops = fl->fl_lmops;
257 new->fl_xid = fl->fl_xid;
259 locks_copy_private(new, fl);
262 EXPORT_SYMBOL(locks_copy_lock);
264 static inline int flock_translate_cmd(int cmd) {
266 return cmd & (LOCK_MAND | LOCK_RW);
278 /* Fill in a file_lock structure with an appropriate FLOCK lock. */
279 static int flock_make_lock(struct file *filp, struct file_lock **lock,
282 struct file_lock *fl;
283 int type = flock_translate_cmd(cmd);
287 fl = locks_alloc_lock();
292 fl->fl_pid = current->tgid;
293 fl->fl_flags = FL_FLOCK;
295 fl->fl_end = OFFSET_MAX;
297 vxd_assert(filp->f_xid == vx_current_xid(),
298 "f_xid(%d) == current(%d)", filp->f_xid, vx_current_xid());
299 fl->fl_xid = filp->f_xid;
306 static int assign_type(struct file_lock *fl, int type)
320 /* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
323 static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
328 switch (l->l_whence) {
336 start = i_size_read(filp->f_path.dentry->d_inode);
342 /* POSIX-1996 leaves the case l->l_len < 0 undefined;
343 POSIX-2001 defines it. */
347 fl->fl_end = OFFSET_MAX;
349 end = start + l->l_len - 1;
351 } else if (l->l_len < 0) {
358 fl->fl_start = start; /* we record the absolute position */
359 if (fl->fl_end < fl->fl_start)
362 fl->fl_owner = current->files;
363 fl->fl_pid = current->tgid;
365 fl->fl_flags = FL_POSIX;
369 return assign_type(fl, l->l_type);
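/*
 * Worked example of the mapping done above (illustrative values only):
 *
 *	struct flock l = { .l_whence = SEEK_SET, .l_start = 100, .l_len = -10 };
 *
 * Under the POSIX-2001 rule for negative l_len this covers bytes 90..99,
 * i.e. fl_start = 90 and fl_end = 99.  l_len == 0 means "to end of file",
 * which is recorded here as fl_end = OFFSET_MAX.
 */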
372 #if BITS_PER_LONG == 32
373 static int flock64_to_posix_lock(struct file *filp, struct file_lock *fl,
378 switch (l->l_whence) {
386 start = i_size_read(filp->f_path.dentry->d_inode);
395 fl->fl_end = OFFSET_MAX;
397 fl->fl_end = start + l->l_len - 1;
398 } else if (l->l_len < 0) {
399 fl->fl_end = start - 1;
404 fl->fl_start = start; /* we record the absolute position */
405 if (fl->fl_end < fl->fl_start)
408 fl->fl_owner = current->files;
409 fl->fl_pid = current->tgid;
411 fl->fl_flags = FL_POSIX;
419 fl->fl_type = l->l_type;
429 /* default lease lock manager operations */
430 static void lease_break_callback(struct file_lock *fl)
432 kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
435 static void lease_release_private_callback(struct file_lock *fl)
440 f_delown(fl->fl_file);
441 fl->fl_file->f_owner.signum = 0;
444 static int lease_mylease_callback(struct file_lock *fl, struct file_lock *try)
446 return fl->fl_file == try->fl_file;
449 static struct lock_manager_operations lease_manager_ops = {
450 .fl_break = lease_break_callback,
451 .fl_release_private = lease_release_private_callback,
452 .fl_mylease = lease_mylease_callback,
453 .fl_change = lease_modify,
457 * Initialize a lease, using the default lock manager operations
459 static int lease_init(struct file *filp, int type, struct file_lock *fl)
461 if (assign_type(fl, type) != 0)
464 fl->fl_owner = current->files;
465 fl->fl_pid = current->tgid;
466 fl->fl_xid = vx_current_xid();
469 fl->fl_flags = FL_LEASE;
471 fl->fl_end = OFFSET_MAX;
473 fl->fl_lmops = &lease_manager_ops;
477 /* Allocate a file_lock initialised to this type of lease */
478 static int lease_alloc(struct file *filp, int type, struct file_lock **flp)
480 struct file_lock *fl = locks_alloc_lock();
486 fl->fl_xid = vx_current_xid();
488 vxd_assert(filp->f_xid == fl->fl_xid,
489 "f_xid(%d) == fl_xid(%d)", filp->f_xid, fl->fl_xid);
491 error = lease_init(filp, type, fl);
501 /* Check if two locks overlap each other.
503 static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
505 return ((fl1->fl_end >= fl2->fl_start) &&
506 (fl2->fl_end >= fl1->fl_start));
510 * Check whether two locks have the same owner.
512 static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
514 if (fl1->fl_lmops && fl1->fl_lmops->fl_compare_owner)
515 return fl2->fl_lmops == fl1->fl_lmops &&
516 fl1->fl_lmops->fl_compare_owner(fl1, fl2);
517 return fl1->fl_owner == fl2->fl_owner;
520 /* Remove waiter from blocker's block list.
521 * When blocker ends up pointing to itself then the list is empty.
523 static void __locks_delete_block(struct file_lock *waiter)
525 list_del_init(&waiter->fl_block);
526 list_del_init(&waiter->fl_link);
527 waiter->fl_next = NULL;
532 static void locks_delete_block(struct file_lock *waiter)
535 __locks_delete_block(waiter);
539 /* Insert waiter into blocker's block list.
540 * We use a circular list so that processes can be easily woken up in
541 * the order they blocked. The documentation doesn't require this but
542 * it seems like the reasonable thing to do.
544 static void locks_insert_block(struct file_lock *blocker,
545 struct file_lock *waiter)
547 BUG_ON(!list_empty(&waiter->fl_block));
548 list_add_tail(&waiter->fl_block, &blocker->fl_block);
549 waiter->fl_next = blocker;
550 if (IS_POSIX(blocker))
551 list_add(&waiter->fl_link, &blocked_list);
554 /* Wake up processes blocked waiting for blocker.
555 * If told to wait then schedule the processes until the block list
556 * is empty, otherwise empty the block list ourselves.
558 static void locks_wake_up_blocks(struct file_lock *blocker)
560 while (!list_empty(&blocker->fl_block)) {
561 struct file_lock *waiter = list_entry(blocker->fl_block.next,
562 struct file_lock, fl_block);
563 __locks_delete_block(waiter);
564 if (waiter->fl_lmops && waiter->fl_lmops->fl_notify)
565 waiter->fl_lmops->fl_notify(waiter);
567 wake_up(&waiter->fl_wait);
571 /* Insert file lock fl into an inode's lock list at the position indicated
572 * by pos. At the same time add the lock to the global file lock list.
574 static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
576 list_add(&fl->fl_link, &file_lock_list);
578 /* insert into file's list */
582 if (fl->fl_ops && fl->fl_ops->fl_insert)
583 fl->fl_ops->fl_insert(fl);
587 * Delete a lock and then free it.
588 * Wake up processes that are blocked waiting for this lock,
589 * notify the FS that the lock has been cleared and
590 * finally free the lock.
592 static void locks_delete_lock(struct file_lock **thisfl_p)
594 struct file_lock *fl = *thisfl_p;
596 *thisfl_p = fl->fl_next;
598 list_del_init(&fl->fl_link);
600 fasync_helper(0, fl->fl_file, 0, &fl->fl_fasync);
601 if (fl->fl_fasync != NULL) {
602 printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
603 fl->fl_fasync = NULL;
606 if (fl->fl_ops && fl->fl_ops->fl_remove)
607 fl->fl_ops->fl_remove(fl);
609 locks_wake_up_blocks(fl);
613 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
614 * checks for shared/exclusive status of overlapping locks.
616 static int locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
618 if (sys_fl->fl_type == F_WRLCK)
620 if (caller_fl->fl_type == F_WRLCK)
625 /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
626 * checking before calling the locks_conflict().
628 static int posix_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
630 /* POSIX locks owned by the same process do not conflict with
633 if (!IS_POSIX(sys_fl) || posix_same_owner(caller_fl, sys_fl))
636 /* Check whether they overlap */
637 if (!locks_overlap(caller_fl, sys_fl))
640 return (locks_conflict(caller_fl, sys_fl));
643 /* Determine if lock sys_fl blocks lock caller_fl. FLOCK specific
644 * checking before calling the locks_conflict().
646 static int flock_locks_conflict(struct file_lock *caller_fl, struct file_lock *sys_fl)
648 /* FLOCK locks referring to the same filp do not conflict with
651 if (!IS_FLOCK(sys_fl) || (caller_fl->fl_file == sys_fl->fl_file))
653 if ((caller_fl->fl_type & LOCK_MAND) || (sys_fl->fl_type & LOCK_MAND))
656 return (locks_conflict(caller_fl, sys_fl));
659 static int interruptible_sleep_on_locked(wait_queue_head_t *fl_wait, int timeout)
662 DECLARE_WAITQUEUE(wait, current);
664 __set_current_state(TASK_INTERRUPTIBLE);
665 add_wait_queue(fl_wait, &wait);
669 result = schedule_timeout(timeout);
670 if (signal_pending(current))
671 result = -ERESTARTSYS;
672 remove_wait_queue(fl_wait, &wait);
673 __set_current_state(TASK_RUNNING);
677 static int locks_block_on_timeout(struct file_lock *blocker, struct file_lock *waiter, int time)
680 locks_insert_block(blocker, waiter);
681 result = interruptible_sleep_on_locked(&waiter->fl_wait, time);
682 __locks_delete_block(waiter);
687 posix_test_lock(struct file *filp, struct file_lock *fl,
688 struct file_lock *conflock)
690 struct file_lock *cfl;
693 for (cfl = filp->f_path.dentry->d_inode->i_flock; cfl; cfl = cfl->fl_next) {
696 if (posix_locks_conflict(cfl, fl))
700 __locks_copy_lock(conflock, cfl);
708 EXPORT_SYMBOL(posix_test_lock);
710 /* This function tests for deadlock condition before putting a process to
711 * sleep. The detection scheme is no longer recursive. Recursive was neat,
712 * but dangerous - we risked stack corruption if the lock data was bad, or
713 * if the recursion was too deep for any other reason.
715 * We rely on the fact that a task can only be on one lock's wait queue
716 * at a time. When we find blocked_task on a wait queue we can re-search
717 * with blocked_task equal to that queue's owner, until either blocked_task
718 * isn't found, or blocked_task is found on a queue owned by my_task.
720 * Note: the above assumption may not be true when handling lock requests
721 * from a broken NFS client. But broken NFS clients have a lot more to
722 * worry about than proper deadlock detection anyway... --okir
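 *
 * A concrete (hypothetical) illustration: task A holds a lock on range X
 * and is blocked waiting for range Y; task B holds Y and now requests X.
 * Searching blocked_list from B's request finds A's waiter, whose blocker
 * is owned by B itself, so the caller fails the request with -EDEADLK
 * instead of putting B to sleep.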
724 static int posix_locks_deadlock(struct file_lock *caller_fl,
725 struct file_lock *block_fl)
727 struct list_head *tmp;
730 if (posix_same_owner(caller_fl, block_fl))
732 list_for_each(tmp, &blocked_list) {
733 struct file_lock *fl = list_entry(tmp, struct file_lock, fl_link);
734 if (posix_same_owner(fl, block_fl)) {
743 /* Try to create a FLOCK lock on filp. We always insert new FLOCK locks
744 * at the head of the list, but that's secret knowledge known only to
745 * flock_lock_file and posix_lock_file.
747 * Note that if called with an FL_EXISTS argument, the caller may determine
748 * whether or not a lock was successfully freed by testing the return
751 static int flock_lock_file(struct file *filp, struct file_lock *request)
753 struct file_lock *new_fl = NULL;
754 struct file_lock **before;
755 struct inode * inode = filp->f_path.dentry->d_inode;
760 if (request->fl_flags & FL_ACCESS)
762 for_each_lock(inode, before) {
763 struct file_lock *fl = *before;
768 if (filp != fl->fl_file)
770 if (request->fl_type == fl->fl_type)
773 locks_delete_lock(before);
777 if (request->fl_type == F_UNLCK) {
778 if ((request->fl_flags & FL_EXISTS) && !found)
784 new_fl = locks_alloc_lock();
788 * If a higher-priority process was blocked on the old file lock,
789 * give it the opportunity to lock the file.
796 for_each_lock(inode, before) {
797 struct file_lock *fl = *before;
802 if (!flock_locks_conflict(request, fl))
805 if (request->fl_flags & FL_SLEEP)
806 locks_insert_block(fl, request);
809 if (request->fl_flags & FL_ACCESS)
811 locks_copy_lock(new_fl, request);
812 locks_insert_lock(&inode->i_flock, new_fl);
813 vx_locks_inc(new_fl);
820 locks_free_lock(new_fl);
824 static int __posix_lock_file_conf(struct inode *inode, struct file_lock *request,
825 struct file_lock *conflock, xid_t xid)
827 struct file_lock *fl;
828 struct file_lock *new_fl = NULL;
829 struct file_lock *new_fl2 = NULL;
830 struct file_lock *left = NULL;
831 struct file_lock *right = NULL;
832 struct file_lock **before;
833 int error, added = 0;
835 vxd_assert(xid == vx_current_xid(),
836 "xid(%d) == current(%d)", xid, vx_current_xid());
838 * We may need two file_lock structures for this operation,
839 * so we get them in advance to avoid races.
841 * In some cases we can be sure that no new locks will be needed
843 if (!(request->fl_flags & FL_ACCESS) &&
844 (request->fl_type != F_UNLCK ||
845 request->fl_start != 0 || request->fl_end != OFFSET_MAX)) {
846 new_fl = locks_alloc_lock();
847 new_fl->fl_xid = xid;
848 vx_locks_inc(new_fl);
849 new_fl2 = locks_alloc_lock();
850 new_fl2->fl_xid = xid;
851 vx_locks_inc(new_fl2);
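	/*
	 * Illustration (made-up ranges): if this owner holds a write lock
	 * on bytes 0..99 and requests F_UNLCK on 40..59, the old lock is
	 * split into 0..39 and 60..99, which is exactly the case that needs
	 * the second structure allocated above.
	 */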
855 if (request->fl_type != F_UNLCK) {
856 for_each_lock(inode, before) {
857 struct file_lock *fl = *before;
860 if (!posix_locks_conflict(request, fl))
863 locks_copy_lock(conflock, fl);
865 if (!(request->fl_flags & FL_SLEEP))
868 if (posix_locks_deadlock(request, fl))
871 locks_insert_block(fl, request);
876 /* If we're just looking for a conflict, we're done. */
878 if (request->fl_flags & FL_ACCESS)
882 * Find the first old lock with the same owner as the new lock.
885 before = &inode->i_flock;
887 /* First skip locks owned by other processes. */
888 while ((fl = *before) && (!IS_POSIX(fl) ||
889 !posix_same_owner(request, fl))) {
890 before = &fl->fl_next;
893 /* Process locks with this owner. */
894 while ((fl = *before) && posix_same_owner(request, fl)) {
895 /* Detect adjacent or overlapping regions (if same lock type)
897 if (request->fl_type == fl->fl_type) {
898 /* In all comparisons of start vs end, use
899 * "start - 1" rather than "end + 1". If end
900 * is OFFSET_MAX, end + 1 will become negative.
902 if (fl->fl_end < request->fl_start - 1)
904 /* If the next lock in the list has entirely bigger
905 * addresses than the new one, insert the lock here.
907 if (fl->fl_start - 1 > request->fl_end)
910 /* If we come here, the new and old lock are of the
911 * same type and adjacent or overlapping. Make one
912 * lock yielding from the lower start address of both
913 * locks to the higher end address.
915 if (fl->fl_start > request->fl_start)
916 fl->fl_start = request->fl_start;
918 request->fl_start = fl->fl_start;
919 if (fl->fl_end < request->fl_end)
920 fl->fl_end = request->fl_end;
922 request->fl_end = fl->fl_end;
924 locks_delete_lock(before);
931 /* Processing for different lock types is a bit
934 if (fl->fl_end < request->fl_start)
936 if (fl->fl_start > request->fl_end)
938 if (request->fl_type == F_UNLCK)
940 if (fl->fl_start < request->fl_start)
942 /* If the next lock in the list has a higher end
943 * address than the new one, insert the new one here.
945 if (fl->fl_end > request->fl_end) {
949 if (fl->fl_start >= request->fl_start) {
950 /* The new lock completely replaces an old
951 * one (this may happen several times).
954 locks_delete_lock(before);
957 /* Replace the old lock with the new one.
958 * Wake up anybody waiting for the old one,
959 * as the change in lock type might satisfy
962 locks_wake_up_blocks(fl);
963 fl->fl_start = request->fl_start;
964 fl->fl_end = request->fl_end;
965 fl->fl_type = request->fl_type;
966 locks_release_private(fl);
967 locks_copy_private(fl, request);
972 /* Go on to next lock.
975 before = &fl->fl_next;
979 * The above code only modifies existing locks in case of
980 * merging or replacing. If new lock(s) need to be inserted
981 * all modifications are done below this, so it's safe yet to
984 error = -ENOLCK; /* "no luck" */
985 if (right && left == right && !new_fl2)
990 if (request->fl_type == F_UNLCK) {
991 if (request->fl_flags & FL_EXISTS)
1000 locks_copy_lock(new_fl, request);
1001 locks_insert_lock(before, new_fl);
1005 if (left == right) {
1006 /* The new lock breaks the old one in two pieces,
1007 * so we have to use the second new lock.
1011 locks_copy_lock(left, right);
1012 locks_insert_lock(before, left);
1014 right->fl_start = request->fl_end + 1;
1015 locks_wake_up_blocks(right);
1018 left->fl_end = request->fl_start - 1;
1019 locks_wake_up_blocks(left);
1024 * Free any unused locks.
1027 locks_free_lock(new_fl);
1029 locks_free_lock(new_fl2);
1034 * posix_lock_file - Apply a POSIX-style lock to a file
1035 * @filp: The file to apply the lock to
1036 * @fl: The lock to be applied
1038 * Add a POSIX style lock to a file.
1039 * We merge adjacent & overlapping locks whenever possible.
1040 * POSIX locks are sorted by owner task, then by starting address
1042 * Note that if called with an FL_EXISTS argument, the caller may determine
1043 * whether or not a lock was successfully freed by testing the return
1044 * value for -ENOENT.
1046 int posix_lock_file(struct file *filp, struct file_lock *fl)
1048 return __posix_lock_file_conf(filp->f_path.dentry->d_inode,
1049 fl, NULL, filp->f_xid);
1051 EXPORT_SYMBOL(posix_lock_file);
1054 * posix_lock_file_conf - Apply a POSIX-style lock to a file
1055 * @filp: The file to apply the lock to
1056 * @fl: The lock to be applied
1057 * @conflock: Place to return a copy of the conflicting lock, if found.
1059 * Except for the conflock parameter, acts just like posix_lock_file.
1061 int posix_lock_file_conf(struct file *filp, struct file_lock *fl,
1062 struct file_lock *conflock)
1064 return __posix_lock_file_conf(filp->f_path.dentry->d_inode,
1065 fl, conflock, filp->f_xid);
1067 EXPORT_SYMBOL(posix_lock_file_conf);
1070 * posix_lock_file_wait - Apply a POSIX-style lock to a file
1071 * @filp: The file to apply the lock to
1072 * @fl: The lock to be applied
1074 * Add a POSIX style lock to a file.
1075 * We merge adjacent & overlapping locks whenever possible.
1076 * POSIX locks are sorted by owner task, then by starting address
1078 int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
1083 error = posix_lock_file(filp, fl);
1084 if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
1086 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1090 locks_delete_block(fl);
1095 EXPORT_SYMBOL(posix_lock_file_wait);
1098 * locks_mandatory_locked - Check for an active lock
1099 * @inode: the file to check
1101 * Searches the inode's list of locks to find any POSIX locks which conflict.
1102 * This function is called from locks_verify_locked() only.
1104 int locks_mandatory_locked(struct inode *inode)
1106 fl_owner_t owner = current->files;
1107 struct file_lock *fl;
1110 * Search the lock list for this inode for any POSIX locks.
1113 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
1116 if (fl->fl_owner != owner)
1120 return fl ? -EAGAIN : 0;
1124 * locks_mandatory_area - Check for a conflicting lock
1125 * @read_write: %FLOCK_VERIFY_WRITE for exclusive access, %FLOCK_VERIFY_READ
1127 * @inode: the file to check
1128 * @filp: how the file was opened (if it was)
1129 * @offset: start of area to check
1130 * @count: length of area to check
1132 * Searches the inode's list of locks to find any POSIX locks which conflict.
1133 * This function is called from rw_verify_area() and
1134 * locks_verify_truncate().
1136 int locks_mandatory_area(int read_write, struct inode *inode,
1137 struct file *filp, loff_t offset,
1140 struct file_lock fl;
1143 locks_init_lock(&fl);
1144 fl.fl_owner = current->files;
1145 fl.fl_pid = current->tgid;
1147 fl.fl_flags = FL_POSIX | FL_ACCESS;
1148 if (filp && !(filp->f_flags & O_NONBLOCK))
1149 fl.fl_flags |= FL_SLEEP;
1150 fl.fl_type = (read_write == FLOCK_VERIFY_WRITE) ? F_WRLCK : F_RDLCK;
1151 fl.fl_start = offset;
1152 fl.fl_end = offset + count - 1;
1155 error = __posix_lock_file_conf(inode, &fl, NULL, filp->f_xid);
1156 if (error != -EAGAIN)
1158 if (!(fl.fl_flags & FL_SLEEP))
1160 error = wait_event_interruptible(fl.fl_wait, !fl.fl_next);
1163 * If we've been sleeping someone might have
1164 * changed the permissions behind our back.
1166 if ((inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID)
1170 locks_delete_block(&fl);
1177 EXPORT_SYMBOL(locks_mandatory_area);
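/*
 * For the mandatory-lock checks above to ever trigger, the filesystem must
 * be mounted with the "mand" option and the file must have the setgid bit
 * set with group execute clear.  A rough userspace sketch (hypothetical
 * descriptor fd, error handling omitted):
 *
 *	#include <sys/stat.h>
 *
 *	struct stat st;
 *	fstat(fd, &st);
 *	fchmod(fd, (st.st_mode & ~S_IXGRP) | S_ISGID);
 */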
1179 /* We already had a lease on this file; just change its type */
1180 int lease_modify(struct file_lock **before, int arg)
1182 struct file_lock *fl = *before;
1183 int error = assign_type(fl, arg);
1187 locks_wake_up_blocks(fl);
1189 locks_delete_lock(before);
1193 EXPORT_SYMBOL(lease_modify);
1195 static void time_out_leases(struct inode *inode)
1197 struct file_lock **before;
1198 struct file_lock *fl;
1200 before = &inode->i_flock;
1201 while ((fl = *before) && IS_LEASE(fl) && (fl->fl_type & F_INPROGRESS)) {
1202 if ((fl->fl_break_time == 0)
1203 || time_before(jiffies, fl->fl_break_time)) {
1204 before = &fl->fl_next;
1207 lease_modify(before, fl->fl_type & ~F_INPROGRESS);
1208 if (fl == *before) /* lease_modify may have freed fl */
1209 before = &fl->fl_next;
1214 * __break_lease - revoke all outstanding leases on file
1215 * @inode: the inode of the file to return
1216 * @mode: the open mode (read or write)
1218 * break_lease (inlined for speed) has checked there already
1219 * is a lease on this file. Leases are broken on a call to open()
1220 * or truncate(). This function can sleep unless you
1221 * specified %O_NONBLOCK to your open().
1223 int __break_lease(struct inode *inode, unsigned int mode)
1225 int error = 0, future;
1226 struct file_lock *new_fl, *flock;
1227 struct file_lock *fl;
1229 unsigned long break_time;
1230 int i_have_this_lease = 0;
1232 alloc_err = lease_alloc(NULL, mode & FMODE_WRITE ? F_WRLCK : F_RDLCK,
1237 time_out_leases(inode);
1239 flock = inode->i_flock;
1240 if ((flock == NULL) || !IS_LEASE(flock))
1243 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next)
1244 if (fl->fl_owner == current->files)
1245 i_have_this_lease = 1;
1247 if (mode & FMODE_WRITE) {
1248 /* If we want write access, we have to revoke any lease. */
1249 future = F_UNLCK | F_INPROGRESS;
1250 } else if (flock->fl_type & F_INPROGRESS) {
1251 /* If the lease is already being broken, we just leave it */
1252 future = flock->fl_type;
1253 } else if (flock->fl_type & F_WRLCK) {
1254 /* Downgrade the exclusive lease to a read-only lease. */
1255 future = F_RDLCK | F_INPROGRESS;
1257 /* the existing lease was read-only, so we can read too. */
1261 if (alloc_err && !i_have_this_lease && ((mode & O_NONBLOCK) == 0)) {
1267 if (lease_break_time > 0) {
1268 break_time = jiffies + lease_break_time * HZ;
1269 if (break_time == 0)
1270 break_time++; /* so that 0 means no break time */
1273 for (fl = flock; fl && IS_LEASE(fl); fl = fl->fl_next) {
1274 if (fl->fl_type != future) {
1275 fl->fl_type = future;
1276 fl->fl_break_time = break_time;
1277 /* lease must have lmops break callback */
1278 fl->fl_lmops->fl_break(fl);
1282 if (i_have_this_lease || (mode & O_NONBLOCK)) {
1283 error = -EWOULDBLOCK;
1288 break_time = flock->fl_break_time;
1289 if (break_time != 0) {
1290 break_time -= jiffies;
1291 if (break_time == 0)
1294 error = locks_block_on_timeout(flock, new_fl, break_time);
1297 time_out_leases(inode);
1298 /* Wait for the next lease that has not been broken yet */
1299 for (flock = inode->i_flock; flock && IS_LEASE(flock);
1300 flock = flock->fl_next) {
1301 if (flock->fl_type & F_INPROGRESS)
1310 locks_free_lock(new_fl);
1314 EXPORT_SYMBOL(__break_lease);
1319 * @time: pointer to a timespec which will contain the last modified time
1321 * This is to force NFS clients to flush their caches for files with
1322 * exclusive leases. The justification is that if someone has an
1323 * exclusive lease, then they could be modifying it.
1325 void lease_get_mtime(struct inode *inode, struct timespec *time)
1327 struct file_lock *flock = inode->i_flock;
1328 if (flock && IS_LEASE(flock) && (flock->fl_type & F_WRLCK))
1329 *time = current_fs_time(inode->i_sb);
1331 *time = inode->i_mtime;
1334 EXPORT_SYMBOL(lease_get_mtime);
1337 * fcntl_getlease - Enquire what lease is currently active
1340 * The value returned by this function will be one of
1341 * (if no lease break is pending):
1343 * %F_RDLCK to indicate a shared lease is held.
1345 * %F_WRLCK to indicate an exclusive lease is held.
1347 * %F_UNLCK to indicate no lease is held.
1349 * (if a lease break is pending):
1351 * %F_RDLCK to indicate an exclusive lease needs to be
1352 * changed to a shared lease (or removed).
1354 * %F_UNLCK to indicate the lease needs to be removed.
1356 * XXX: sfr & willy disagree over whether F_INPROGRESS
1357 * should be returned to userspace.
1359 int fcntl_getlease(struct file *filp)
1361 struct file_lock *fl;
1365 time_out_leases(filp->f_path.dentry->d_inode);
1366 for (fl = filp->f_path.dentry->d_inode->i_flock; fl && IS_LEASE(fl);
1368 if (fl->fl_file == filp) {
1369 type = fl->fl_type & ~F_INPROGRESS;
1378 * __setlease - sets a lease on an open file
1379 * @filp: file pointer
1380 * @arg: type of lease to obtain
1381 * @flp: input - file_lock to use, output - file_lock inserted
1383 * The (input) flp->fl_lmops->fl_break function is required
1386 * Called with kernel lock held.
1388 static int __setlease(struct file *filp, long arg, struct file_lock **flp)
1390 struct file_lock *fl, **before, **my_before = NULL, *lease;
1391 struct dentry *dentry = filp->f_path.dentry;
1392 struct inode *inode = dentry->d_inode;
1393 int error, rdlease_count = 0, wrlease_count = 0;
1395 time_out_leases(inode);
1398 if (!flp || !(*flp) || !(*flp)->fl_lmops || !(*flp)->fl_lmops->fl_break)
1404 if ((arg == F_RDLCK) && (atomic_read(&inode->i_writecount) > 0))
1406 if ((arg == F_WRLCK)
1407 && ((atomic_read(&dentry->d_count) > 1)
1408 || (atomic_read(&inode->i_count) > 1)))
1412 * At this point, we know that if there is an exclusive
1413 * lease on this file, then we hold it on this filp
1414 * (otherwise our open of this file would have blocked).
1415 * And if we are trying to acquire an exclusive lease,
1416 * then the file is not open by anyone (including us)
1417 * except for this filp.
1419 for (before = &inode->i_flock;
1420 ((fl = *before) != NULL) && IS_LEASE(fl);
1421 before = &fl->fl_next) {
1422 if (lease->fl_lmops->fl_mylease(fl, lease))
1424 else if (fl->fl_type == (F_INPROGRESS | F_UNLCK))
1426 * Someone is in the process of opening this
1427 * file for writing so we may not take an
1428 * exclusive lease on it.
1435 if ((arg == F_RDLCK && (wrlease_count > 0)) ||
1436 (arg == F_WRLCK && ((rdlease_count + wrlease_count) > 0)))
1439 if (my_before != NULL) {
1441 error = lease->fl_lmops->fl_change(my_before, arg);
1454 fl = locks_alloc_lock();
1458 locks_copy_lock(fl, lease);
1459 locks_insert_lock(before, fl);
1469 * setlease - sets a lease on an open file
1470 * @filp: file pointer
1471 * @arg: type of lease to obtain
1472 * @lease: file_lock to use
1474 * Call this to establish a lease on the file.
1475 * The fl_lmops fl_break function is required by break_lease
1478 int setlease(struct file *filp, long arg, struct file_lock **lease)
1480 struct dentry *dentry = filp->f_path.dentry;
1481 struct inode *inode = dentry->d_inode;
1484 if ((current->fsuid != inode->i_uid) && !capable(CAP_LEASE))
1486 if (!S_ISREG(inode->i_mode))
1488 error = security_file_lock(filp, arg);
1493 error = __setlease(filp, arg, lease);
1499 EXPORT_SYMBOL(setlease);
1502 * fcntl_setlease - sets a lease on an open file
1503 * @fd: open file descriptor
1504 * @filp: file pointer
1505 * @arg: type of lease to obtain
1507 * Call this fcntl to establish a lease on the file.
1508 * Note that you also need to call %F_SETSIG to
1509 * receive a signal when the lease is broken.
1511 int fcntl_setlease(unsigned int fd, struct file *filp, long arg)
1513 struct file_lock fl, *flp = &fl;
1514 struct dentry *dentry = filp->f_path.dentry;
1515 struct inode *inode = dentry->d_inode;
1518 if ((current->fsuid != inode->i_uid) && !capable(CAP_LEASE))
1520 if (!S_ISREG(inode->i_mode))
1522 error = security_file_lock(filp, arg);
1526 locks_init_lock(&fl);
1527 error = lease_init(filp, arg, &fl);
1533 error = __setlease(filp, arg, &flp);
1534 if (error || arg == F_UNLCK)
1537 error = fasync_helper(fd, filp, 1, &flp->fl_fasync);
1539 /* remove lease just inserted by __setlease */
1540 flp->fl_type = F_UNLCK | F_INPROGRESS;
1541 flp->fl_break_time = jiffies - 10;
1542 time_out_leases(inode);
1546 error = __f_setown(filp, task_pid(current), PIDTYPE_PID, 0);
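/*
 * Rough userspace sketch of taking a lease as described above (the signal
 * choice is illustrative, error handling omitted, _GNU_SOURCE required):
 *
 *	#define _GNU_SOURCE
 *	#include <fcntl.h>
 *	#include <signal.h>
 *
 *	fcntl(fd, F_SETSIG, SIGRTMIN);	// deliver lease breaks via this signal
 *	fcntl(fd, F_SETLEASE, F_RDLCK);	// take a read lease on fd
 *	...
 *	fcntl(fd, F_SETLEASE, F_UNLCK);	// give the lease back
 */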
1553 * flock_lock_file_wait - Apply a FLOCK-style lock to a file
1554 * @filp: The file to apply the lock to
1555 * @fl: The lock to be applied
1557 * Add a FLOCK style lock to a file.
1559 int flock_lock_file_wait(struct file *filp, struct file_lock *fl)
1564 error = flock_lock_file(filp, fl);
1565 if ((error != -EAGAIN) || !(fl->fl_flags & FL_SLEEP))
1567 error = wait_event_interruptible(fl->fl_wait, !fl->fl_next);
1571 locks_delete_block(fl);
1577 EXPORT_SYMBOL(flock_lock_file_wait);
1580 * sys_flock: - flock() system call.
1581 * @fd: the file descriptor to lock.
1582 * @cmd: the type of lock to apply.
1584 * Apply a %FL_FLOCK style lock to an open file descriptor.
1585 * The @cmd can be one of
1587 * %LOCK_SH -- a shared lock.
1589 * %LOCK_EX -- an exclusive lock.
1591 * %LOCK_UN -- remove an existing lock.
1593 * %LOCK_MAND -- a `mandatory' flock. This exists to emulate Windows Share Modes.
1595 * %LOCK_MAND can be combined with %LOCK_READ or %LOCK_WRITE to allow other
1596 * processes read and write access respectively.
1598 asmlinkage long sys_flock(unsigned int fd, unsigned int cmd)
1601 struct file_lock *lock;
1602 int can_sleep, unlock;
1610 can_sleep = !(cmd & LOCK_NB);
1612 unlock = (cmd == LOCK_UN);
1614 if (!unlock && !(cmd & LOCK_MAND) && !(filp->f_mode & 3))
1617 error = flock_make_lock(filp, &lock, cmd);
1621 lock->fl_flags |= FL_SLEEP;
1623 error = security_file_lock(filp, cmd);
1627 if (filp->f_op && filp->f_op->flock)
1628 error = filp->f_op->flock(filp,
1629 (can_sleep) ? F_SETLKW : F_SETLK,
1632 error = flock_lock_file_wait(filp, lock);
1635 locks_free_lock(lock);
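/*
 * Typical userspace use of the call above (sketch, error handling omitted):
 *
 *	#include <sys/file.h>
 *
 *	if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
 *		// EWOULDBLOCK: another open file description holds a
 *		// conflicting flock() lock
 *	}
 *	...
 *	flock(fd, LOCK_UN);
 */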
1643 /* Report the first existing lock that would conflict with l.
1644 * This implements the F_GETLK command of fcntl().
1646 int fcntl_getlk(struct file *filp, struct flock __user *l)
1648 struct file_lock *fl, cfl, file_lock;
1653 if (copy_from_user(&flock, l, sizeof(flock)))
1656 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
1659 error = flock_to_posix_lock(filp, &file_lock, &flock);
1663 if (filp->f_op && filp->f_op->lock) {
1664 error = filp->f_op->lock(filp, F_GETLK, &file_lock);
1665 if (file_lock.fl_ops && file_lock.fl_ops->fl_release_private)
1666 file_lock.fl_ops->fl_release_private(&file_lock);
1670 fl = (file_lock.fl_type == F_UNLCK ? NULL : &file_lock);
1672 fl = (posix_test_lock(filp, &file_lock, &cfl) ? &cfl : NULL);
1675 flock.l_type = F_UNLCK;
1677 flock.l_pid = fl->fl_pid;
1678 #if BITS_PER_LONG == 32
1680 * Make sure we can represent the posix lock via
1681 * legacy 32bit flock.
1684 if (fl->fl_start > OFFT_OFFSET_MAX)
1686 if ((fl->fl_end != OFFSET_MAX)
1687 && (fl->fl_end > OFFT_OFFSET_MAX))
1690 flock.l_start = fl->fl_start;
1691 flock.l_len = fl->fl_end == OFFSET_MAX ? 0 :
1692 fl->fl_end - fl->fl_start + 1;
1694 flock.l_type = fl->fl_type;
1697 if (!copy_to_user(l, &flock, sizeof(flock)))
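/*
 * Rough userspace sketch of F_GETLK as implemented above (illustrative
 * values, error handling omitted):
 *
 *	struct flock probe = {
 *		.l_type   = F_WRLCK,	// "could I write-lock this range?"
 *		.l_whence = SEEK_SET,
 *		.l_start  = 0,
 *		.l_len    = 0,		// 0 means "to end of file"
 *	};
 *	fcntl(fd, F_GETLK, &probe);
 *	// probe.l_type is now F_UNLCK, or describes the first conflicting
 *	// lock, with probe.l_pid set to its holder
 */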
1703 /* Apply the lock described by l to an open file descriptor.
1704 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
1706 int fcntl_setlk(unsigned int fd, struct file *filp, unsigned int cmd,
1707 struct flock __user *l)
1709 struct file_lock *file_lock = locks_alloc_lock();
1711 struct inode *inode;
1714 if (file_lock == NULL)
1717 vxd_assert(filp->f_xid == vx_current_xid(),
1718 "f_xid(%d) == current(%d)", filp->f_xid, vx_current_xid());
1719 file_lock->fl_xid = filp->f_xid;
1720 vx_locks_inc(file_lock);
1723 * This might block, so we do it before checking the inode.
1726 if (copy_from_user(&flock, l, sizeof(flock)))
1729 inode = filp->f_path.dentry->d_inode;
1731 /* Don't allow mandatory locks on files that may be memory mapped
1734 if (IS_MANDLOCK(inode) &&
1735 (inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID &&
1736 mapping_writably_mapped(filp->f_mapping)) {
1742 error = flock_to_posix_lock(filp, file_lock, &flock);
1745 if (cmd == F_SETLKW) {
1746 file_lock->fl_flags |= FL_SLEEP;
1750 switch (flock.l_type) {
1752 if (!(filp->f_mode & FMODE_READ))
1756 if (!(filp->f_mode & FMODE_WRITE))
1766 error = security_file_lock(filp, file_lock->fl_type);
1770 if (filp->f_op && filp->f_op->lock != NULL)
1771 error = filp->f_op->lock(filp, cmd, file_lock);
1774 error = posix_lock_file(filp, file_lock);
1775 if ((error != -EAGAIN) || (cmd == F_SETLK))
1777 error = wait_event_interruptible(file_lock->fl_wait,
1778 !file_lock->fl_next);
1782 locks_delete_block(file_lock);
1788 * Attempt to detect a close/fcntl race and recover by
1789 * releasing the lock that was just acquired.
1791 if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
1792 flock.l_type = F_UNLCK;
1797 locks_free_lock(file_lock);
1801 #if BITS_PER_LONG == 32
1802 /* Report the first existing lock that would conflict with l.
1803 * This implements the F_GETLK command of fcntl().
1805 int fcntl_getlk64(struct file *filp, struct flock64 __user *l)
1807 struct file_lock *fl, cfl, file_lock;
1808 struct flock64 flock;
1812 if (copy_from_user(&flock, l, sizeof(flock)))
1815 if ((flock.l_type != F_RDLCK) && (flock.l_type != F_WRLCK))
1818 error = flock64_to_posix_lock(filp, &file_lock, &flock);
1822 if (filp->f_op && filp->f_op->lock) {
1823 error = filp->f_op->lock(filp, F_GETLK, &file_lock);
1824 if (file_lock.fl_ops && file_lock.fl_ops->fl_release_private)
1825 file_lock.fl_ops->fl_release_private(&file_lock);
1829 fl = (file_lock.fl_type == F_UNLCK ? NULL : &file_lock);
1831 fl = (posix_test_lock(filp, &file_lock, &cfl) ? &cfl : NULL);
1834 flock.l_type = F_UNLCK;
1836 flock.l_pid = fl->fl_pid;
1837 flock.l_start = fl->fl_start;
1838 flock.l_len = fl->fl_end == OFFSET_MAX ? 0 :
1839 fl->fl_end - fl->fl_start + 1;
1841 flock.l_type = fl->fl_type;
1844 if (!copy_to_user(l, &flock, sizeof(flock)))
1851 /* Apply the lock described by l to an open file descriptor.
1852 * This implements both the F_SETLK and F_SETLKW commands of fcntl().
1854 int fcntl_setlk64(unsigned int fd, struct file *filp, unsigned int cmd,
1855 struct flock64 __user *l)
1857 struct file_lock *file_lock = locks_alloc_lock();
1858 struct flock64 flock;
1859 struct inode *inode;
1862 if (file_lock == NULL)
1865 vxd_assert(filp->f_xid == vx_current_xid(),
1866 "f_xid(%d) == current(%d)", filp->f_xid, vx_current_xid());
1867 file_lock->fl_xid = filp->f_xid;
1868 vx_locks_inc(file_lock);
1871 * This might block, so we do it before checking the inode.
1874 if (copy_from_user(&flock, l, sizeof(flock)))
1877 inode = filp->f_path.dentry->d_inode;
1879 /* Don't allow mandatory locks on files that may be memory mapped
1882 if (IS_MANDLOCK(inode) &&
1883 (inode->i_mode & (S_ISGID | S_IXGRP)) == S_ISGID &&
1884 mapping_writably_mapped(filp->f_mapping)) {
1890 error = flock64_to_posix_lock(filp, file_lock, &flock);
1893 if (cmd == F_SETLKW64) {
1894 file_lock->fl_flags |= FL_SLEEP;
1898 switch (flock.l_type) {
1900 if (!(filp->f_mode & FMODE_READ))
1904 if (!(filp->f_mode & FMODE_WRITE))
1914 error = security_file_lock(filp, file_lock->fl_type);
1918 if (filp->f_op && filp->f_op->lock != NULL)
1919 error = filp->f_op->lock(filp, cmd, file_lock);
1922 error = posix_lock_file(filp, file_lock);
1923 if ((error != -EAGAIN) || (cmd == F_SETLK64))
1925 error = wait_event_interruptible(file_lock->fl_wait,
1926 !file_lock->fl_next);
1930 locks_delete_block(file_lock);
1936 * Attempt to detect a close/fcntl race and recover by
1937 * releasing the lock that was just acquired.
1939 if (!error && fcheck(fd) != filp && flock.l_type != F_UNLCK) {
1940 flock.l_type = F_UNLCK;
1945 locks_free_lock(file_lock);
1948 #endif /* BITS_PER_LONG == 32 */
1951 * This function is called when the file is being removed
1952 * from the task's fd array. POSIX locks belonging to this task
1953 * are deleted at this time.
1955 void locks_remove_posix(struct file *filp, fl_owner_t owner)
1957 struct file_lock lock;
1960 * If there are no locks held on this file, we don't need to call
1961 * posix_lock_file(). Another process could be setting a lock on this
1962 * file at the same time, but we wouldn't remove that lock anyway.
1964 if (!filp->f_path.dentry->d_inode->i_flock)
1967 lock.fl_type = F_UNLCK;
1968 lock.fl_flags = FL_POSIX | FL_CLOSE;
1970 lock.fl_end = OFFSET_MAX;
1971 lock.fl_owner = owner;
1972 lock.fl_pid = current->tgid;
1973 lock.fl_file = filp;
1975 lock.fl_lmops = NULL;
1977 if (filp->f_op && filp->f_op->lock != NULL)
1978 filp->f_op->lock(filp, F_SETLK, &lock);
1980 posix_lock_file(filp, &lock);
1982 if (lock.fl_ops && lock.fl_ops->fl_release_private)
1983 lock.fl_ops->fl_release_private(&lock);
1986 EXPORT_SYMBOL(locks_remove_posix);
1989 * This function is called on the last close of an open file.
1991 void locks_remove_flock(struct file *filp)
1993 struct inode * inode = filp->f_path.dentry->d_inode;
1994 struct file_lock *fl;
1995 struct file_lock **before;
1997 if (!inode->i_flock)
2000 if (filp->f_op && filp->f_op->flock) {
2001 struct file_lock fl = {
2002 .fl_pid = current->tgid,
2004 .fl_flags = FL_FLOCK,
2006 .fl_end = OFFSET_MAX,
2008 filp->f_op->flock(filp, F_SETLKW, &fl);
2009 if (fl.fl_ops && fl.fl_ops->fl_release_private)
2010 fl.fl_ops->fl_release_private(&fl);
2014 before = &inode->i_flock;
2016 while ((fl = *before) != NULL) {
2017 if (fl->fl_file == filp) {
2019 locks_delete_lock(before);
2023 lease_modify(before, F_UNLCK);
2029 before = &fl->fl_next;
2035 * posix_unblock_lock - stop waiting for a file lock
2036 * @filp: how the file was opened
2037 * @waiter: the lock which was waiting
2039 * lockd needs to block waiting for locks.
2042 posix_unblock_lock(struct file *filp, struct file_lock *waiter)
2047 if (waiter->fl_next)
2048 __locks_delete_block(waiter);
2055 EXPORT_SYMBOL(posix_unblock_lock);
2057 static void lock_get_status(char* out, struct file_lock *fl, int id, char *pfx)
2059 struct inode *inode = NULL;
2061 if (fl->fl_file != NULL)
2062 inode = fl->fl_file->f_path.dentry->d_inode;
2064 out += sprintf(out, "%d:%s ", id, pfx);
2066 out += sprintf(out, "%6s %s ",
2067 (fl->fl_flags & FL_ACCESS) ? "ACCESS" : "POSIX ",
2068 (inode == NULL) ? "*NOINODE*" :
2069 (IS_MANDLOCK(inode) &&
2070 (inode->i_mode & (S_IXGRP | S_ISGID)) == S_ISGID) ?
2071 "MANDATORY" : "ADVISORY ");
2072 } else if (IS_FLOCK(fl)) {
2073 if (fl->fl_type & LOCK_MAND) {
2074 out += sprintf(out, "FLOCK MSNFS ");
2076 out += sprintf(out, "FLOCK ADVISORY ");
2078 } else if (IS_LEASE(fl)) {
2079 out += sprintf(out, "LEASE ");
2080 if (fl->fl_type & F_INPROGRESS)
2081 out += sprintf(out, "BREAKING ");
2082 else if (fl->fl_file)
2083 out += sprintf(out, "ACTIVE ");
2085 out += sprintf(out, "BREAKER ");
2087 out += sprintf(out, "UNKNOWN UNKNOWN ");
2089 if (fl->fl_type & LOCK_MAND) {
2090 out += sprintf(out, "%s ",
2091 (fl->fl_type & LOCK_READ)
2092 ? (fl->fl_type & LOCK_WRITE) ? "RW " : "READ "
2093 : (fl->fl_type & LOCK_WRITE) ? "WRITE" : "NONE ");
2095 out += sprintf(out, "%s ",
2096 (fl->fl_type & F_INPROGRESS)
2097 ? (fl->fl_type & F_UNLCK) ? "UNLCK" : "READ "
2098 : (fl->fl_type & F_WRLCK) ? "WRITE" : "READ ");
2101 #ifdef WE_CAN_BREAK_LSLK_NOW
2102 out += sprintf(out, "%d %s:%ld ", fl->fl_pid,
2103 inode->i_sb->s_id, inode->i_ino);
2105 /* userspace relies on this representation of dev_t ;-( */
2106 out += sprintf(out, "%d %02x:%02x:%ld ", fl->fl_pid,
2107 MAJOR(inode->i_sb->s_dev),
2108 MINOR(inode->i_sb->s_dev), inode->i_ino);
2111 out += sprintf(out, "%d <none>:0 ", fl->fl_pid);
2114 if (fl->fl_end == OFFSET_MAX)
2115 out += sprintf(out, "%Ld EOF\n", fl->fl_start);
2117 out += sprintf(out, "%Ld %Ld\n", fl->fl_start,
2120 out += sprintf(out, "0 EOF\n");
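/*
 * The resulting /proc/locks lines look roughly like this (made-up values):
 *
 *	1: POSIX  ADVISORY  WRITE 1234 08:01:5678 0 EOF
 *	1: -> POSIX  ADVISORY  WRITE 1235 08:01:5678 0 EOF
 *
 * where the second line is a waiter blocked on the first.
 */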
2124 static void move_lock_status(char **p, off_t* pos, off_t offset)
2128 if(*pos >= offset) {
2129 /* the complete line is valid */
2134 if(*pos+len > offset) {
2135 /* use the second part of the line */
2136 int i = offset-*pos;
2137 memmove(*p,*p+i,len-i);
2142 /* discard the complete line */
2147 * get_locks_status - reports lock usage in /proc/locks
2148 * @buffer: address in userspace to write into
2150 * @offset: how far we are through the buffer
2151 * @length: how much to read
2154 int get_locks_status(char *buffer, char **start, off_t offset, int length)
2156 struct list_head *tmp;
2162 list_for_each(tmp, &file_lock_list) {
2163 struct list_head *btmp;
2164 struct file_lock *fl = list_entry(tmp, struct file_lock, fl_link);
2166 if (!vx_check(fl->fl_xid, VS_WATCH_P|VS_IDENT))
2169 lock_get_status(q, fl, ++i, "");
2170 move_lock_status(&q, &pos, offset);
2172 if(pos >= offset+length)
2175 list_for_each(btmp, &fl->fl_block) {
2176 struct file_lock *bfl = list_entry(btmp,
2177 struct file_lock, fl_block);
2178 lock_get_status(q, bfl, i, " ->");
2179 move_lock_status(&q, &pos, offset);
2181 if(pos >= offset+length)
2188 if(q-buffer < length)
2194 * lock_may_read - checks that the region is free of locks
2195 * @inode: the inode that is being read
2196 * @start: the first byte to read
2197 * @len: the number of bytes to read
2199 * Emulates Windows locking requirements. Whole-file
2200 * mandatory locks (share modes) can prohibit a read and
2201 * byte-range POSIX locks can prohibit a read if they overlap.
2203 * N.B. this function is only ever called
2204 * from knfsd and ownership of locks is never checked.
2206 int lock_may_read(struct inode *inode, loff_t start, unsigned long len)
2208 struct file_lock *fl;
2211 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2213 if (fl->fl_type == F_RDLCK)
2215 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2217 } else if (IS_FLOCK(fl)) {
2218 if (!(fl->fl_type & LOCK_MAND))
2220 if (fl->fl_type & LOCK_READ)
2231 EXPORT_SYMBOL(lock_may_read);
2234 * lock_may_write - checks that the region is free of locks
2235 * @inode: the inode that is being written
2236 * @start: the first byte to write
2237 * @len: the number of bytes to write
2239 * Emulates Windows locking requirements. Whole-file
2240 * mandatory locks (share modes) can prohibit a write and
2241 * byte-range POSIX locks can prohibit a write if they overlap.
2243 * N.B. this function is only ever called
2244 * from knfsd and ownership of locks is never checked.
2246 int lock_may_write(struct inode *inode, loff_t start, unsigned long len)
2248 struct file_lock *fl;
2251 for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
2253 if ((fl->fl_end < start) || (fl->fl_start > (start + len)))
2255 } else if (IS_FLOCK(fl)) {
2256 if (!(fl->fl_type & LOCK_MAND))
2258 if (fl->fl_type & LOCK_WRITE)
2269 EXPORT_SYMBOL(lock_may_write);
2271 static int __init filelock_init(void)
2273 filelock_cache = kmem_cache_create("file_lock_cache",
2274 sizeof(struct file_lock), 0, SLAB_PANIC,
2279 core_initcall(filelock_init);