diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index e132fb116..debf68139 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -49,9 +49,6 @@ changes occur:
 	page table operations such as what happens during
 	fork, and exec.
 
-	Platform developers note that generic code will always
-	invoke this interface without mm->page_table_lock held.
-
 3) void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end)
 
@@ -72,9 +69,6 @@ changes occur:
 	call flush_tlb_page (see below) for each entry which may be
 	modified.
 
-	Platform developers note that generic code will always
-	invoke this interface with mm->page_table_lock held.
-
 4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 
 	This time we need to remove the PAGE_SIZE sized translation
@@ -93,9 +87,6 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-	Platform developers note that generic code will always
-	invoke this interface with mm->page_table_lock held.
-
 5) void flush_tlb_pgtables(struct mm_struct *mm,
 			   unsigned long start, unsigned long end)
 
@@ -145,7 +136,7 @@ changes occur:
 8) void lazy_mmu_prot_update(pte_t pte)
 	This interface is called whenever the protection on
 	any user PTEs change.  This interface provides a notification
-	to architecture specific code to take appropiate action.
+	to architecture specific code to take appropriate action.
 
 
 Next, we have the cache flushing interfaces.  In general, when Linux
@@ -188,10 +179,21 @@ Here are the routines, one by one:
 	lines associated with 'mm'.
 
 	This interface is used to handle whole address space
-	page table operations such as what happens during
-	fork, exit, and exec.
+	page table operations such as what happens during exit and exec.
+
+2) void flush_cache_dup_mm(struct mm_struct *mm)
+
+	This interface flushes an entire user address space from
+	the caches.  That is, after running, there will be no cache
+	lines associated with 'mm'.
+
+	This interface is used to handle whole address space
+	page table operations such as what happens during fork.
 
-2) void flush_cache_range(struct vm_area_struct *vma,
+	This option is separate from flush_cache_mm to allow some
+	optimizations for VIPT caches.
+
+3) void flush_cache_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long end)
 
 	Here we are flushing a specific range of (user) virtual
@@ -208,7 +210,7 @@ Here are the routines, one by one:
 	call flush_cache_page (see below) for each entry which may be
 	modified.
 
-3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
+4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
 
 	This time we need to remove a PAGE_SIZE sized range
 	from the cache.  The 'vma' is the backing structure used by
@@ -229,7 +231,7 @@ Here are the routines, one by one:
 
 	This is used primarily during fault processing.
 
-4) void flush_cache_kmaps(void)
+5) void flush_cache_kmaps(void)
 
 	This routine need only be implemented if the platform utilizes
 	highmem.  It will be called right before all of the kmaps
@@ -241,7 +243,7 @@ Here are the routines, one by one:
 
 	This routine should be implemented in asm/highmem.h
 
-5) void flush_cache_vmap(unsigned long start, unsigned long end)
+6) void flush_cache_vmap(unsigned long start, unsigned long end)
 	void flush_cache_vunmap(unsigned long start, unsigned long end)
 
 	Here in these two interfaces we are flushing a specific range
@@ -371,6 +373,28 @@ maps this page at its virtual address.
 	likely that you will need to flush the instruction cache for
 	copy_to_user_page().
 
+  void flush_anon_page(struct vm_area_struct *vma, struct page *page,
+			unsigned long vmaddr)
+	When the kernel needs to access the contents of an anonymous
+	page, it calls this function (currently only
+	get_user_pages()).  Note: flush_dcache_page() deliberately
+	doesn't work for an anonymous page.  The default
+	implementation is a nop (and should remain so for all coherent
+	architectures).  For incoherent architectures, it should flush
+	the cache of the page at vmaddr.
+
+  void flush_kernel_dcache_page(struct page *page)
+	When the kernel needs to modify a user page it has obtained
+	with kmap, it calls this function after all modifications are
+	complete (but before kunmapping it) to bring the underlying
+	page up to date.  It is assumed here that the user has no
+	incoherent cached copies (i.e. the original page was obtained
+	from a mechanism like get_user_pages()).  The default
+	implementation is a nop and should remain so on all coherent
+	architectures.  On incoherent architectures, this should flush
+	the kernel cache for page (using page_address(page)).
+
+
   void flush_icache_range(unsigned long start, unsigned long end)
 	When the kernel stores into addresses that it will execute
 	out of (eg when loading modules), this function is called.
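
The flush_cache_dup_mm() split this diff documents is easiest to see
from the architecture side.  Below is a minimal sketch of the
asm/cacheflush.h wiring a port might start with, assuming its
flush_cache_mm() already performs a full flush; __flush_whole_cache()
is a hypothetical low-level helper, and falling back to
flush_cache_mm() is always correct until a VIPT-specific fork-time
optimization is written.

	/* Sketch only: illustrative port code, not from any real arch. */
	static inline void flush_cache_mm(struct mm_struct *mm)
	{
		/* e.g. write back and invalidate every cache line
		 * tagged with this mm's address-space ID.
		 * __flush_whole_cache() is a hypothetical helper. */
		__flush_whole_cache();
	}

	/* Fork-time variant: a VIPT port can later restrict this to
	 * the aliases a child could observe; reusing flush_cache_mm()
	 * is the safe default. */
	#define flush_cache_dup_mm(mm)	flush_cache_mm(mm)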
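The default-nop behaviour described for flush_anon_page() corresponds
to a generic inline fallback, sketched below.  It follows the kernel's
usual pattern for such hooks: a guard macro (ARCH_HAS_FLUSH_ANON_PAGE
here; treat the exact guard name as an assumption) lets an incoherent
port define its own version in its asm headers.

	#ifndef ARCH_HAS_FLUSH_ANON_PAGE
	/* Coherent architectures keep the nop; an incoherent port
	 * defines the guard and supplies a real flush of the user
	 * alias at 'vmaddr'. */
	static inline void flush_anon_page(struct vm_area_struct *vma,
					   struct page *page,
					   unsigned long vmaddr)
	{
	}
	#endif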
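The flush_kernel_dcache_page() contract added by the last hunk is
really a calling sequence: modify the page through the kmap alias,
flush, and only then kunmap.  Here is a sketch of a driver-style
helper that fills a pinned user page from a kernel buffer; the helper
name is illustrative and error handling is omitted.

	#include <linux/highmem.h>
	#include <linux/string.h>

	/* 'page' is assumed to come from get_user_pages(), so the
	 * user holds no incoherent cached copies of it. */
	static void fill_user_page(struct page *page, const void *buf,
				   size_t len)
	{
		void *vaddr = kmap(page);	/* kernel alias */

		memcpy(vaddr, buf, len);	/* all modifications first */
		flush_kernel_dcache_page(page);	/* write the kernel alias
						 * back; nop when coherent */
		kunmap(page);			/* drop the mapping last */
	}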
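Finally, the flush_icache_range() rule at the end of the diff (store,
flush, then execute) looks like this in use; 'dst', 'insns', and 'len'
are illustrative stand-ins for what module loading actually does.

	/* After copying instructions into 'dst', and before jumping
	 * to them, make the i-cache see the new d-cache contents. */
	memcpy(dst, insns, len);
	flush_icache_range((unsigned long)dst,
			   (unsigned long)dst + len);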