page table operations such as what happens during
fork and exec.
- Platform developers note that generic code will always
- invoke this interface without mm->page_table_lock held.
-
3) void flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
call flush_tlb_page (see below) for each entry which may be
modified.
- Platform developers note that generic code will always
- invoke this interface with mm->page_table_lock held.
-
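
A port with no ranged TLB invalidate operation can legitimately fall
back on the per-page primitive.  A minimal sketch, not taken from any
particular architecture:

	void flush_tlb_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
	{
		unsigned long addr;

		/* Drop every translation in [start, end) one page at a time. */
		for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
			flush_tlb_page(vma, addr);
	}
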
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
This time we need to remove the PAGE_SIZE sized translation
This is used primarily during fault processing.
- Platform developers note that generic code will always
- invoke this interface with mm->page_table_lock held.
-
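
On a cpu with split instruction/data TLBs the implementation also has
to consider executable mappings.  A hypothetical sketch; the
dtlb/itlb_invalidate_one() helpers are invented names standing in for
whatever invalidate instructions the hardware provides:

	void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
	{
		addr &= PAGE_MASK;

		dtlb_invalidate_one(vma->vm_mm, addr);

		/* Executable mappings may also be cached in the ITLB. */
		if (vma->vm_flags & VM_EXEC)
			itlb_invalidate_one(vma->vm_mm, addr);
	}
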
5) void flush_tlb_pgtables(struct mm_struct *mm,
unsigned long start, unsigned long end)
The ia64 sn2 platform is one user of this interface.
+8) void lazy_mmu_prot_update(pte_t pte)
+ This interface is called whenever the protection on
+ any user PTE changes.  It provides a notification
+ to architecture specific code to take appropriate action.
+
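
A minimal sketch of the expected call pattern, assuming a caller that
write-protects a single user PTE (locking and TLB flushing omitted):

	pte_t pte = *ptep;

	pte = pte_wrprotect(pte);	/* the protection change itself */
	set_pte(ptep, pte);
	lazy_mmu_prot_update(pte);	/* let the architecture react */
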
Next, we have the cache flushing interfaces. In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
change_range_of_page_tables(mm, start, end);
flush_tlb_range(vma, start, end);
- 3) flush_cache_page(vma, addr);
+ 3) flush_cache_page(vma, addr, pfn);
set_pte(pte_pointer, new_pte_val);
flush_tlb_page(vma, addr);
lines associated with 'mm'.
This interface is used to handle whole address space
- page table operations such as what happens during
- fork, exit, and exec.
+ page table operations such as what happens during exit and exec.
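
In sketch form, the ordering this implies for those paths (the middle
step is pseudo-code):

	flush_cache_mm(mm);	/* flush while the mappings still exist */
	/* ... unmap and free all of mm's page tables ... */
	flush_tlb_mm(mm);	/* then discard the stale translations */
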
-2) void flush_cache_range(struct vm_area_struct *vma,
+2) void flush_cache_dup_mm(struct mm_struct *mm)
+
+ This interface flushes an entire user address space from
+ the caches. That is, after running, there will be no cache
+ lines associated with 'mm'.
+
+ This interface is used to handle whole address space
+ page table operations such as what happens during fork.
+
+ This option is separate from flush_cache_mm to allow some
+ optimizations for VIPT caches.
+
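
A sketch of the intended call site on the fork() path, simplified from
what dup_mmap() actually does:

	flush_cache_dup_mm(oldmm);	/* parent's dirty lines reach memory */
	/* ... copy oldmm's vmas and page tables into the child ... */
	flush_tlb_mm(oldmm);		/* parent PTEs were write-protected */
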
+3) void flush_cache_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
Here we are flushing a specific range of (user) virtual
call flush_cache_page (see below) for each entry which may be
modified.
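
Ports that never need manual data cache flushing against user mappings
(i386, for instance) simply stub this whole family out, along the
lines of:

	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_dup_mm(mm)			do { } while (0)
	#define flush_cache_range(vma, start, end)	do { } while (0)
	#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
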
-3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
This time we need to remove a PAGE_SIZE sized range
from the cache. The 'vma' is the backing structure used by
executable (and thus could be in the 'instruction cache' in
"Harvard" type cache layouts).
+ The 'pfn' indicates the physical page frame (shift this value
+ left by PAGE_SHIFT to get the physical address) that 'addr'
+ translates to. It is this mapping which should be removed from
+ the cache.
+
After running, there will be no entries in the cache for
- 'vma->vm_mm' for virtual address 'addr'.
+ 'vma->vm_mm' for virtual address 'addr' which translates
+ to 'pfn'.
This is used primarily during fault processing.
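
As a hedged illustration of why 'pfn' is passed, a virtually indexed,
aliasing port might do something along these lines; the
__flush_*_alias() helpers are invented names for whatever cache
operations the hardware provides:

	void flush_cache_page(struct vm_area_struct *vma,
			      unsigned long addr, unsigned long pfn)
	{
		unsigned long phys = pfn << PAGE_SHIFT;

		addr &= PAGE_MASK;

		/* Flush the user alias of this physical page ... */
		__flush_dcache_alias(addr, phys);

		/* ... and its instruction cache copy if it can execute. */
		if (vma->vm_flags & VM_EXEC)
			__flush_icache_alias(addr, phys);
	}
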
-4) void flush_cache_kmaps(void)
+5) void flush_cache_kmaps(void)
This routine need only be implemented if the platform utilizes
highmem. It will be called right before all of the kmaps
This routine should be implemented in asm/highmem.h
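
Two representative shapes this takes (hedged; the port's own
asm/highmem.h is authoritative):

	/* Caches need no help when kmaps are recycled: */
	#define flush_cache_kmaps()	do { } while (0)

	/* Aliasing caches: write everything back first: */
	#define flush_cache_kmaps()	flush_cache_all()
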
-5) void flush_cache_vmap(unsigned long start, unsigned long end)
+6) void flush_cache_vmap(unsigned long start, unsigned long end)
void flush_cache_vunmap(unsigned long start, unsigned long end)
Here in these two interfaces we are flushing a specific range
likely that you will need to flush the instruction cache
for copy_to_user_page().
+ void flush_anon_page(struct vm_area_struct *vma, struct page *page,
+ unsigned long vmaddr)
+ When the kernel needs to access the contents of an anonymous
+ page, it calls this function (currently only
+ get_user_pages()). Note: flush_dcache_page() deliberately
+ doesn't work for an anonymous page. The default
+ implementation is a nop (and should remain so for all coherent
+ architectures). For incoherent architectures, it should flush
+ the cache of the page at vmaddr.
+
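
For the incoherent case, a sketch of the general shape;
flush_user_dcache_page() is an invented name for whatever primitive
flushes one page worth of the user's data cache:

	void flush_anon_page(struct vm_area_struct *vma,
			     struct page *page, unsigned long vmaddr)
	{
		/* The kernel is about to access this page through its own
		 * mapping, so make sure any lines the user holds at
		 * 'vmaddr' reach memory first.
		 */
		flush_user_dcache_page(vma, vmaddr & PAGE_MASK);
	}
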
+ void flush_kernel_dcache_page(struct page *page)
+ When the kernel needs to modify a user page it has obtained
+ with kmap, it calls this function after all modifications are
+ complete (but before kunmapping it) to bring the underlying
+ page up to date. It is assumed here that the user has no
+ incoherent cached copies (i.e. the original page was obtained
+ from a mechanism like get_user_pages()). The default
+ implementation is a nop and should remain so on all coherent
+ architectures. On incoherent architectures, this should flush
+ the kernel cache for the page (using page_address(page)).
+
+
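The usage pattern this describes, in sketch form ('data' and 'len'
stand for whatever is being written):

	void *kaddr = kmap(page);

	memcpy(kaddr, data, len);	/* modify via the kernel mapping */
	flush_kernel_dcache_page(page);	/* push the changes to memory */
	kunmap(page);
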
void flush_icache_range(unsigned long start, unsigned long end)
When the kernel stores into addresses that it will execute
out of (e.g. when loading modules), this function is called.
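
A usage sketch of the module-loading style case described here
('dst', 'code' and 'len' are illustrative):

	memcpy(dst, code, len);		/* store the new instructions */
	flush_icache_range((unsigned long)dst, (unsigned long)dst + len);
	/* only now is it safe to jump into 'dst' */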