Lines Matching defs:hugetlb

3  * Generic hugetlb support.
44 #include <linux/hugetlb.h>
263 * hugetlb vma_lock helper routines
951 pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");
1109 * re-initialized to the proper values, to indicate that hugetlb cgroup
1233 * Clear the old hugetlb private page reservation.
1236 * During a mremap() operation of a hugetlb vma we call move_vma()
1637 * Remove hugetlb folio from lists.
1644 * Must be called with hugetlb lock held.
1671 * We can only clear the hugetlb destructor after allocating vmemmap
1729 * This folio is about to be managed by the hugetlb allocator and
1766 * page and put the page back on the hugetlb free list and treat
1783 * hugetlb destructor under the hugetlb lock.
1834 * remove_hugetlb_folio() will clear the hugetlb bit, so do
1961 * Must be called with the hugetlb lock held
2191 * Common helper to allocate a fresh hugetlb page. All specific allocators
2192 * should use this function to get new hugetlb pages
2232 * Allocates a fresh page to the hugetlb allocator pool in the node interleaved
2257 * This routine only 'removes' the hugetlb page. The caller must make
2295 * freeing unused vmemmap pages associated with each hugetlb page
2540 * Increase the hugetlb pool such that it can accommodate a reservation
2597 * of pages to the hugetlb pool and free the extras back to the buddy
2606 /* Free the needed pages to the hugetlb pool */
2610 /* Add the page to the hugetlb allocator */
2827 * and the hugetlb mutex should remain held when calling this routine.
4065 * the base kernel, on the hugetlb module.
4175 * hugetlb init time: register hstate attributes for all registered node
4530 * specified as the first hugetlb parameter: hugepages=X. If so,
4556 * (from policy_nodemask) specifically for hugetlb case
4797 * When cpuset is configured, it breaks the strict hugetlb page
4807 * The change of semantics for shared hugetlb mapping with cpuset is
4939 * We cannot handle pagefaults against hugetlb pages at all. They cause
5141 * where we see pinned hugetlb pages while they're
5174 /* Install the new hugetlb folio if src pte stable */
5336 * This is a hugetlb vma, all the pte entries should point
5593 * hugetlb does not support FOLL_FORCE-style write faults that keep the
5924 * sent SIGBUS. The hugetlb fault mutex prevents two
5949 * else consumed the reservation since hugetlb
5950 * fault mutex is held when adding a hugetlb page
6145 * hugetlb_no_page will drop vma lock and hugetlb fault
6164 * Release the hugetlb fault lock now, but retain
6282 * with modifications for hugetlb pages.
6989 * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
7225 int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison)
7229 *hugetlb = false;
7232 *hugetlb = true;
7272 * transfer temporary state of the new hugetlb folio. This is
7471 snprintf(name, sizeof(name), "hugetlb%d", nid);