Lines matching defs:reserve
689 * Add the huge page range represented by [f, t) to the reserve
753 * Examine the existing reserve map and determine how many
756 * call to region_add that will actually modify the reserve
818 * Delete the specified range [f, t) from the reserve map. If the
823 * Returns the number of huge pages deleted from the reserve map.
927 * the reserve map region for a page. The huge page itself was freed
929 * usage count, and the global reserve count if needed. By incrementing
930 * these counts, the reserve map entry which could not be deleted will
955 * Count and return the number of huge pages in the reserve map
1046 * the reserve counters are updated with the hugetlb_lock held. It is safe
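The fragments above (689-1046) all concern the region routines that maintain the reserve map as a set of non-overlapping [from, to) ranges. Below is a minimal userspace sketch of that structure, assuming a plain linked list; the real region_add/region_del/region_count live in the kernel's mm/hugetlb.c and additionally handle locking, allocation failure, and per-region accounting.

    /*
     * Hypothetical userspace model of the reserve map: a sorted,
     * singly linked list of non-overlapping [from, to) regions.
     * Names mirror mm/hugetlb.c, but this is an illustrative
     * sketch, not the kernel code.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct file_region {
        long from, to;
        struct file_region *next;
    };

    /* Add [f, t), coalescing any overlapping or adjacent regions. */
    static void region_add(struct file_region **head, long f, long t)
    {
        struct file_region **pp = head, *rg;

        while (*pp && (*pp)->to < f)          /* skip regions before [f, t) */
            pp = &(*pp)->next;

        while ((rg = *pp) && rg->from <= t) { /* absorb overlaps */
            if (rg->from < f) f = rg->from;
            if (rg->to > t)   t = rg->to;
            *pp = rg->next;
            free(rg);
        }

        rg = malloc(sizeof(*rg));             /* sketch: no NULL check */
        rg->from = f;
        rg->to = t;
        rg->next = *pp;
        *pp = rg;
    }

    /* Delete [f, t); returns how many reserved pages were removed. */
    static long region_del(struct file_region **head, long f, long t)
    {
        struct file_region **pp = head, *rg;
        long deleted = 0;

        while ((rg = *pp)) {
            if (rg->to <= f) { pp = &rg->next; continue; }
            if (rg->from >= t)
                break;
            if (rg->from < f && rg->to > t) {
                /* Hole strictly inside rg: split into two regions.
                 * This is the case that must allocate a new node. */
                struct file_region *tail = malloc(sizeof(*tail));
                tail->from = t;
                tail->to = rg->to;
                tail->next = rg->next;
                rg->to = f;
                rg->next = tail;
                return t - f;
            }
            if (rg->from < f) {               /* trim right edge */
                deleted += rg->to - f;
                rg->to = f;
                pp = &rg->next;
            } else if (rg->to > t) {          /* trim left edge */
                deleted += t - rg->from;
                rg->from = t;
                break;
            } else {                          /* fully covered: drop */
                deleted += rg->to - rg->from;
                *pp = rg->next;
                free(rg);
            }
        }
        return deleted;
    }

    /* Count pages of [f, t) already covered by the map. */
    static long region_count(struct file_region *head, long f, long t)
    {
        long n = 0;
        for (; head && head->from < t; head = head->next) {
            long lo = head->from > f ? head->from : f;
            long hi = head->to < t ? head->to : t;
            if (hi > lo)
                n += hi - lo;
        }
        return n;
    }

    int main(void)
    {
        struct file_region *map = NULL;
        region_add(&map, 0, 10);
        region_add(&map, 20, 30);
        printf("covered: %ld\n", region_count(map, 0, 30)); /* 20 */
        printf("deleted: %ld\n", region_del(&map, 5, 25));  /* 10 */
        printf("covered: %ld\n", region_count(map, 0, 30)); /* 10 */
        return 0;
    }

The split case in region_del is the one that needs a fresh node, which is why the failure path noted at 927-930 exists: when that allocation fails, the kernel keeps the entry and adjusts the subpool and global reserve counts instead.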
1210 * - For MAP_PRIVATE mappings, this is the reserve map which does
1254 /* Returns true if the VMA has associated reserve pages */
1261 * reserve count remains after releasing inode, because this
1299 * call to vma_needs_reserves(). The reserve map for
2693 * is not the case is if a reserve map was changed between calls. It
2702 * vma_del_reservation is used in error paths where an entry in the reserve
2779 * Subtle - The reserve map for private mappings has the
2781 * entry is in the reserve map, it means a reservation exists.
2782 * If an entry exists in the reserve map, it means the
2785 * value returned from reserve map manipulation routines above.
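A tiny sketch of the inversion described in the 2779-2785 fragment, with hypothetical helper names (the kernel folds this test into its reservation helpers rather than exposing a function like this): for shared mappings an entry in the inode's reserve map means a reservation exists, while for private mappings an entry means the reservation has already been consumed.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helper illustrating the private/shared inversion. */
    static bool reservation_exists(bool is_shared, bool entry_in_map)
    {
        /* Shared: entry present  => reservation exists.
         * Private: entry present => reservation already consumed,
         *          so NO entry means the reservation still exists. */
        return is_shared ? entry_in_map : !entry_in_map;
    }

    int main(void)
    {
        printf("shared, entry:   %d\n", reservation_exists(true, true));   /* 1 */
        printf("private, entry:  %d\n", reservation_exists(false, true));  /* 0 */
        printf("private, none:   %d\n", reservation_exists(false, false)); /* 1 */
        return 0;
    }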
2833 * not set. However, alloc_hugetlb_folio always updates the reserve map.
2836 * global reserve count. But, free_huge_folio does not have enough context
2838 * mappings. Adjust the reserve map here to be consistent with global
2839 * reserve count adjustments to be made by free_huge_folio. Make sure the
2840 * reserve map indicates there is a reservation present.
2842 * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
2852 * Rare out of memory condition in reserve map
2854 * that global reserve count will not be incremented
2860 * accounting of reserve counts.
2870 * This indicates there is an entry in the reserve map
2882 * hugetlb_restore_reserve so that the reserve
2884 * is freed. This reserve will be consumed
2894 * reserve map.
2904 * on the folio so reserve count will be
2905 * incremented when freed. This reserve will
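The 2833-2905 fragments walk through the two error cases after a failed fault. Below is a hypothetical distillation of that decision, not the kernel's restore_reserve_on_error() itself; the folio flag and map_rc parameter are stand-ins for the kernel's folio state and reserve map return value.

    #include <stdbool.h>
    #include <stdio.h>

    struct folio_state {
        bool restore_reserve;   /* free path will re-credit the reserve */
    };

    /* map_rc < 0 models the rare OOM while updating the reserve map. */
    static void restore_reserve_on_error(bool had_reservation,
                                         struct folio_state *folio,
                                         int map_rc)
    {
        if (had_reservation) {
            /* Case 1: a reservation was consumed at allocation time but
             * the page was never instantiated.  Flag the folio so the
             * free path increments the global reserve count again. */
            folio->restore_reserve = true;
        } else if (map_rc < 0) {
            /* Rare OOM in the reserve map: the stale entry could not be
             * deleted, so it is left in place; the counters are fixed up
             * so the global reserve count is not over-incremented. */
            fprintf(stderr, "reserve map entry left in place\n");
        }
        /* Case 2 (else): no reservation existed; the reserve map entry
         * added by the allocation path was deleted, nothing else to do. */
    }

    int main(void)
    {
        struct folio_state f = { false };
        restore_reserve_on_error(true, &f, 0);
        printf("restore_reserve = %d\n", f.restore_reserve); /* 1 */
        return 0;
    }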
3058 * Examine the region/reserve map to determine if the process
3068 * reserves as indicated by the region/reserve map. Check
3081 * Even though there was no reservation in the region/reserve
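The 3058-3081 fragments describe the allocation-time check: a reservation holder may always dequeue a page, while anyone else may only dip into pages not already spoken for by reservations. A hedged sketch of that predicate, with a hypothetical signature:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical predicate: may this fault take a page from the
     * free pool?  free_pages and resv_pages stand in for the global
     * free and reserved huge page counters. */
    static bool can_dequeue(bool vma_has_reserves,
                            long free_pages, long resv_pages)
    {
        if (vma_has_reserves)
            return true;                 /* a reservation covers this fault */
        return free_pages > resv_pages;  /* only use unreserved pages */
    }

    int main(void)
    {
        /* 5 free pages, all 5 reserved: only reservation holders allocate. */
        printf("%d\n", can_dequeue(false, 5, 5)); /* 0 */
        printf("%d\n", can_dequeue(true,  5, 5)); /* 1 */
        return 0;
    }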
4881 unsigned long reserve, start, end;
4893 reserve = (end - start) - region_count(resv, start, end);
4895 if (reserve) {
4897 * Decrement reserve counts. The global reserve count may be
4900 gbl_reserve = hugepage_subpool_put_pages(spool, reserve);
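A worked example of the teardown arithmetic shown at 4881-4900, using a simplified, hypothetical subpool model: reservations never faulted in are (end - start) minus what region_count() reports as consumed, and the subpool decides how many of them are actually released from the global reserve (fewer than 'reserve' when the subpool has a minimum size guarantee).

    #include <stdio.h>

    /* Hypothetical subpool: keeps pages up to min_hpages for itself
     * and only returns the excess to the global reserve. */
    static long hugepage_subpool_put_pages(long *used, long min_hpages, long n)
    {
        *used -= n;
        long keep = min_hpages - *used;  /* pages retained for the minimum */
        if (keep < 0)
            keep = 0;
        return n > keep ? n - keep : 0;
    }

    int main(void)
    {
        long used = 8, min_hpages = 4;
        long start = 0, end = 10, consumed = 2;   /* region_count() result */
        long reserve = (end - start) - consumed;  /* 8 unfaulted reservations */
        long gbl_reserve = hugepage_subpool_put_pages(&used, min_hpages, reserve);
        printf("reserve=%ld gbl_reserve=%ld\n", reserve, gbl_reserve); /* 8, 4 */
        return 0;
    }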
5158 /* Do not use the reserve as it is privately owned */
5507 * mapping it owns the reserve page for. The intention is to unmap the page
5630 * page is used to determine if the reserve at this address was
6745 * to reserve the full area even if read-only as mprotect() may be
6818 * pages in this range were added to the reserve
6821 * the subpool and reserve counts modified above
7451 pr_info("hugetlb_cma: reserve %lu MiB, up to %lu MiB per node\n",