Lines Matching defs:allocated in mm/hugetlb.c
503 * region_del. The extra needed entries will be allocated.
508 * than or equal to zero. If file_region entries needed to be allocated for
580 * is needed and can not be allocated.
636 * be allocated. If the allocation fails, -ENOMEM will be returned.
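The comments at 503-636 describe the reserve map's two-phase update: a "chg" step estimates how many file_region entries a later "add" will need and allocates them up front, so the commit step itself cannot fail. Below is a minimal sketch of that prepare/commit split, assuming simplified stand-in names (demo_region_chg, demo_region_add, a plain singly linked list); it is not the kernel's region_* implementation, which keeps the list sorted and merges adjacent ranges.

#include <errno.h>
#include <stdlib.h>

struct demo_region {
        long from, to;
        struct demo_region *next;
};

struct demo_resv_map {
        struct demo_region *head;       /* committed reserved ranges */
        struct demo_region *cache;      /* entries preallocated by the chg step */
};

/* Phase 1: allocate, in advance, the entry the later add step will need. */
static int demo_region_chg(struct demo_resv_map *map, long from, long to)
{
        /* Worst case: the new range does not merge with an existing one. */
        struct demo_region *rg = malloc(sizeof(*rg));

        if (!rg)
                return -ENOMEM;         /* mirrors "-ENOMEM will be returned" */
        rg->next = map->cache;
        map->cache = rg;
        return 0;
}

/* Phase 2: commit using only preallocated entries, so it cannot fail.
 * The caller must have run the chg step for this range first. */
static void demo_region_add(struct demo_resv_map *map, long from, long to)
{
        struct demo_region *rg = map->cache;

        map->cache = rg->next;
        rg->from = from;
        rg->to = to;
        rg->next = map->head;           /* real code keeps this list sorted/merged */
        map->head = rg;
}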
814 * Return the size of the pages allocated when backing a VMA. In the majority
1013 * allocated page will go into page cache and is regarded as
1177 * We may have allocated or freed a huge page based on a different
1270 * If the page isn't allocated using the cma allocator,
1585 * For gigantic hugepages allocated through bootmem at
1807 * (allocated or reserved.)
2032 long needed, allocated;
2041 allocated = 0;
2057 allocated += i;
2065 (h->free_huge_pages + allocated);
2072 * we've allocated so far.
2084 needed += allocated;
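The code fragments at 2032-2084 are from the surplus-page gathering path, where needed and allocated together track how far the pool is from covering a new reservation. Here is a rough sketch of that retry arithmetic only, with a hypothetical try_alloc_one() allocator and none of the locking or per-node list handling; in the real code the free and reserved counters are re-read under hugetlb_lock on every pass because they can change concurrently.

#include <stdbool.h>

/* Stub standing in for the real surplus-page allocator. */
static bool try_alloc_one(void)
{
        return true;
}

static long gather_surplus_sketch(long delta, long resv_pages, long free_pages)
{
        long i, needed, allocated = 0;
        bool alloc_ok = true;

        needed = (resv_pages + delta) - free_pages;
        if (needed <= 0)
                return 0;               /* enough free pages already */

retry:
        /* Try to cover the shortfall; i counts what this pass produced. */
        for (i = 0; i < needed; i++) {
                if (!try_alloc_one()) {
                        alloc_ok = false;
                        break;
                }
        }
        allocated += i;

        /* Recompute the shortfall against what we've allocated so far. */
        needed = (resv_pages + delta) - (free_pages + allocated);
        if (needed > 0) {
                if (alloc_ok)
                        goto retry;
                return -1;              /* could not cover the reservation */
        }

        /*
         * needed is now <= 0, so needed + allocated is the number of pages
         * that must actually be kept; anything beyond that is excess the
         * caller can free again.
         */
        needed += allocated;
        return needed;
}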
2116 * 2) Free any unused surplus pages that may have been allocated to satisfy
2136 * by pre-allocated pages. Only free surplus pages.
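The comments at 2116-2136 pair with the sketch above: when reservations are released, only pages counted as surplus may be returned, so the persistent, pre-allocated part of the pool is left alone. A minimal illustration of that clamp, with the counters passed in as plain longs rather than read from the kernel's hstate:

/* Hypothetical helper: give back at most min(unused reservations, surplus). */
static void demo_return_unused_surplus(long unused_resv, long *surplus_pages,
                                       long *free_pages)
{
        long nr = unused_resv < *surplus_pages ? unused_resv : *surplus_pages;

        while (nr-- > 0) {
                (*surplus_pages)--;     /* page stops being surplus ...        */
                (*free_pages)--;        /* ... and leaves the free pool entirely */
        }
}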
2174 * the huge page has been allocated, vma_commit_reservation is called
2186 * be restored when a newly allocated huge page must be freed. It is
2293 * specific error paths, a huge page was allocated (via alloc_huge_page)
2296 * in the newly allocated page. When the page is freed via free_huge_page,
2350 * has a reservation for the page to be allocated. A return
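The comments around 2174-2350 describe how a consumed reservation is committed once the huge page is allocated, and restored if an error path has to free the page before it is ever instantiated. The sketch below shows only that bookkeeping pattern; the structs, the calloc() stand-in for the page allocator, and the restore_reserve field are all simplifications, not the kernel's data structures.

#include <stdbool.h>
#include <stdlib.h>

struct demo_hpage {
        bool restore_reserve;   /* reservation must be given back on free */
};

struct demo_vma {
        long resv_pages;        /* reservations still available to this VMA */
};

/* Allocate a page, consuming (committing) a reservation if the VMA holds one. */
static struct demo_hpage *demo_alloc_huge_page(struct demo_vma *vma)
{
        bool had_reservation = vma->resv_pages > 0;
        struct demo_hpage *page = calloc(1, sizeof(*page)); /* stand-in allocator */

        if (!page)
                return NULL;
        if (had_reservation)
                vma->resv_pages--;      /* commit the reservation */
        /*
         * Remember that a reservation was consumed; the caller clears this
         * once the page is safely instantiated (mapped or in the page cache).
         */
        page->restore_reserve = had_reservation;
        return page;
}

/* Free path: if the page never got instantiated, restore the reservation. */
static void demo_free_huge_page(struct demo_vma *vma, struct demo_hpage *page)
{
        if (page->restore_reserve)
                vma->resv_pages++;
        free(page);
}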
2568 pr_warn("HugeTLB: allocating %lu of page size %s failed. Only allocated %lu hugepages.\n",
2599 pr_info("HugeTLB registered %s page size, pre-allocated %ld pages\n",
2711 * boottime allocated gigantic pages.
3134 return; /* already allocated */
3421 * allocated here from bootmem allocator.
4507 * Only make newly allocated pages active. Existing pages found
5158 * are already allocated on behalf of the file. Private mappings need
5310 * allocated. If end == LONG_MAX, it will not fail.