Lines Matching refs:page
21 /* use small pages for supporting non-pow2 (32M/40M/48M) DRAM phys page sizes */
35 * for an ASIC that supports setting the allocation page size by the user, we
36 * honor the user's choice only if it is not 0 (0 means use the default page size)
42 dev_err(hdev->dev, "user page size (%#llx) is not a power of 2\n", psize);
179 dev_err(hdev->dev, "Failed to get handle for page\n");
302 * free_phys_pg_pack() - free physical page pack.
304 * @phys_pg_pack: physical page pack to free.
614 * with a non-power-of-2 range we work only at page granularity
615 * and the start address is page aligned,
631 "Hint address 0x%llx is not page aligned - cannot be respected\n",
824 * init_phys_pg_pack_from_userptr() - initialize physical page pack from host
829 * @force_regular_page: tell the function to ignore huge page optimization,
836 * - Create a physical page pack from the physical pages related to the given
866 * sizes is at least 2MB, we can use huge page mapping.
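The fragments above (file lines 829 and 866) describe the huge-page optimization: when building the physical page pack from a userptr, huge (2MB) page mapping is used only if it is not explicitly disabled and the physically contiguous chunks line up with the huge page size. A minimal sketch of that decision, assuming a simplified chunk-size array (the helper name and shape are ours, not the driver's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define HUGE_PAGE_SIZE 0x200000ULL /* 2MB */

/* Hypothetical helper: huge-page mapping is only attempted when it is
 * not forced off and every physically contiguous chunk is a whole
 * number of 2MB huge pages; otherwise fall back to regular pages. */
static bool can_use_huge_pages(const uint64_t *chunk_sizes, size_t nchunks,
			       bool force_regular_page)
{
	size_t i;

	if (force_regular_page)
		return false;

	for (i = 0; i < nchunks; i++)
		if (chunk_sizes[i] % HUGE_PAGE_SIZE)
			return false;

	return true;
}
```

The `force_regular_page` flag mirrors the documented parameter at file line 829, which tells the function to skip the optimization even when the sizes would allow it.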
904 /* align down to physical page size and save the offset */
933 * map_phys_pg_pack() - maps the physical page pack.
979 "failed to unmap handle %u, va: 0x%llx, pa: 0x%llx, page size: %u\n",
990 * because page size could be 4KB, so when unmapping host
1001 * unmap_phys_pg_pack() - unmaps the physical page pack.
1029 * because page size could be 4KB, so when unmapping host
1086 "unable to init page pack for vaddr 0x%llx\n",
1100 * huge page alignment may be needed in case of regular
1101 * page mapping, depending on the host VA alignment
1109 * huge page alignment is needed in case of huge page
1137 /* DRAM VA alignment is the same as the MMU page size */
1190 dev_err(hdev->dev, "mapping page pack failed for handle %u\n", handle);
1316 "unable to init page pack for vaddr 0x%llx\n",
1468 /* We use the page offset to hold the block id and thus we need to clear
1567 /* If the size of each page is larger than the dma max segment size,
1570 * <number of pages> * <chunks of max segment size in each page>
1603 /* Need to split each page into the number of chunks of
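File lines 1567-1603 explain sizing the scatter-gather table: when each page is larger than the DMA engine's maximum segment size, each page must be split into chunks of at most that size, so the table needs `<number of pages> * <chunks of max segment size in each page>` entries. A sketch of that arithmetic, with names of our own choosing rather than the driver's:

```c
#include <stdint.h>

/* Illustrative only: ceil(n / d) for the chunk count per page. */
static uint64_t div_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

/* Total scatter-gather entries needed: one per page when pages fit in a
 * single DMA segment, otherwise one per max-segment-size chunk of each
 * page. */
static uint64_t sg_table_entries(uint64_t npages, uint64_t page_size,
				 uint64_t max_seg_size)
{
	if (page_size <= max_seg_size)
		return npages;

	return npages * div_round_up(page_size, max_seg_size);
}
```

For example, 4 pages of 2MB with a 1MB max segment size need 8 entries, while 4KB pages need only one entry per page.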
2305 userptr->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
2487 * @page_size: page size for this va_range.
2505 * page size is not a power of 2
2513 * to the end of the last full page in the range. For example if
2514 * end = 0x3ff5 with page size 0x1000, we need to align it to
2515 * 0x2fff. The remaining 0xff5 bytes do not form a full page.
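The comment at file lines 2513-2515 describes aligning the range's inclusive end address down to the end of the last full page: with end = 0x3ff5 and a 0x1000 page size, pages 0x0000-0x0fff, 0x1000-0x1fff and 0x2000-0x2fff are full, so the end becomes 0x2fff and the trailing 0xff5+1 bytes are dropped. A minimal sketch of that alignment for a power-of-2 page size (the helper name is ours, not the driver's):

```c
#include <stdint.h>

/* Hypothetical helper: given an inclusive end address and a power-of-2
 * page size, return the inclusive end of the last full page. Rounds
 * (end + 1) down to a page boundary, then steps back one byte to get
 * an inclusive address again. */
static uint64_t align_end_to_last_full_page(uint64_t end, uint64_t page_size)
{
	return ((end + 1) & ~(page_size - 1)) - 1;
}
```

With end = 0x3ff5 and page_size = 0x1000 this yields 0x2fff, matching the example in the comment.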
2562 * @host_page_size: host page size.
2567 * @host_huge_page_size: host huge page size.
2570 * @dram_page_size: DRAM page size.
2721 * - Frees any existing physical page list from the idr which relates to the
2773 "page list 0x%px of asid %d is still alive\n",
2826 dev_err(hdev->dev, "Failed to create dram page pool\n");
2838 "Failed to add memory to dram page pool %d\n", rc);