Lines Matching refs:MEMCG_CHARGE_BATCH
593 * (MEMCG_CHARGE_BATCH * nr_cpus) update events. Though this optimization
594 * will let stats be out of sync by at most (MEMCG_CHARGE_BATCH * nr_cpus) but
638 if (x > MEMCG_CHARGE_BATCH) {
646 atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
2269 if (nr_pages > MEMCG_CHARGE_BATCH)
2345 if (stock->nr_pages > MEMCG_CHARGE_BATCH)
2444 reclaim_high(memcg, MEMCG_CHARGE_BATCH, GFP_KERNEL);
2578 * MEMCG_CHARGE_BATCH pages is nominal, so work out how much smaller or
2581 return penalty_jiffies * nr_pages / MEMCG_CHARGE_BATCH;
2670 unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);
2849 if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH &&
7464 BUILD_BUG_ON(MEMCG_CHARGE_BATCH > S32_MAX / PAGE_SIZE);