Lines Matching defs:map
102 } map;
136 * Create a map and return a file descriptor that refers to the
137 * map. The close-on-exec file descriptor flag (see **fcntl**\ (2))
141 * **BPF_MAP_CREATE** will delete the map (but see NOTES).
149 * Look up an element with a given *key* in the map referred to
156 * Look up the value of a spin-locked map without
166 * Create or update an element (key/value pair) in a specified map.
178 * Update a spin_lock-ed map element.
188 * The number of elements in the map reached the
189 * *max_entries* limit specified at map creation time.
192 * with *key* already exists in the map.
195 * *key* does not exist in the map.
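These element commands are thin wrappers around the bpf(2) syscall with a zeroed union bpf_attr; a minimal user-space sketch (the map type, key/value sizes and the example() function are illustrative assumptions):

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static long sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr)
    {
        return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
    }

    int example(void)
    {
        union bpf_attr attr;
        uint32_t key = 1;
        uint64_t value = 42, out;
        int map_fd;

        /* BPF_MAP_CREATE: returns a new map file descriptor (close-on-exec). */
        memset(&attr, 0, sizeof(attr));
        attr.map_type    = BPF_MAP_TYPE_HASH;
        attr.key_size    = sizeof(key);
        attr.value_size  = sizeof(value);
        attr.max_entries = 16;
        map_fd = sys_bpf(BPF_MAP_CREATE, &attr);

        /* BPF_MAP_UPDATE_ELEM: BPF_NOEXIST fails with EEXIST if *key* is present. */
        memset(&attr, 0, sizeof(attr));
        attr.map_fd = map_fd;
        attr.key    = (uint64_t)(unsigned long)&key;
        attr.value  = (uint64_t)(unsigned long)&value;
        attr.flags  = BPF_NOEXIST;
        sys_bpf(BPF_MAP_UPDATE_ELEM, &attr);

        /* BPF_MAP_LOOKUP_ELEM: copies the value stored for *key* into *out*. */
        memset(&attr, 0, sizeof(attr));
        attr.map_fd = map_fd;
        attr.key    = (uint64_t)(unsigned long)&key;
        attr.value  = (uint64_t)(unsigned long)&out;
        return sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr);
    }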
199 * Look up and delete an element by key in a specified map.
207 * Look up an element by key in a specified map and return the key
209 * in the map.
216 * the map:
245 * Pin an eBPF program or map referred to by the specified *bpf_fd*
317 * eBPF map of socket type (e.g. **BPF_MAP_TYPE_SOCKHASH**).
386 * Fetch the next eBPF map currently loaded into the kernel.
388 * Looks for the eBPF map with an id greater than *start_id*
408 * Open a file descriptor for the eBPF map corresponding to
491 * **BPF_RAW_TRACEPOINT_OPEN** will detach the program (but see NOTES).
552 * Look up an element with the given *key* in the map referred to
555 * For **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map
557 * map types, it may be specified as:
560 * Look up and delete the value of a spin-locked map
564 * The **BPF_MAP_TYPE_QUEUE** and **BPF_MAP_TYPE_STACK** map types
568 * issuing this operation for these map types.
570 * This command is only valid for the following map types:
584 * Freeze the permissions of the specified map.
588 * map state of *map_fd*. Write operations from eBPF programs
589 * are still possible for a frozen map.
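A sketch of the freeze workflow from user space, assuming the libbpf syscall wrappers in <bpf/bpf.h>: populate first, then freeze, after which syscall-side writes fail while program-side writes keep working:

    #include <bpf/bpf.h>    /* libbpf syscall wrappers (assumed available) */

    int freeze_example(void)
    {
        int key = 0, val = 123, fd;

        fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "frozen_cfg",
                            sizeof(key), sizeof(val), 1, NULL);

        /* Populate while the map is still writable from user space. */
        bpf_map_update_elem(fd, &key, &val, BPF_ANY);

        /* After BPF_MAP_FREEZE, syscall-side writes return -EPERM;
         * eBPF programs may still update the map. */
        bpf_map_freeze(fd);

        return bpf_map_update_elem(fd, &key, &val, BPF_ANY); /* now fails */
    }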
613 * Iterate and fetch multiple elements in a map.
624 * and value size of the map *map_fd*. The *keys* buffer must be
632 * Look up the value of a spin-locked map without
636 * On success, *count* elements from the map are copied into the
649 * iteration of a hash-based map type.
653 * Iterate and delete all elements in a map.
659 * from the map. This is at least *count* elements. Note that
671 * Update multiple elements in a map by *key*.
675 * and value size of the map *map_fd*. The *keys* buffer must be
693 * Update spin_lock-ed map elements. This must be
694 * specified if the map value contains a spinlock.
696 * On success, *count* elements from the map are updated.
707 * the map reached the *max_entries* limit specified at map
715 * with *key* already exists in the map.
718 * *key* does not exist in the map.
722 * Delete multiple elements in a map by *key*.
726 * size of the map *map_fd*, that is, *key_size* * *count*.
736 * Look up the value of a spin-locked map without
740 * On success, *count* elements are deleted from the map.
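The *_BATCH commands above share one calling convention; a user-space sketch of BPF_MAP_LOOKUP_BATCH using libbpf's bpf_map_lookup_batch() wrapper (key/value types, batch size and the libbpf error-code convention are assumptions):

    #include <errno.h>
    #include <bpf/bpf.h>    /* libbpf syscall wrappers (assumed available) */

    /* Dump a hash map with u32 keys and u64 values, 64 elements per batch. */
    int dump_map(int map_fd)
    {
        __u32 keys[64];
        __u64 values[64];
        __u32 batch, count;
        void *in = NULL;    /* NULL: start the iteration from the beginning */
        int err = 0;

        while (!err) {
            count = 64;     /* in: buffer capacity; out: elements copied */
            err = bpf_map_lookup_batch(map_fd, in, &batch,
                                       keys, values, &count, NULL);
            /* ... process count entries from keys[]/values[] here ... */
            in = &batch;    /* resume from where the kernel stopped */
        }
        return err == -ENOENT ? 0 : err;    /* -ENOENT marks the end */
    }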
837 * Bind a map to the lifetime of an eBPF program.
839 * The map identified by *map_fd* is bound to the program
843 * references to the map (for example, embedded in the eBPF
1186 * restrict map and helper usage for such programs. Sleepable BPF programs can
1229 * insn[0].imm: map fd or fd_idx
1233 * ldimm64 rewrite: address of map
1240 * insn[0].imm: map fd or fd_idx
1244 * ldimm64 rewrite: address of map[0]+offset
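These src_reg encodings matter when hand-assembling programs; a sketch that emits the two-slot ldimm64 loading a map pointer through BPF_PSEUDO_MAP_FD (the verifier replaces the immediate with the map address at load time):

    #include <linux/bpf.h>

    /* ldimm64 spans two instruction slots: insn[0].imm holds the low 32 bits
     * (here the map fd), insn[1].imm the high 32 bits (zero for pseudo loads). */
    void emit_map_load(struct bpf_insn *insn, int map_fd)
    {
        insn[0] = (struct bpf_insn) {
            .code    = BPF_LD | BPF_DW | BPF_IMM,
            .dst_reg = BPF_REG_1,
            .src_reg = BPF_PSEUDO_MAP_FD,    /* rewritten to the map address */
            .imm     = map_fd,
        };
        insn[1] = (struct bpf_insn) { .imm = 0 };
    }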
1291 * BPF_MAP_TYPE_LRU_[PERCPU_]HASH map, use a percpu LRU list
1297 /* Specify numa node during map creation */
1314 /* Clone map from listener for newly accepted socket */
1317 /* Enable memory-mapping BPF map */
1323 /* Create a map that is suitable to be an inner map with dynamic max entries */
1326 /* Create a map that will be registered/unregistered by the backing bpf_link */
1380 __u32 max_entries; /* max number of entries in a map */
1384 __u32 inner_map_fd; /* fd pointing to the inner map */
1395 * map value
1397 /* Any per-map-type extra fields
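*inner_map_fd* is what makes map-in-map creation work: a template inner map defines the shape of the maps the outer map may hold. A sketch using libbpf's bpf_map_create() (names and sizes are illustrative):

    #include <bpf/bpf.h>    /* libbpf syscall wrappers (assumed available) */

    int create_outer_map(void)
    {
        int inner_fd, outer_fd;

        /* Template inner map; only its type/sizes matter to the outer map. */
        inner_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, NULL,
                                  sizeof(__u32), sizeof(__u64), 16, NULL);

        /* inner_map_fd describes the maps this outer map may contain. */
        LIBBPF_OPTS(bpf_map_create_opts, opts, .inner_map_fd = inner_fd);
        outer_fd = bpf_map_create(BPF_MAP_TYPE_HASH_OF_MAPS, "outer",
                                  sizeof(__u32), sizeof(__u32), 64, &opts);
        return outer_fd;
    }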
1680 /* new struct_ops map fd to update link with */
1689 /* expected link's map fd; is specified only
1734 * void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)
1736 * Perform a lookup in *map* for an entry associated to *key*.
1741 * long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)
1744 * *map* with *value*. *flags* is one of:
1747 * The entry for *key* must not exist in the map.
1749 * The entry for *key* must already exist in the map.
1759 * long bpf_map_delete_elem(struct bpf_map *map, const void *key)
1761 * Delete entry with *key* from *map*.
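On the program side these three helpers are the core map API; a minimal BPF C sketch (illustrative map name and XDP attach point) that counts packets per ingress interface:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, __u32);
        __type(value, __u64);
    } counts SEC(".maps");

    SEC("xdp")
    int count_packets(struct xdp_md *ctx)
    {
        __u32 key = ctx->ingress_ifindex;
        __u64 init = 1, *val;

        val = bpf_map_lookup_elem(&counts, &key);
        if (val)
            __sync_fetch_and_add(val, 1);    /* key already present */
        else
            /* BPF_NOEXIST: create the entry only if it does not exist yet */
            bpf_map_update_elem(&counts, &key, &init, BPF_NOEXIST);

        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";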
1951 * special map of type **BPF_MAP_TYPE_PROG_ARRAY**, and passes
2166 * u64 bpf_perf_event_read(struct bpf_map *map, u64 flags)
2169 * *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. The nature of
2170 * the perf event counter is selected when *map* is updated with
2171 * perf event file descriptors. The *map* is an array whose size
2192 * The value of the perf event counter read from the map, or a
2210 * **bpf_redirect_map**\ (), which uses a BPF map to store the
2242 * long bpf_perf_event_output(void *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)
2245 * *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
2250 * The *flags* are used to indicate the index in *map* for which
2265 * *map*. This must be done before the eBPF program can send data
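A BPF-side sketch of the usual pattern: a BPF_MAP_TYPE_PERF_EVENT_ARRAY map indexed via **BPF_F_CURRENT_CPU**, assuming user space has already stored the perf event file descriptors in the map:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct event {
        __u32 pid;
        char comm[16];
    };

    struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
    } events SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_execve")
    int trace_execve(void *ctx)
    {
        struct event e = {};

        e.pid = bpf_get_current_pid_tgid() >> 32;
        bpf_get_current_comm(&e.comm, sizeof(e.comm));

        /* Index selection happens via *flags*: BPF_F_CURRENT_CPU means
         * "this CPU's entry in the map". */
        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &e, sizeof(e));
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";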
2304 * long bpf_get_stackid(void *ctx, struct bpf_map *map, u64 flags)
2309 * *map* of type **BPF_MAP_TYPE_STACK_TRACE**.
2455 * long bpf_skb_under_cgroup(struct sk_buff *skb, struct bpf_map *map, u32 index)
2458 * *map* of type **BPF_MAP_TYPE_CGROUP_ARRAY**, at *index*.
2507 * long bpf_current_task_under_cgroup(struct bpf_map *map, u32 index)
2511 * *map* of type **BPF_MAP_TYPE_CGROUP_ARRAY**, at *index*.
2807 * long bpf_redirect_map(struct bpf_map *map, u64 key, u64 flags)
2809 * Redirect the packet to the endpoint referenced by *map* at
2810 * index *key*. Depending on its type, this *map* can contain
2817 * the map lookup fails. This is so that the return value can be
2823 * interfaces in the map, with BPF_F_EXCLUDE_INGRESS the ingress
2827 * to an ifindex, but doesn't require a map to do so.
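An XDP sketch of bpf_redirect_map() with a DEVMAP; the lower bits of *flags* give the action to fall back to when the lookup at *key* fails (the map name and the fixed key 0 are assumptions):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_DEVMAP);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
        __uint(max_entries, 64);
    } tx_ports SEC(".maps");

    SEC("xdp")
    int xdp_redirect(struct xdp_md *ctx)
    {
        /* Slot 0 is assumed to hold an egress ifindex, filled in by user
         * space; if the slot is empty, fall back to XDP_PASS. */
        return bpf_redirect_map(&tx_ports, 0, XDP_PASS);
    }

    char LICENSE[] SEC("license") = "GPL";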
2832 * long bpf_sk_redirect_map(struct sk_buff *skb, struct bpf_map *map, u32 key, u64 flags)
2834 * Redirect the packet to the socket referenced by *map* (of type
2843 * long bpf_sock_map_update(struct bpf_sock_ops *skops, struct bpf_map *map, void *key, u64 flags)
2845 * Add an entry to, or update a *map* referencing sockets. The
2850 * The entry for *key* must not exist in the map.
2852 * The entry for *key* must already exist in the map.
2856 * If the *map* has eBPF programs (parser and verdict), those will
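A sockops sketch that inserts established sockets into a BPF_MAP_TYPE_SOCKMAP; the fixed key is an assumption, real programs usually derive it from the connection tuple:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_SOCKMAP);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
        __uint(max_entries, 16);
    } sock_map SEC(".maps");

    SEC("sockops")
    int add_established(struct bpf_sock_ops *skops)
    {
        __u32 key = 0;

        switch (skops->op) {
        case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
        case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
            /* BPF_ANY: create the entry or update it if it already exists. */
            bpf_sock_map_update(skops, &sock_map, &key, BPF_ANY);
            break;
        }
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";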
2891 * long bpf_perf_event_read_value(struct bpf_map *map, u64 flags, struct bpf_perf_event_value *buf, u32 buf_size)
2894 * of size *buf_size*. This helper relies on a *map* of type
2896 * counter is selected when *map* is updated with perf event file
2897 * descriptors. The *map* is an array whose size is the number of
3044 * long bpf_msg_redirect_map(struct sk_msg_buff *msg, struct bpf_map *map, u32 key, u64 flags)
3049 * the socket referenced by *map* (of type
3302 * long bpf_sock_hash_update(struct bpf_sock_ops *skops, struct bpf_map *map, void *key, u64 flags)
3304 * Add an entry to, or update a sockhash *map* referencing sockets.
3309 * The entry for *key* must not exist in the map.
3311 * The entry for *key* must already exist in the map.
3315 * If the *map* has eBPF programs (parser and verdict), those will
3321 * long bpf_msg_redirect_hash(struct sk_msg_buff *msg, struct bpf_map *map, void *key, u64 flags)
3326 * the socket referenced by *map* (of type
3335 * long bpf_sk_redirect_hash(struct sk_buff *skb, struct bpf_map *map, void *key, u64 flags)
3340 * to the socket referenced by *map* (of type
3496 * can be matched on or used for map lookups e.g. to implement
3515 * void *bpf_get_local_storage(void *map, u64 flags)
3519 * by the *map* argument.
3520 * The *flags* meaning is specific for each map type,
3533 * long bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags)
3536 * **BPF_MAP_TYPE_REUSEPORT_SOCKARRAY** *map*.
3642 * long bpf_map_push_elem(struct bpf_map *map, const void *value, u64 flags)
3644 * Push an element *value* in *map*. *flags* is one of:
3652 * long bpf_map_pop_elem(struct bpf_map *map, void *value)
3654 * Pop an element from *map*.
3658 * long bpf_map_peek_elem(struct bpf_map *map, void *value)
3660 * Get an element from *map* without removing it.
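A BPF-side sketch of the queue interface (BPF_MAP_TYPE_QUEUE has no key; push appends, peek reads the head without removing it, pop would also remove it):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_QUEUE);
        __uint(max_entries, 256);
        __uint(value_size, sizeof(__u64));
    } work_queue SEC(".maps");

    SEC("tracepoint/sched/sched_switch")
    int producer(void *ctx)
    {
        __u64 item = bpf_ktime_get_ns(), head;

        /* BPF_ANY: fail with E2BIG when full; BPF_EXIST would evict the
         * oldest element instead. */
        bpf_map_push_elem(&work_queue, &item, BPF_ANY);

        if (bpf_map_peek_elem(&work_queue, &head) == 0) {
            /* head now holds the front element; bpf_map_pop_elem() would
             * additionally remove it from the queue. */
        }
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";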
3709 * stored as part of a value of a map. Taking the lock allows to
3720 * * BTF description of the map is mandatory.
3723 * * Only one **struct bpf_spin_lock** is allowed per map element.
3733 * bpf_spin_lock** *lock*\ **;** field of a map is not allowed.
3735 * of the map value must be a struct and have **struct
3738 * * The **struct bpf_spin_lock** *lock* field in a map value must
3745 * networking packet (it can only be inside of a map value).
3750 * * **bpf_spin_lock** is not allowed in inner maps of map-in-map.
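A sketch of the rules above: the lock is a field of the BTF-described map value, and updates are bracketed by bpf_spin_lock()/bpf_spin_unlock() (map name and tc attach point are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct counter {
        struct bpf_spin_lock lock;    /* exactly one lock per map element */
        __u64 value;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 128);
        __type(key, __u32);
        __type(value, struct counter);
    } locked_counters SEC(".maps");

    SEC("tc")
    int bump(struct __sk_buff *skb)
    {
        __u32 key = skb->ifindex;
        struct counter *c;

        c = bpf_map_lookup_elem(&locked_counters, &key);
        if (c) {
            bpf_spin_lock(&c->lock);
            c->value++;               /* serialized across CPUs */
            bpf_spin_unlock(&c->lock);
        }
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";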
3946 * void *bpf_sk_storage_get(struct bpf_map *map, void *sk, void *value, u64 flags)
3951 * a *map* with *sk* as the **key**. From this
3953 * **bpf_map_lookup_elem**\ (*map*, **&**\ *sk*) except this
3954 * helper enforces the key must be a full socket and the map must
3958 * the *map*. The *map* is used as the bpf-local-storage
3959 * "type". The bpf-local-storage "type" (i.e. the *map*) is
3977 * long bpf_sk_storage_delete(struct bpf_map *map, void *sk)
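A sockops sketch of socket-local storage: the *map* supplies the value type, *sk* is the key, and **BPF_SK_STORAGE_GET_F_CREATE** creates a zeroed per-socket value on first access (the map name and counting use are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_SK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, __u64);
    } per_sock_count SEC(".maps");

    SEC("sockops")
    int touch_storage(struct bpf_sock_ops *ctx)
    {
        __u64 *storage;

        if (!ctx->sk)
            return 1;

        /* Create-on-miss: a zero-initialized value is returned if absent. */
        storage = bpf_sk_storage_get(&per_sock_count, ctx->sk, NULL,
                                     BPF_SK_STORAGE_GET_F_CREATE);
        if (storage)
            *storage += 1;            /* e.g. count sockops callbacks */
        return 1;
    }

    char LICENSE[] SEC("license") = "GPL";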
4028 * long bpf_skb_output(void *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)
4031 * *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
4036 * The *flags* are used to indicate the index in *map* for which
4176 * long bpf_xdp_output(void *ctx, struct bpf_map *map, u64 flags, void *data, u64 size)
4179 * *map* of type **BPF_MAP_TYPE_PERF_EVENT_ARRAY**. This perf
4184 * The *flags* are used to indicate the index in *map* for which
4673 * void *bpf_inode_storage_get(struct bpf_map *map, void *inode, void *value, u64 flags)
4678 * a *map* with *inode* as the **key**. From this
4680 * **bpf_map_lookup_elem**\ (*map*, **&**\ *inode*) except this
4681 * helper enforces the key must be an inode and the map must also
4685 * the *map*. The *map* is used as the bpf-local-storage
4686 * "type". The bpf-local-storage "type" (i.e. the *map*) is
4701 * int bpf_inode_storage_delete(struct bpf_map *map, void *inode)
4738 * larger programs can use map data to store the string
4847 * void *bpf_task_storage_get(struct bpf_map *map, struct task_struct *task, void *value, u64 flags)
4852 * a *map* with *task* as the **key**. From this
4854 * **bpf_map_lookup_elem**\ (*map*, **&**\ *task*) except this
4855 * helper enforces the key must be a task_struct and the map must also
4859 * the *map*. The *map* is used as the bpf-local-storage
4860 * "type". The bpf-local-storage "type" (i.e. the *map*) is
4875 * long bpf_task_storage_delete(struct bpf_map *map, struct task_struct *task)
4996 * long bpf_for_each_map_elem(struct bpf_map *map, void *callback_fn, void *callback_ctx, u64 flags)
4998 * For each element in **map**, call **callback_fn** function with
4999 * **map**, **callback_ctx** and other map-specific parameters.
5005 * The following are a list of supported map types and their
5012 * long (\*callback_fn)(struct bpf_map \*map, const void \*key, void \*value, void \*ctx);
5022 * The number of traversed map elements for success, **-EINVAL** for
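A sketch of the callback contract quoted above: the callback receives (map, key, value, ctx) and returns 0 to continue or 1 to stop; here it sums u64 values of an assumed array map:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 64);
        __type(key, __u32);
        __type(value, __u64);
    } totals SEC(".maps");

    static long sum_cb(struct bpf_map *map, const void *key, void *value, void *ctx)
    {
        *(__u64 *)ctx += *(__u64 *)value;
        return 0;                     /* 0 = keep iterating, 1 = stop early */
    }

    SEC("tracepoint/syscalls/sys_enter_getpid")
    int report(void *tp_ctx)
    {
        __u64 sum = 0;

        /* Returns the number of traversed elements or a negative error. */
        bpf_for_each_map_elem(&totals, sum_cb, &sum, 0);
        bpf_printk("sum=%llu", sum);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";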
5028 * based on a format string stored in a read-only map pointed by
5071 * long bpf_timer_init(struct bpf_timer *timer, struct bpf_map *map, u64 flags)
5078 * the same *map*.
5083 * **-EPERM** if *timer* is in a map that doesn't have any user references.
5084 * The user space should either hold a file descriptor to a map with timers
5085 * or pin such map in bpffs. When map is unpinned or file descriptor is
5086 * closed all timers in the map will be cancelled and freed.
5094 * **-EPERM** if *timer* is in a map that doesn't have any user references.
5095 * The user space should either hold a file descriptor to a map with timers
5096 * or pin such map in bpffs. When map is unpinned or file descriptor is
5097 * closed all timers in the map will be cancelled and freed.
5105 * Since struct bpf_timer is a field inside map element the map
5108 * When user space reference to a map reaches zero all timers
5109 * in a map are cancelled and corresponding program's refcnts are
5111 * process doesn't leave any timers running. If map is pinned in
5114 * cancel and free the timer in the given map element.
5115 * The map can contain timers that invoke callback_fn-s from different
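A sketch of the timer lifecycle described above, assuming an array map with one element and CLOCK_MONOTONIC as the clock: init binds the timer to its map, set_callback registers the function, start arms it with a delay in nanoseconds:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct timer_elem {
        struct bpf_timer t;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, struct timer_elem);
    } timer_map SEC(".maps");

    static int timer_cb(void *map, __u32 *key, struct timer_elem *elem)
    {
        bpf_printk("timer fired");
        return 0;
    }

    SEC("tracepoint/syscalls/sys_enter_nanosleep")
    int arm_timer(void *ctx)
    {
        __u32 key = 0;
        struct timer_elem *elem;

        elem = bpf_map_lookup_elem(&timer_map, &key);
        if (!elem)
            return 0;

        bpf_timer_init(&elem->t, &timer_map, 1 /* CLOCK_MONOTONIC */);
        bpf_timer_set_callback(&elem->t, timer_cb);
        bpf_timer_start(&elem->t, 1000000000 /* 1 second */, 0);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";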
5415 * corresponding release function, or moved into a BPF map before
5418 * void *bpf_map_lookup_percpu_elem(struct bpf_map *map, const void *key, u32 cpu)
5420 * Perform a lookup in *percpu map* for an entry associated to
5436 * *data* must be a ptr to a map value.
5605 * long bpf_user_ringbuf_drain(struct bpf_map *map, void *callback_fn, void *ctx, u64 flags)
5621 * buffer. If a user-space producer was epoll-waiting on this map,
5643 * void *bpf_cgrp_storage_get(struct bpf_map *map, struct cgroup *cgroup, void *value, u64 flags)
5648 * a *map* with *cgroup* as the **key**. From this
5650 * **bpf_map_lookup_elem**\ (*map*, **&**\ *cgroup*) except this
5651 * helper enforces the key must be a cgroup struct and the map must also
5656 * **BPF_MAP_TYPE_CGRP_STORAGE** map. When the local-storage value is
5657 * queried for some *map* on a *cgroup* object, the kernel will perform an
5659 * *cgroup* object until the local-storage value for the *map* is found.
5673 * long bpf_cgrp_storage_delete(struct bpf_map *map, struct cgroup *cgroup)
6340 /* DEVMAP map-value layout
6342 * The struct data-layout of map-value is a configuration interface.
6348 int fd; /* prog fd on map write */
6349 __u32 id; /* prog id on map read */
6353 /* CPUMAP map-value layout
6355 * The struct data-layout of map-value is a configuration interface.
6361 int fd; /* prog fd on map write */
6362 __u32 id; /* prog id on map read */
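User space fills in these structs when updating a DEVMAP entry; a sketch that installs an egress ifindex together with a program to run on that slot (libbpf wrapper assumed; the map must have been created with value_size == sizeof(struct bpf_devmap_val)):

    #include <bpf/bpf.h>    /* libbpf syscall wrappers (assumed available) */

    int set_devmap_slot(int devmap_fd, __u32 slot, __u32 ifindex, int prog_fd)
    {
        struct bpf_devmap_val val = {
            .ifindex = ifindex,        /* egress device for this slot */
            .bpf_prog.fd = prog_fd,    /* fd on write; lookups report the id */
        };

        return bpf_map_update_elem(devmap_fd, &slot, &val, BPF_ANY);
    }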
6530 } map;