
[bpf] xsk: fix number of pinned pages/umem size discrepancy

Message ID 20200910075609.7904-1-bjorn.topel@gmail.com
State Accepted
Delegated to: BPF Maintainers
Series [bpf] xsk: fix number of pinned pages/umem size discrepancy

Commit Message

Björn Töpel Sept. 10, 2020, 7:56 a.m. UTC
From: Björn Töpel <bjorn.topel@intel.com>

For AF_XDP sockets, there was a discrepancy between the number of
pinned pages and the size of the umem region.

The size of the umem region is used to validate the AF_XDP descriptor
addresses. The logic that pinned the pages covered by the region only
took whole pages into consideration, creating a mismatch between the
size and pinned pages. A user could then pass AF_XDP addresses outside
the range of pinned pages, but still within the size of the region,
crashing the kernel.

This change correctly calculates the number of pages to be
pinned. Further, the size check for the aligned mode is
simplified. Now the code simply checks if the size is divisible by the
chunk size.

Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
Reported-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
---
 net/xdp/xdp_umem.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)


base-commit: 746f534a4809e07f427f7d13d10f3a6a9641e5c3

Comments

Ciara Loftus Sept. 10, 2020, 9:24 a.m. UTC | #1
> 
> From: Björn Töpel <bjorn.topel@intel.com>
> 
> For AF_XDP sockets, there was a discrepancy between the number of
> pinned pages and the size of the umem region.
> 
> The size of the umem region is used to validate the AF_XDP descriptor
> addresses. The logic that pinned the pages covered by the region only
> took whole pages into consideration, creating a mismatch between the
> size and pinned pages. A user could then pass AF_XDP addresses outside
> the range of pinned pages, but still within the size of the region,
> crashing the kernel.
> 
> This change correctly calculates the number of pages to be
> pinned. Further, the size check for the aligned mode is
> simplified. Now the code simply checks if the size is divisible by the
> chunk size.
> 
> Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Thanks for the patch Björn.

Tested-by: Ciara Loftus <ciara.loftus@intel.com>

Song Liu Sept. 14, 2020, 5:14 p.m. UTC | #2
On Thu, Sep 10, 2020 at 2:29 AM Loftus, Ciara <ciara.loftus@intel.com> wrote:
>
> >
> > From: Björn Töpel <bjorn.topel@intel.com>
> >
> > For AF_XDP sockets, there was a discrepancy between the number of
> > pinned pages and the size of the umem region.
> >
> > The size of the umem region is used to validate the AF_XDP descriptor
> > addresses. The logic that pinned the pages covered by the region only
> > took whole pages into consideration, creating a mismatch between the
> > size and pinned pages. A user could then pass AF_XDP addresses outside
> > the range of pinned pages, but still within the size of the region,
> > crashing the kernel.
> >
> > This change correctly calculates the number of pages to be
> > pinned. Further, the size check for the aligned mode is
> > simplified. Now the code simply checks if the size is divisible by the
> > chunk size.
> >
> > Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> > Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> > Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
>
> Thanks for the patch Björn.
>
> Tested-by: Ciara Loftus <ciara.loftus@intel.com>

Acked-by: Song Liu <songliubraving@fb.com>
Alexei Starovoitov Sept. 15, 2020, 1:37 a.m. UTC | #3
On Thu, Sep 10, 2020 at 12:56 AM Björn Töpel <bjorn.topel@gmail.com> wrote:
>
> From: Björn Töpel <bjorn.topel@intel.com>
>
> For AF_XDP sockets, there was a discrepancy between the number of
> pinned pages and the size of the umem region.
>
> The size of the umem region is used to validate the AF_XDP descriptor
> addresses. The logic that pinned the pages covered by the region only
> took whole pages into consideration, creating a mismatch between the
> size and pinned pages. A user could then pass AF_XDP addresses outside
> the range of pinned pages, but still within the size of the region,
> crashing the kernel.
>
> This change correctly calculates the number of pages to be
> pinned. Further, the size check for the aligned mode is
> simplified. Now the code simply checks if the size is divisible by the
> chunk size.
>
> Fixes: bbff2f321a86 ("xsk: new descriptor addressing scheme")
> Reported-by: Ciara Loftus <ciara.loftus@intel.com>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>

Applied. Thanks

Patch

diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index e97db37354e4..b010bfde0149 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -303,10 +303,10 @@  static int xdp_umem_account_pages(struct xdp_umem *umem)
 
 static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 {
+	u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
 	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
-	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
 	u64 npgs, addr = mr->addr, size = mr->len;
-	unsigned int chunks, chunks_per_page;
+	unsigned int chunks, chunks_rem;
 	int err;
 
 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
@@ -336,19 +336,18 @@  static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
 	if ((addr + size) < addr)
 		return -EINVAL;
 
-	npgs = size >> PAGE_SHIFT;
+	npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
+	if (npgs_rem)
+		npgs++;
 	if (npgs > U32_MAX)
 		return -EINVAL;
 
-	chunks = (unsigned int)div_u64(size, chunk_size);
+	chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
 	if (chunks == 0)
 		return -EINVAL;
 
-	if (!unaligned_chunks) {
-		chunks_per_page = PAGE_SIZE / chunk_size;
-		if (chunks < chunks_per_page || chunks % chunks_per_page)
-			return -EINVAL;
-	}
+	if (!unaligned_chunks && chunks_rem)
+		return -EINVAL;
 
 	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
 		return -EINVAL;