| Message ID | CAG48ez1Fz_N+jA5=Nv-TDaVFcqqhE3K3hR8XG6SFpsmoeyhsuw@mail.gmail.com |
|---|---|
| State | RFC, archived |
| Delegated to | BPF Maintainers |
| Series | BPF: bug without effect in BPF_RSH case of adjust_scalar_min_max_vals() |
On 04/12/17 17:03, Jann Horn wrote:
> As far as I can tell, commit b03c9f9fdc37 ("bpf/verifier: track signed
> and unsigned min/max values") introduced the following effectless bug
> in the BPF_RSH case of adjust_scalar_min_max_vals() (unless that's
> intentional):
>
> `dst_reg->smax_value` is only updated in the case where
> `dst_reg->smin_value < 0` and `umin_val == 0`. This is obviously
> harmless if `dst_reg->smax_value >= 0`, but if `dst_reg->smax_value <
> 0`, this will temporarily result in a state where the signed upper
> bound of `dst_reg` is lower than the signed lower bound (which will be
> set to 0). I don't think this should ever happen.

Yep, I think you're right that there's a bug there; but I'm not sure it's
harmless in the dst_reg->smax_value >= 0 case either. Consider: if
dst_reg->smin_value < 0 and dst_reg->smax_value >= 0, then -1 is a possible
value; and ((u64)-1) >> 1 == S64_MAX. So in that case we have to set
dst_reg->smax_value = ((u64)-1) >> umin_val (so long as umin_val isn't 0,
which is the other branch).

If dst_reg->smax_value < 0, then we should set
dst_reg->smax_value = ((u64)dst_reg->smax_value) >> umin_val, again
excepting the case of umin_val == 0.

Thanks for spotting this! I'll rustle up a patch tomorrow, unless you beat
me to it. Can I have an SOB for your BPF bytecode, so I can incorporate it
into selftests?
-Ed
On Mon, Dec 4, 2017 at 6:03 PM, Jann Horn <jannh@google.com> wrote:
> As far as I can tell, commit b03c9f9fdc37 ("bpf/verifier: track signed
> and unsigned min/max values") introduced the following effectless bug
> in the BPF_RSH case of adjust_scalar_min_max_vals() (unless that's
> intentional):
[...]
> =======================
> BPF_LD_MAP_FD(BPF_REG_ARG1, mapfd),
>
> BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_FP),
> BPF_ALU64_IMM(BPF_ADD, BPF_REG_TMP, -4), // allocate 4 bytes stack
> BPF_MOV32_IMM(BPF_REG_ARG2, 1),
> BPF_STX_MEM(BPF_W, BPF_REG_TMP, BPF_REG_ARG2, 0),
> BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_TMP),
> BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
> BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
> BPF_MOV64_REG(BPF_REG_0, 0), // prepare exit
> BPF_EXIT_INSN(), // exit
> BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
>
> BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 0xf),
> BPF_MOV64_IMM(BPF_REG_1, -42),
> BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
> BPF_MOV64_IMM(BPF_REG_2, 2),
> BPF_ALU64_REG(BPF_RSH, BPF_REG_1, BPF_REG_2),
> BPF_EXIT_INSN()
> =======================

For using the eBPF bytecode in selftests:

Signed-off-by: Jann Horn <jannh@google.com>
=======================
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d4593571c404..bcf6a4aa25cd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2205,8 +2205,10 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
 		dst_reg->umin_value >>= umax_val;
 		dst_reg->umax_value >>= umin_val;
+		pr_warn("BPF_RSH point A: smin=%lld, smax=%lld, umin=%llx, umax=%llx, tribits=%llx, trimask=%llx\n",
+			dst_reg->smin_value, dst_reg->smax_value, dst_reg->umin_value, dst_reg->umax_value, dst_reg->var_off.value, dst_reg->var_off.mask);
 		/* We may learn something more from the var_off */
 		__update_reg_bounds(dst_reg);