
[v2,1/5] msix_init: assert programming error

Message ID 1471944454-13895-2-git-send-email-caoj.fnst@cn.fujitsu.com
State New

Commit Message

Cao jin Aug. 23, 2016, 9:27 a.m. UTC
The input parameters are used to create the MSI-X capable device, so
they must obey the PCI spec; otherwise it is a programming error.

CC: Markus Armbruster <armbru@redhat.com>
CC: Marcel Apfelbaum <marcel@redhat.com>
CC: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
---
 hw/pci/msix.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Comments

Markus Armbruster Sept. 12, 2016, 1:29 p.m. UTC | #1
Cao jin <caoj.fnst@cn.fujitsu.com> writes:

> The input parameters is used for creating the msix capable device, so
> they must obey the PCI spec, or else, it should be programming error.

True when the parameters come from a device model attempting to
define a PCI device violating the spec.  But what if the parameters come
from an actual PCI device violating the spec, via device assignment?

For what it's worth, the new behavior seems consistent with msi_init(),
which is good.

> CC: Markus Armbruster <armbru@redhat.com>
> CC: Marcel Apfelbaum <marcel@redhat.com>
> CC: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
> ---
>  hw/pci/msix.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/hw/pci/msix.c b/hw/pci/msix.c
> index 0ec1cb1..384a29d 100644
> --- a/hw/pci/msix.c
> +++ b/hw/pci/msix.c
> @@ -253,9 +253,7 @@ int msix_init(struct PCIDevice *dev, unsigned short nentries,
>          return -ENOTSUP;
>      }
>  
> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
> -        return -EINVAL;
> -    }
> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
>  
>      table_size = nentries * PCI_MSIX_ENTRY_SIZE;
>      pba_size = QEMU_ALIGN_UP(nentries, 64) / 8;
> @@ -266,7 +264,7 @@ int msix_init(struct PCIDevice *dev, unsigned short nentries,
       /* Sanity test: table & pba don't overlap, fit within BARs, min aligned */
       if ((table_bar_nr == pba_bar_nr &&
            ranges_overlap(table_offset, table_size, pba_offset, pba_size)) ||
>          table_offset + table_size > memory_region_size(table_bar) ||
>          pba_offset + pba_size > memory_region_size(pba_bar) ||
>          (table_offset | pba_offset) & PCI_MSIX_FLAGS_BIRMASK) {
> -        return -EINVAL;
> +        assert(0);
>      }

Instead of

    if (... complicated condition ...) {
        assert(0);
    }

let's write

    assert(... negation of the complicated condition ...);
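
Something like this, perhaps (just a sketch of the negated condition,
not text to take verbatim):

    assert(!(table_bar_nr == pba_bar_nr &&
             ranges_overlap(table_offset, table_size, pba_offset, pba_size)) &&
           table_offset + table_size <= memory_region_size(table_bar) &&
           pba_offset + pba_size <= memory_region_size(pba_bar) &&
           !((table_offset | pba_offset) & PCI_MSIX_FLAGS_BIRMASK));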

>  
>      cap = pci_add_capability(dev, PCI_CAP_ID_MSIX, cap_pos, MSIX_CAP_LENGTH);
Cao jin Sept. 13, 2016, 2:51 a.m. UTC | #2
On 09/12/2016 09:29 PM, Markus Armbruster wrote:
> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>
>> The input parameters is used for creating the msix capable device, so
>> they must obey the PCI spec, or else, it should be programming error.
>
> True when the the parameters come from a device model attempting to
> define a PCI device violating the spec.  But what if the parameters come
> from an actual PCI device violating the spec, via device assignment?

Before the patch, on an invalid param, the vfio behaviour is:
   error_report("vfio: msix_init failed");
   then device creation fails.

After the patch, its behaviour is:
   it asserts.

Do you mean we should still report some useful info to the user on
invalid params?

Cao jin
>
> For what it's worth, the new behavior seems consistent with msi_init(),
> which is good.
>
>> CC: Markus Armbruster <armbru@redhat.com>
>> CC: Marcel Apfelbaum <marcel@redhat.com>
>> CC: Michael S. Tsirkin <mst@redhat.com>
>> Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
>> ---
>>   hw/pci/msix.c | 6 ++----
>>   1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/pci/msix.c b/hw/pci/msix.c
>> index 0ec1cb1..384a29d 100644
>> --- a/hw/pci/msix.c
>> +++ b/hw/pci/msix.c
>> @@ -253,9 +253,7 @@ int msix_init(struct PCIDevice *dev, unsigned short nentries,
>>           return -ENOTSUP;
>>       }
>>
>> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
>> -        return -EINVAL;
>> -    }
>> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
>>
>>       table_size = nentries * PCI_MSIX_ENTRY_SIZE;
>>       pba_size = QEMU_ALIGN_UP(nentries, 64) / 8;
>> @@ -266,7 +264,7 @@ int msix_init(struct PCIDevice *dev, unsigned short nentries,
>         /* Sanity test: table & pba don't overlap, fit within BARs, min aligned */
>         if ((table_bar_nr == pba_bar_nr &&
>              ranges_overlap(table_offset, table_size, pba_offset, pba_size)) ||
>>           table_offset + table_size > memory_region_size(table_bar) ||
>>           pba_offset + pba_size > memory_region_size(pba_bar) ||
>>           (table_offset | pba_offset) & PCI_MSIX_FLAGS_BIRMASK) {
>> -        return -EINVAL;
>> +        assert(0);
>>       }
>
> Instead of
>
>      if (... complicated condition ...) {
>          assert(0);
>      }
>
> let's write
>
>      assert(... negation of the complicated condition ...);
>
>>
>>       cap = pci_add_capability(dev, PCI_CAP_ID_MSIX, cap_pos, MSIX_CAP_LENGTH);
>
>
> .
>
Markus Armbruster Sept. 13, 2016, 6:16 a.m. UTC | #3
Cc: Alex for device assignment expertise.

Cao jin <caoj.fnst@cn.fujitsu.com> writes:

> On 09/12/2016 09:29 PM, Markus Armbruster wrote:
>> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>>
>>> The input parameters is used for creating the msix capable device, so
>>> they must obey the PCI spec, or else, it should be programming error.
>>
>> True when the the parameters come from a device model attempting to
>> define a PCI device violating the spec.  But what if the parameters come
>> from an actual PCI device violating the spec, via device assignment?
>
> Before the patch, on invalid param, the vfio behaviour is:
>   error_report("vfio: msix_init failed");
>   then, device create fail.
>
> After the patch, its behaviour is:
>   asserted.
>
> Do you mean we should still report some useful info to user on invalid
> params?

In the normal case, asking msix_init() to create MSI-X that are out of
spec is a programming error: the code that does it is broken and needs
fixing.

Device assignment might be the exception: there, the parameters for
msix_init() come from the assigned device, not the program.  If they
violate the spec, the device is broken.  This wouldn't be a programming
error.  Alex, can this happen?

If yes, we may want to handle it by failing device assignment.
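
For illustration, handling it at the assignment call site could look
roughly like this (a sketch with made-up variable names, not the actual
hw/vfio/pci.c code):

    ret = msix_init(&vdev->pdev, nentries,
                    table_mr, table_bar_nr, table_offset,
                    pba_mr, pba_bar_nr, pba_offset, cap_pos);
    if (ret < 0) {
        error_report("vfio: msix_init failed");
        return ret;    /* fail device assignment instead of aborting */
    }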

> Cao jin
>>
>> For what it's worth, the new behavior seems consistent with msi_init(),
>> which is good.

Whatever behavior on out-of-spec parameters we choose, msi_init() and
msix_init() should behave the same.
Alex Williamson Sept. 13, 2016, 2:49 p.m. UTC | #4
On Tue, 13 Sep 2016 08:16:20 +0200
Markus Armbruster <armbru@redhat.com> wrote:

> Cc: Alex for device assignment expertise.
> 
> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> 
> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:  
> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >>  
> >>> The input parameters is used for creating the msix capable device, so
> >>> they must obey the PCI spec, or else, it should be programming error.  
> >>
> >> True when the the parameters come from a device model attempting to
> >> define a PCI device violating the spec.  But what if the parameters come
> >> from an actual PCI device violating the spec, via device assignment?  
> >
> > Before the patch, on invalid param, the vfio behaviour is:
> >   error_report("vfio: msix_init failed");
> >   then, device create fail.
> >
> > After the patch, its behaviour is:
> >   asserted.
> >
> > Do you mean we should still report some useful info to user on invalid
> > params?  
> 
> In the normal case, asking msix_init() to create MSI-X that are out of
> spec is a programming error: the code that does it is broken and needs
> fixing.
> 
> Device assignment might be the exception: there, the parameters for
> msix_init() come from the assigned device, not the program.  If they
> violate the spec, the device is broken.  This wouldn't be a programming
> error.  Alex, can this happen?
> 
> If yes, we may want to handle it by failing device assignment.


Generally, I think the entire premise of these sorts of patches is
flawed.  We take a working error path that allows a driver to robustly
abort on unexpected data and turn it into a time bomb.  Often the
excuse for this is that "error handling is hard".  Tough.  Now a
hot-add of a device that triggers this changes from a simple failure to
a denial of service event.  Furthermore, we base that time bomb on our
interpretation of the spec, which we can only validate against in-tree
devices.

We have actually had assigned devices that fail the sanity test here,
there's a quirk in vfio_msix_early_setup() for a Chelsio device with
this bug.  Do we really want users experiencing aborts when a simple
device initialization failure is sufficient?
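
(That quirk amounts to something like the following -- paraphrased from
memory, not a verbatim quote; bar_size and is_chelsio_t5_vf() are
made-up placeholders:)

    if (msix->pba_offset >= bar_size) {
        if (is_chelsio_t5_vf(vdev)) {
            msix->pba_offset = 0x1000;    /* known-good value for these VFs */
        } else {
            error_report("vfio: hardware reports invalid configuration, "
                         "MSI-X PBA outside of specified BAR");
            return -EINVAL;
        }
    }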

Generally abort code paths like this cause me to do my own sanity
testing, which is really poor practice since we should have that sanity
testing in the common code.  Thanks,

Alex
Markus Armbruster Sept. 29, 2016, 1:11 p.m. UTC | #5
Alex Williamson <alex.williamson@redhat.com> writes:

> On Tue, 13 Sep 2016 08:16:20 +0200
> Markus Armbruster <armbru@redhat.com> wrote:
>
>> Cc: Alex for device assignment expertise.
>> 
>> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> 
>> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:  
>> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> >>  
>> >>> The input parameters is used for creating the msix capable device, so
>> >>> they must obey the PCI spec, or else, it should be programming error.  
>> >>
>> >> True when the the parameters come from a device model attempting to
>> >> define a PCI device violating the spec.  But what if the parameters come
>> >> from an actual PCI device violating the spec, via device assignment?  
>> >
>> > Before the patch, on invalid param, the vfio behaviour is:
>> >   error_report("vfio: msix_init failed");
>> >   then, device create fail.
>> >
>> > After the patch, its behaviour is:
>> >   asserted.
>> >
>> > Do you mean we should still report some useful info to user on invalid
>> > params?  
>> 
>> In the normal case, asking msix_init() to create MSI-X that are out of
>> spec is a programming error: the code that does it is broken and needs
>> fixing.
>> 
>> Device assignment might be the exception: there, the parameters for
>> msix_init() come from the assigned device, not the program.  If they
>> violate the spec, the device is broken.  This wouldn't be a programming
>> error.  Alex, can this happen?
>> 
>> If yes, we may want to handle it by failing device assignment.
>
>
> Generally, I think the entire premise of these sorts of patches is
> flawed.  We take a working error path that allows a driver to robustly
> abort on unexpected date and turn it into a time bomb.  Often the
> excuse for this is that "error handling is hard".  Tough.  Now a
> hot-add of a device that triggers this changes from a simple failure to
> a denial of service event.  Furthermore, we base that time bomb on our
> interpretation of the spec, which we can only validate against in-tree
> devices.
>
> We have actually had assigned devices that fail the sanity test here,
> there's a quirk in vfio_msix_early_setup() for a Chelsio device with
> this bug.  Do we really want user experiencing aborts when a simple
> device initialization failure is sufficient?
>
> Generally abort code paths like this cause me to do my own sanity
> testing, which is really poor practice since we should have that sanity
> testing in the common code.  Thanks,

I prefer to assert on programming error, because 1. it does double duty
as documentation, 2. error handling of impossible conditions is commonly
wrong, and 3. assertion failures have a much better chance to get the
program fixed.  Even when the presence of a working error path kills 2., the
other two make me stick to assertions.

However, out-of-spec input is not a programming error.  For most users
of msix_init(), the arguments are hard-coded, thus invalid arguments are
a programming error.  For device assignment, they come from a physical
device, thus invalid arguments can either be a programming error (our
idea of "invalid" is invalid) or bad input (the physical device is
out-of-spec).  Since we can't know, we better handle it rather than
assert.

Bottom line: you convinced me msix_init() should stay as it is.  But now
msi_init() looks like it needs a change: it asserts on invalid
nr_vectors parameter.  Does that need fixing, Alex?
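
(For orientation, the assertion in question has roughly this shape -- a
paraphrase, not the exact hw/pci/msi.c code:)

    /* MSI allows only 1, 2, 4, 8, 16 or 32 vectors */
    assert(nr_vectors >= 1 && nr_vectors <= PCI_MSI_VECTORS_MAX &&
           is_power_of_2(nr_vectors));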
Alex Williamson Sept. 29, 2016, 4:10 p.m. UTC | #6
On Thu, 29 Sep 2016 15:11:27 +0200
Markus Armbruster <armbru@redhat.com> wrote:

> Alex Williamson <alex.williamson@redhat.com> writes:
> 
> > On Tue, 13 Sep 2016 08:16:20 +0200
> > Markus Armbruster <armbru@redhat.com> wrote:
> >  
> >> Cc: Alex for device assignment expertise.
> >> 
> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >>   
> >> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:    
> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >> >>    
> >> >>> The input parameters is used for creating the msix capable device, so
> >> >>> they must obey the PCI spec, or else, it should be programming error.    
> >> >>
> >> >> True when the the parameters come from a device model attempting to
> >> >> define a PCI device violating the spec.  But what if the parameters come
> >> >> from an actual PCI device violating the spec, via device assignment?    
> >> >
> >> > Before the patch, on invalid param, the vfio behaviour is:
> >> >   error_report("vfio: msix_init failed");
> >> >   then, device create fail.
> >> >
> >> > After the patch, its behaviour is:
> >> >   asserted.
> >> >
> >> > Do you mean we should still report some useful info to user on invalid
> >> > params?    
> >> 
> >> In the normal case, asking msix_init() to create MSI-X that are out of
> >> spec is a programming error: the code that does it is broken and needs
> >> fixing.
> >> 
> >> Device assignment might be the exception: there, the parameters for
> >> msix_init() come from the assigned device, not the program.  If they
> >> violate the spec, the device is broken.  This wouldn't be a programming
> >> error.  Alex, can this happen?
> >> 
> >> If yes, we may want to handle it by failing device assignment.  
> >
> >
> > Generally, I think the entire premise of these sorts of patches is
> > flawed.  We take a working error path that allows a driver to robustly
> > abort on unexpected date and turn it into a time bomb.  Often the
> > excuse for this is that "error handling is hard".  Tough.  Now a
> > hot-add of a device that triggers this changes from a simple failure to
> > a denial of service event.  Furthermore, we base that time bomb on our
> > interpretation of the spec, which we can only validate against in-tree
> > devices.
> >
> > We have actually had assigned devices that fail the sanity test here,
> > there's a quirk in vfio_msix_early_setup() for a Chelsio device with
> > this bug.  Do we really want user experiencing aborts when a simple
> > device initialization failure is sufficient?
> >
> > Generally abort code paths like this cause me to do my own sanity
> > testing, which is really poor practice since we should have that sanity
> > testing in the common code.  Thanks,  
> 
> I prefer to assert on programming error, because 1. it does double duty
> as documentation, 2. error handling of impossible conditions is commonly
> wrong, and 3. assertion failures have a much better chance to get the
> program fixed.  Even when presence of a working error path kills 2., the
> other two make me stick to assertions.

So we're looking at:

> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
> -        return -EINVAL;
> -    }

vs

> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);

How do you argue that one of these provides better self documentation
than the other?

The assert may have a better chance of getting fixed, but it's because
the existence of the assert itself exposes a vulnerability in the code.
Which would you rather have in production, a VMM that crashes on the
slightest deviance from the input it expects or one that simply errors
the faulting code path and continues?

Error handling is hard, which is why we need to look at it as a
collection of smaller problems.  We return an error at a leaf function
and let callers of that function decide how to handle it.  If some of
those callers don't want to deal with error handling, abort there, we
can come back to them later, but let the code paths that do want proper
error handling continue.  If we add aborts into the leaf function,
then any calling path that wants to be robust against an error needs to
fully sanitize the input itself, at which point we have different
drivers sanitizing in different ways, all building up walls to protect
themselves from the time bombs in these leaf functions.  It's crazy.
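
Concretely, the split I'm describing is roughly this (a generic sketch,
not code from the tree; the names are made up):

    /* leaf: validate and report, don't decide policy */
    int leaf_init(PCIDevice *dev, unsigned nentries)
    {
        if (nentries < 1 || nentries > MAX_ENTRIES) {
            return -EINVAL;    /* caller decides what failure means */
        }
        ...
    }

    /* emulated device with hard-coded arguments: a failure here really
     * is a programming error, so asserting at the call site is fine */
    ret = leaf_init(dev, 8);
    assert(ret == 0);

    /* device assignment: the arguments come from hardware, so fail the
     * hot-add gracefully instead of killing the VM */
    ret = leaf_init(dev, hw_nentries);
    if (ret < 0) {
        error_report("device assignment failed: bad MSI-X capability");
        return ret;
    }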

> However, input out-of-spec is not a programming error.  For most users
> of msix_init(), the arguments are hard-coded, thus invalid arguments are
> a programming error.  For device assignment, they come from a physical
> device, thus invalid arguments can either be a programming error (our
> idea of "invalid" is invalid) or bad input (the physical device is
> out-of-spec).  Since we can't know, we better handle it rather than
> assert.

So are we going to flag every call path that device assignment might
use as one that needs "proper" error handling any anything that's only
used by emulated devices can assert?  How will anyone ever know?  vfio
tries really hard to be just another device in the QEMU ecosystem.

> Bottom line: you convinced me msix_init() should stay as it is.  But now
> msi_init() looks like it needs a change: it asserts on invalid
> nr_vectors parameter.  Does that need fixing, Alex?

IMHO, they all need to be fixed.  Besides, look at the callers of
msi_init(), almost every one will assert on its own if msi_init()
fails, all we're doing is hindering drivers like vfio-pci that can
gracefully handle a failure.  I think that's exactly how each of these
should be handled, find a leaf function with asserts, convert it to
proper error handling, change the callers that don't already handle the
error or assert to assert, then work down through each code path to
figure out how they can more robustly handle an error.  I don't buy the
argument that error handling is too hard or that we're more likely to
get it wrong.  It needs to be handled as percolating small errors, each
of which is trivial to handle on its own.  Thanks,

Alex
Markus Armbruster Sept. 30, 2016, 2:06 p.m. UTC | #7
Alex Williamson <alex.williamson@redhat.com> writes:

> On Thu, 29 Sep 2016 15:11:27 +0200
> Markus Armbruster <armbru@redhat.com> wrote:
>
>> Alex Williamson <alex.williamson@redhat.com> writes:
>> 
>> > On Tue, 13 Sep 2016 08:16:20 +0200
>> > Markus Armbruster <armbru@redhat.com> wrote:
>> >  
>> >> Cc: Alex for device assignment expertise.
>> >> 
>> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> >>   
>> >> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:    
>> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> >> >>    
>> >> >>> The input parameters is used for creating the msix capable device, so
>> >> >>> they must obey the PCI spec, or else, it should be programming error.    
>> >> >>
>> >> >> True when the the parameters come from a device model attempting to
>> >> >> define a PCI device violating the spec.  But what if the parameters come
>> >> >> from an actual PCI device violating the spec, via device assignment?    
>> >> >
>> >> > Before the patch, on invalid param, the vfio behaviour is:
>> >> >   error_report("vfio: msix_init failed");
>> >> >   then, device create fail.
>> >> >
>> >> > After the patch, its behaviour is:
>> >> >   asserted.
>> >> >
>> >> > Do you mean we should still report some useful info to user on invalid
>> >> > params?    
>> >> 
>> >> In the normal case, asking msix_init() to create MSI-X that are out of
>> >> spec is a programming error: the code that does it is broken and needs
>> >> fixing.
>> >> 
>> >> Device assignment might be the exception: there, the parameters for
>> >> msix_init() come from the assigned device, not the program.  If they
>> >> violate the spec, the device is broken.  This wouldn't be a programming
>> >> error.  Alex, can this happen?
>> >> 
>> >> If yes, we may want to handle it by failing device assignment.  
>> >
>> >
>> > Generally, I think the entire premise of these sorts of patches is
>> > flawed.  We take a working error path that allows a driver to robustly
>> > abort on unexpected date and turn it into a time bomb.  Often the
>> > excuse for this is that "error handling is hard".  Tough.  Now a
>> > hot-add of a device that triggers this changes from a simple failure to
>> > a denial of service event.  Furthermore, we base that time bomb on our
>> > interpretation of the spec, which we can only validate against in-tree
>> > devices.
>> >
>> > We have actually had assigned devices that fail the sanity test here,
>> > there's a quirk in vfio_msix_early_setup() for a Chelsio device with
>> > this bug.  Do we really want user experiencing aborts when a simple
>> > device initialization failure is sufficient?
>> >
>> > Generally abort code paths like this cause me to do my own sanity
>> > testing, which is really poor practice since we should have that sanity
>> > testing in the common code.  Thanks,  
>> 
>> I prefer to assert on programming error, because 1. it does double duty
>> as documentation, 2. error handling of impossible conditions is commonly
>> wrong, and 3. assertion failures have a much better chance to get the
>> program fixed.  Even when presence of a working error path kills 2., the
>> other two make me stick to assertions.
>
> So we're looking at:
>
>> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
>> -        return -EINVAL;
>> -    }
>
> vs
>
>> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
>
> How do you argue that one of these provides better self documentation
> than the other?

The first one says "this can happen, and when it does, the function
fails cleanly."  For a genuine programming error, this is in part
misleading.

The second one says "I assert this can't happen.  We'd be toast if I was
wrong."

> The assert may have a better chance of getting fixed, but it's because
> the existence of the assert itself exposes a vulnerability in the code.
> Which would you rather have in production, a VMM that crashes on the
> slightest deviance from the input it expects or one that simply errors
> the faulting code path and continues?

Invalid input to a program should never be treated as a programming error.

> Error handling is hard, which is why we need to look at it as a
> collection of smaller problems.  We return an error at a leaf function
> and let callers of that function decide how to handle it.  If some of
> those callers don't want to deal with error handling, abort there, we
> can come back to them later, but let the code paths that do want proper
> error handling to continue.  If we add aborts into the leaf function,
> then any calling path that wants to be robust against an error needs to
> fully sanitize the input itself, at which point we have different
> drivers sanitizing in different ways, all building up walls to protect
> themselves from the time bombs in these leaf functions.  It's crazy.

It depends on the kind of error in the leaf function.

I suspect we're talking past each other because we got different kinds
of errors in mind.

Programming is impossible without things like preconditions,
postconditions, invariants.

If a section of code is entered when its precondition doesn't hold,
we're toast.  This is the archetypical programming error.

If it can actually happen, the program is incorrect, and needs fixing.

Checking preconditions is often (but not always) practical.  In my
opinion, checking is good practice, and the proper way to check is
assert().  Makes the incorrect program fail before it can do further
damage, and helps with finding the programming error.

A precondition is part of the contract between a function and its
users.  A strong precondition can make the function's job easier, but
that's no use if the resulting function is inconvenient to use.  On the
other hand, complicating the function to get a weaker precondition
nobody actually needs is just as dumb.

Returning an error is *not* checking preconditions.  Remember, if the
precondition doesn't hold, we're toast.  If we're toast when we return
an error, we're clearly doing it wrong.

You are arguing for weaker preconditions.  I'm not actually disagreeing
with you!  I'm merely expressing my opinion that checking preconditions
with assert() is a good idea.

>> However, input out-of-spec is not a programming error.  For most users
>> of msix_init(), the arguments are hard-coded, thus invalid arguments are
>> a programming error.  For device assignment, they come from a physical
>> device, thus invalid arguments can either be a programming error (our
>> idea of "invalid" is invalid) or bad input (the physical device is
>> out-of-spec).  Since we can't know, we better handle it rather than
>> assert.
>
> So are we going to flag every call path that device assignment might
> use as one that needs "proper" error handling any anything that's only
> used by emulated devices can assert?  How will anyone ever know?  vfio
> tries really hard to be just another device in the QEMU ecosystem.

It tries, but it can't help to add a few things.

Consider the number of MSI vectors.  It can only be 1, 2, 4, 8, 16 or
32.

When the callers of msi_init() pass literal numbers, making "the number
is valid" a precondition is quite sensible.

If the numbers come from the user via configuration, they need to be
checked.  Two sane ways to do that: check close to where the
configuration is processed, and check where it is used.  The former will
likely produce better error messages.  But the latter has its
advantages, too.  Checking next to its use in msi_init() involves making
it handle invalid numbers, i.e. weakening its precondition.
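
For instance, the check-where-configured variant might look like this
(a sketch with a made-up property name, not tree code):

    /* validate the user-supplied "vectors" property early on */
    if (s->vectors < 1 || s->vectors > 32 || !is_power_of_2(s->vectors)) {
        error_setg(errp, "vectors must be 1, 2, 4, 8, 16 or 32");
        return;
    }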

Making vectors configurable moves them from the realm of
preconditions to the realm of program input.  Code needs to be updated
for that.

What device assignment adds is moving many more bits to the program
input realm.  More code needs to be updated for that.

>> Bottom line: you convinced me msix_init() should stay as it is.  But now
>> msi_init() looks like it needs a change: it asserts on invalid
>> nr_vectors parameter.  Does that need fixing, Alex?
>
> IMHO, they all need to be fixed.  Besides, look at the callers of
> msi_init(), almost every one will assert on its own if msi_init()
> fails, all we're doing is hindering drivers like vfio-pci that can
> gracefully handle a failure.  I think that's exactly how each of these
> should be handled, find a leaf function with asserts, convert it to
> proper error handling, change the callers that don't already handle the
> error or assert to assert, then work down through each code path to
> figure out how they can more robustly handle an error.  I don't buy the
> argument that error handling is too hard or that we're more likely to
> get it wrong.  It needs to be handled as percolating small errors, each
> of which is trivial to handle on its own.  Thanks,

Once there's a need to handle a certain condition as an error, we should
do that, no argument.  This also provides a way to test the error path.

However, I wouldn't buy an argument that preconditions should be made as
weak as possible in leaf functions (let alone always) regardless of the
cost in complexity, and non-testability of error paths.  I'm strictly a
pay as you go person.

Back to the problem at hand.  Cao jin, would you be willing to fix
msi_init()?
Dr. David Alan Gilbert Sept. 30, 2016, 6:06 p.m. UTC | #8
* Markus Armbruster (armbru@redhat.com) wrote:
> Alex Williamson <alex.williamson@redhat.com> writes:
> 
> > On Thu, 29 Sep 2016 15:11:27 +0200
> > Markus Armbruster <armbru@redhat.com> wrote:
> >
> >> Alex Williamson <alex.williamson@redhat.com> writes:
> >> 
> >> > On Tue, 13 Sep 2016 08:16:20 +0200
> >> > Markus Armbruster <armbru@redhat.com> wrote:
> >> >  
> >> >> Cc: Alex for device assignment expertise.
> >> >> 
> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >> >>   
> >> >> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:    
> >> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >> >> >>    
> >> >> >>> The input parameters is used for creating the msix capable device, so
> >> >> >>> they must obey the PCI spec, or else, it should be programming error.    
> >> >> >>
> >> >> >> True when the the parameters come from a device model attempting to
> >> >> >> define a PCI device violating the spec.  But what if the parameters come
> >> >> >> from an actual PCI device violating the spec, via device assignment?    
> >> >> >
> >> >> > Before the patch, on invalid param, the vfio behaviour is:
> >> >> >   error_report("vfio: msix_init failed");
> >> >> >   then, device create fail.
> >> >> >
> >> >> > After the patch, its behaviour is:
> >> >> >   asserted.
> >> >> >
> >> >> > Do you mean we should still report some useful info to user on invalid
> >> >> > params?    
> >> >> 
> >> >> In the normal case, asking msix_init() to create MSI-X that are out of
> >> >> spec is a programming error: the code that does it is broken and needs
> >> >> fixing.
> >> >> 
> >> >> Device assignment might be the exception: there, the parameters for
> >> >> msix_init() come from the assigned device, not the program.  If they
> >> >> violate the spec, the device is broken.  This wouldn't be a programming
> >> >> error.  Alex, can this happen?
> >> >> 
> >> >> If yes, we may want to handle it by failing device assignment.  
> >> >
> >> >
> >> > Generally, I think the entire premise of these sorts of patches is
> >> > flawed.  We take a working error path that allows a driver to robustly
> >> > abort on unexpected date and turn it into a time bomb.  Often the
> >> > excuse for this is that "error handling is hard".  Tough.  Now a
> >> > hot-add of a device that triggers this changes from a simple failure to
> >> > a denial of service event.  Furthermore, we base that time bomb on our
> >> > interpretation of the spec, which we can only validate against in-tree
> >> > devices.
> >> >
> >> > We have actually had assigned devices that fail the sanity test here,
> >> > there's a quirk in vfio_msix_early_setup() for a Chelsio device with
> >> > this bug.  Do we really want user experiencing aborts when a simple
> >> > device initialization failure is sufficient?
> >> >
> >> > Generally abort code paths like this cause me to do my own sanity
> >> > testing, which is really poor practice since we should have that sanity
> >> > testing in the common code.  Thanks,  
> >> 
> >> I prefer to assert on programming error, because 1. it does double duty
> >> as documentation, 2. error handling of impossible conditions is commonly
> >> wrong, and 3. assertion failures have a much better chance to get the
> >> program fixed.  Even when presence of a working error path kills 2., the
> >> other two make me stick to assertions.
> >
> > So we're looking at:
> >
> >> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
> >> -        return -EINVAL;
> >> -    }
> >
> > vs
> >
> >> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
> >
> > How do you argue that one of these provides better self documentation
> > than the other?
> 
> The first one says "this can happen, and when it does, the function
> fails cleanly."  For a genuine programming error, this is in part
> misleading.
> 
> The second one says "I assert this can't happen.  We'd be toast if I was
> wrong."
> 
> > The assert may have a better chance of getting fixed, but it's because
> > the existence of the assert itself exposes a vulnerability in the code.
> > Which would you rather have in production, a VMM that crashes on the
> > slightest deviance from the input it expects or one that simply errors
> > the faulting code path and continues?
> 
> Invalid input to a program should never be treated as programming error.
> 
> > Error handling is hard, which is why we need to look at it as a
> > collection of smaller problems.  We return an error at a leaf function
> > and let callers of that function decide how to handle it.  If some of
> > those callers don't want to deal with error handling, abort there, we
> > can come back to them later, but let the code paths that do want proper
> > error handling to continue.  If we add aborts into the leaf function,
> > then any calling path that wants to be robust against an error needs to
> > fully sanitize the input itself, at which point we have different
> > drivers sanitizing in different ways, all building up walls to protect
> > themselves from the time bombs in these leaf functions.  It's crazy.
> 
> It depends on the kind of error in the leaf function.
> 
> I suspect we're talking past each other because we got different kinds
> of errors in mind.
> 
> Programming is impossible without things like preconditions,
> postconditions, invariants.
> 
> If a section of code is entered when its precondition doesn't hold,
> we're toast.  This is the archetypical programming error.
> 
> If it can actually happen, the program is incorrect, and needs fixing.
> 
> Checking preconditions is often (but not always) practical.  In my
> opinion, checking is good practice, and the proper way to check is
> assert().  Makes the incorrect program fail before it can do further
> damage, and helps with finding the programming error.
> 
> A preconditions is part of the contract between a function and its
> users.  An strong precondition can make the function's job easier, but
> that's no use if the resulting function is inconvenient to use.  On the
> other hand, complicating the function to get a weaker precondition
> nobody actually needs is just as dumb.
> 
> Returning an error is *not* checking preconditions.  Remember, if the
> precondition doesn't hold, we're toast.  If we're toast when we return
> an error, we're clearly doing it wrong.
> 
> You are arguing for weaker preconditions.  I'm not actually disagreeing
> with you!  I'm merely expressing my opinion that checking preconditions
> with assert() is a good idea.

I have a fairly strong dislike for asserts in qemu, and although I'm not
always consistent, my reasoning is mainly to do with asserts once a guest
is running.

Let's imagine you have a happily running guest and then you try and do
something new and complex (e.g. hotplug a vfio-device); now let's say that
new thing has something very broken about it, do you really want the previously
running guest to die?

My view is it can very much depend on how broken you think the
world is; you've got to remember that crashing at this point
is going to lose the user a VM, and that could mean losing
data - so at that point you have to make a decision about whether
your lack of confidence in the state of the VM due to the failed
precondition is worse than your knowledge that the VM is going to fail.

Perhaps giving the user an error and disabling the device lets
the admin gracefully shut down the VM and walk away with all
their data intact.

So I wouldn't argue for weaker preconditions, just what the
result is if the precondition fails.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Markus Armbruster Oct. 4, 2016, 9:33 a.m. UTC | #9
"Dr. David Alan Gilbert" <dgilbert@redhat.com> writes:

> * Markus Armbruster (armbru@redhat.com) wrote:
>> Alex Williamson <alex.williamson@redhat.com> writes:
>> 
>> > On Thu, 29 Sep 2016 15:11:27 +0200
>> > Markus Armbruster <armbru@redhat.com> wrote:
>> >
>> >> Alex Williamson <alex.williamson@redhat.com> writes:
>> >> 
>> >> > On Tue, 13 Sep 2016 08:16:20 +0200
>> >> > Markus Armbruster <armbru@redhat.com> wrote:
>> >> >  
>> >> >> Cc: Alex for device assignment expertise.
>> >> >> 
>> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> >> >>   
>> >> >> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:    
>> >> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
>> >> >> >>    
>> >> >> >>> The input parameters is used for creating the msix capable device, so
>> >> >> >>> they must obey the PCI spec, or else, it should be programming error.    
>> >> >> >>
>> >> >> >> True when the the parameters come from a device model attempting to
>> >> >> >> define a PCI device violating the spec.  But what if the parameters come
>> >> >> >> from an actual PCI device violating the spec, via device assignment?    
>> >> >> >
>> >> >> > Before the patch, on invalid param, the vfio behaviour is:
>> >> >> >   error_report("vfio: msix_init failed");
>> >> >> >   then, device create fail.
>> >> >> >
>> >> >> > After the patch, its behaviour is:
>> >> >> >   asserted.
>> >> >> >
>> >> >> > Do you mean we should still report some useful info to user on invalid
>> >> >> > params?    
>> >> >> 
>> >> >> In the normal case, asking msix_init() to create MSI-X that are out of
>> >> >> spec is a programming error: the code that does it is broken and needs
>> >> >> fixing.
>> >> >> 
>> >> >> Device assignment might be the exception: there, the parameters for
>> >> >> msix_init() come from the assigned device, not the program.  If they
>> >> >> violate the spec, the device is broken.  This wouldn't be a programming
>> >> >> error.  Alex, can this happen?
>> >> >> 
>> >> >> If yes, we may want to handle it by failing device assignment.  
>> >> >
>> >> >
>> >> > Generally, I think the entire premise of these sorts of patches is
>> >> > flawed.  We take a working error path that allows a driver to robustly
>> >> > abort on unexpected date and turn it into a time bomb.  Often the
>> >> > excuse for this is that "error handling is hard".  Tough.  Now a
>> >> > hot-add of a device that triggers this changes from a simple failure to
>> >> > a denial of service event.  Furthermore, we base that time bomb on our
>> >> > interpretation of the spec, which we can only validate against in-tree
>> >> > devices.
>> >> >
>> >> > We have actually had assigned devices that fail the sanity test here,
>> >> > there's a quirk in vfio_msix_early_setup() for a Chelsio device with
>> >> > this bug.  Do we really want user experiencing aborts when a simple
>> >> > device initialization failure is sufficient?
>> >> >
>> >> > Generally abort code paths like this cause me to do my own sanity
>> >> > testing, which is really poor practice since we should have that sanity
>> >> > testing in the common code.  Thanks,  
>> >> 
>> >> I prefer to assert on programming error, because 1. it does double duty
>> >> as documentation, 2. error handling of impossible conditions is commonly
>> >> wrong, and 3. assertion failures have a much better chance to get the
>> >> program fixed.  Even when presence of a working error path kills 2., the
>> >> other two make me stick to assertions.
>> >
>> > So we're looking at:
>> >
>> >> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
>> >> -        return -EINVAL;
>> >> -    }
>> >
>> > vs
>> >
>> >> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
>> >
>> > How do you argue that one of these provides better self documentation
>> > than the other?
>> 
>> The first one says "this can happen, and when it does, the function
>> fails cleanly."  For a genuine programming error, this is in part
>> misleading.
>> 
>> The second one says "I assert this can't happen.  We'd be toast if I was
>> wrong."
>> 
>> > The assert may have a better chance of getting fixed, but it's because
>> > the existence of the assert itself exposes a vulnerability in the code.
>> > Which would you rather have in production, a VMM that crashes on the
>> > slightest deviance from the input it expects or one that simply errors
>> > the faulting code path and continues?
>> 
>> Invalid input to a program should never be treated as programming error.
>> 
>> > Error handling is hard, which is why we need to look at it as a
>> > collection of smaller problems.  We return an error at a leaf function
>> > and let callers of that function decide how to handle it.  If some of
>> > those callers don't want to deal with error handling, abort there, we
>> > can come back to them later, but let the code paths that do want proper
>> > error handling to continue.  If we add aborts into the leaf function,
>> > then any calling path that wants to be robust against an error needs to
>> > fully sanitize the input itself, at which point we have different
>> > drivers sanitizing in different ways, all building up walls to protect
>> > themselves from the time bombs in these leaf functions.  It's crazy.
>> 
>> It depends on the kind of error in the leaf function.
>> 
>> I suspect we're talking past each other because we got different kinds
>> of errors in mind.
>> 
>> Programming is impossible without things like preconditions,
>> postconditions, invariants.
>> 
>> If a section of code is entered when its precondition doesn't hold,
>> we're toast.  This is the archetypical programming error.
>> 
>> If it can actually happen, the program is incorrect, and needs fixing.
>> 
>> Checking preconditions is often (but not always) practical.  In my
>> opinion, checking is good practice, and the proper way to check is
>> assert().  Makes the incorrect program fail before it can do further
>> damage, and helps with finding the programming error.
>> 
>> A preconditions is part of the contract between a function and its
>> users.  An strong precondition can make the function's job easier, but
>> that's no use if the resulting function is inconvenient to use.  On the
>> other hand, complicating the function to get a weaker precondition
>> nobody actually needs is just as dumb.
>> 
>> Returning an error is *not* checking preconditions.  Remember, if the
>> precondition doesn't hold, we're toast.  If we're toast when we return
>> an error, we're clearly doing it wrong.
>> 
>> You are arguing for weaker preconditions.  I'm not actually disagreeing
>> with you!  I'm merely expressing my opinion that checking preconditions
>> with assert() is a good idea.
>
> I have a fairly strong dislike for asserts in qemu, and although I'm not
> always consistent, my reasoning is mainly to do with asserts once a guest
> is running.
>
> Lets imagine you have a happily running guest and then you try and do
> something new and complex (e.g. hotplug a vfio-device); now lets say that
> new thing has something very broken about it, do you really want the previously
> running guest to die?

If a precondition doesn't hold, we're toast.  The best we can do is
crash before we mess up things further.

A problematic condition we can safely recover from can be made an error
condition.

I think the crux of our misunderstandings (I hesitate to call it an
argument) is confusing recoverable error conditions with violated
preconditions.  We all agree (violently, perhaps) that assert() is not
an acceptable error handling mechanism.

> My view is it can very much depend on how broken you think the
> world is; you've got to remember that crashing at this point
> is going to lose the user a VM, and that could mean losing
> data - so at that point you have to make a decision about whether
> your lack of confidence in the state of the VM due to the failed
> precondition is worse than your knowledge that the VM is going to fail.
>
> Perhaps giving the user an error and disabling the device lets
> the admin gravefully shutdown the VM and walk away with all
> their data intact.

This is risky business unless you can prove the problematic condition is
safely isolated.  To elaborate on your device example: say some logic
error in device emulation code put the device instance in some broken
state.  If you detect that before the device could mess up anything
else, fencing the device is safe.  But if device state is borked because
some other code overran an array, continuing risks making things worse.
Crashing the guest is bad.  Letting it first overwrite good data with
bad data is worse.

Sadly, such proof is hardly ever possible in unrestricted C.  So we're
down to probabilities and tradeoffs.

I'd reject a claim that once the guest is running the tradeoffs *always*
favour trying to hobble on.

If you want a less bleak isolation and recovery story, check out Erlang.
Note that its "let it crash" philosophy is very much in accordance with
my views on what can safely be done after detecting a programming error
/ violated precondition.

> So I wouldn't argue for weaker preconditions, just what the
> result is if the precondition fails.

I respectfully disagree with your use of the concept "precondition".
Dr. David Alan Gilbert Oct. 4, 2016, 11:19 a.m. UTC | #10
* Markus Armbruster (armbru@redhat.com) wrote:
> "Dr. David Alan Gilbert" <dgilbert@redhat.com> writes:
> 
> > * Markus Armbruster (armbru@redhat.com) wrote:
> >> Alex Williamson <alex.williamson@redhat.com> writes:
> >> 
> >> > On Thu, 29 Sep 2016 15:11:27 +0200
> >> > Markus Armbruster <armbru@redhat.com> wrote:
> >> >
> >> >> Alex Williamson <alex.williamson@redhat.com> writes:
> >> >> 
> >> >> > On Tue, 13 Sep 2016 08:16:20 +0200
> >> >> > Markus Armbruster <armbru@redhat.com> wrote:
> >> >> >  
> >> >> >> Cc: Alex for device assignment expertise.
> >> >> >> 
> >> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >> >> >>   
> >> >> >> > On 09/12/2016 09:29 PM, Markus Armbruster wrote:    
> >> >> >> >> Cao jin <caoj.fnst@cn.fujitsu.com> writes:
> >> >> >> >>    
> >> >> >> >>> The input parameters is used for creating the msix capable device, so
> >> >> >> >>> they must obey the PCI spec, or else, it should be programming error.    
> >> >> >> >>
> >> >> >> >> True when the the parameters come from a device model attempting to
> >> >> >> >> define a PCI device violating the spec.  But what if the parameters come
> >> >> >> >> from an actual PCI device violating the spec, via device assignment?    
> >> >> >> >
> >> >> >> > Before the patch, on invalid param, the vfio behaviour is:
> >> >> >> >   error_report("vfio: msix_init failed");
> >> >> >> >   then, device create fail.
> >> >> >> >
> >> >> >> > After the patch, its behaviour is:
> >> >> >> >   asserted.
> >> >> >> >
> >> >> >> > Do you mean we should still report some useful info to user on invalid
> >> >> >> > params?    
> >> >> >> 
> >> >> >> In the normal case, asking msix_init() to create MSI-X that are out of
> >> >> >> spec is a programming error: the code that does it is broken and needs
> >> >> >> fixing.
> >> >> >> 
> >> >> >> Device assignment might be the exception: there, the parameters for
> >> >> >> msix_init() come from the assigned device, not the program.  If they
> >> >> >> violate the spec, the device is broken.  This wouldn't be a programming
> >> >> >> error.  Alex, can this happen?
> >> >> >> 
> >> >> >> If yes, we may want to handle it by failing device assignment.  
> >> >> >
> >> >> >
> >> >> > Generally, I think the entire premise of these sorts of patches is
> >> >> > flawed.  We take a working error path that allows a driver to robustly
> >> >> > abort on unexpected date and turn it into a time bomb.  Often the
> >> >> > excuse for this is that "error handling is hard".  Tough.  Now a
> >> >> > hot-add of a device that triggers this changes from a simple failure to
> >> >> > a denial of service event.  Furthermore, we base that time bomb on our
> >> >> > interpretation of the spec, which we can only validate against in-tree
> >> >> > devices.
> >> >> >
> >> >> > We have actually had assigned devices that fail the sanity test here,
> >> >> > there's a quirk in vfio_msix_early_setup() for a Chelsio device with
> >> >> > this bug.  Do we really want user experiencing aborts when a simple
> >> >> > device initialization failure is sufficient?
> >> >> >
> >> >> > Generally abort code paths like this cause me to do my own sanity
> >> >> > testing, which is really poor practice since we should have that sanity
> >> >> > testing in the common code.  Thanks,  
> >> >> 
> >> >> I prefer to assert on programming error, because 1. it does double duty
> >> >> as documentation, 2. error handling of impossible conditions is commonly
> >> >> wrong, and 3. assertion failures have a much better chance to get the
> >> >> program fixed.  Even when presence of a working error path kills 2., the
> >> >> other two make me stick to assertions.
> >> >
> >> > So we're looking at:
> >> >
> >> >> -    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
> >> >> -        return -EINVAL;
> >> >> -    }
> >> >
> >> > vs
> >> >
> >> >> +    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
> >> >
> >> > How do you argue that one of these provides better self documentation
> >> > than the other?
> >> 
> >> The first one says "this can happen, and when it does, the function
> >> fails cleanly."  For a genuine programming error, this is in part
> >> misleading.
> >> 
> >> The second one says "I assert this can't happen.  We'd be toast if I was
> >> wrong."
> >> 
> >> > The assert may have a better chance of getting fixed, but it's because
> >> > the existence of the assert itself exposes a vulnerability in the code.
> >> > Which would you rather have in production, a VMM that crashes on the
> >> > slightest deviance from the input it expects or one that simply errors
> >> > the faulting code path and continues?
> >> 
> >> Invalid input to a program should never be treated as programming error.
> >> 
> >> > Error handling is hard, which is why we need to look at it as a
> >> > collection of smaller problems.  We return an error at a leaf function
> >> > and let callers of that function decide how to handle it.  If some of
> >> > those callers don't want to deal with error handling, abort there, we
> >> > can come back to them later, but let the code paths that do want proper
> >> > error handling to continue.  If we add aborts into the leaf function,
> >> > then any calling path that wants to be robust against an error needs to
> >> > fully sanitize the input itself, at which point we have different
> >> > drivers sanitizing in different ways, all building up walls to protect
> >> > themselves from the time bombs in these leaf functions.  It's crazy.
> >> 
> >> It depends on the kind of error in the leaf function.
> >> 
> >> I suspect we're talking past each other because we got different kinds
> >> of errors in mind.
> >> 
> >> Programming is impossible without things like preconditions,
> >> postconditions, invariants.
> >> 
> >> If a section of code is entered when its precondition doesn't hold,
> >> we're toast.  This is the archetypical programming error.
> >> 
> >> If it can actually happen, the program is incorrect, and needs fixing.
> >> 
> >> Checking preconditions is often (but not always) practical.  In my
> >> opinion, checking is good practice, and the proper way to check is
> >> assert().  Makes the incorrect program fail before it can do further
> >> damage, and helps with finding the programming error.
> >> 
> >> A preconditions is part of the contract between a function and its
> >> users.  An strong precondition can make the function's job easier, but
> >> that's no use if the resulting function is inconvenient to use.  On the
> >> other hand, complicating the function to get a weaker precondition
> >> nobody actually needs is just as dumb.
> >> 
> >> Returning an error is *not* checking preconditions.  Remember, if the
> >> precondition doesn't hold, we're toast.  If we're toast when we return
> >> an error, we're clearly doing it wrong.
> >> 
> >> You are arguing for weaker preconditions.  I'm not actually disagreeing
> >> with you!  I'm merely expressing my opinion that checking preconditions
> >> with assert() is a good idea.
> >
> > I have a fairly strong dislike for asserts in qemu, and although I'm not
> > always consistent, my reasoning is mainly to do with asserts once a guest
> > is running.
> >
> > Lets imagine you have a happily running guest and then you try and do
> > something new and complex (e.g. hotplug a vfio-device); now lets say that
> > new thing has something very broken about it, do you really want the previously
> > running guest to die?
> 
> If a precondition doesn't hold, we're toast.  The best we can do is
> crash before we mess up things further.
> 
> A problematic condition we can safely recover from can be made an error
> condition.
> 
> I think the crux of our misunderstandings (I hesitate to call it an
> argument) is confusing recoverable error conditions with violated
> preconditions.  We all agree (violently, perhaps) that assert() is not
> an acceptable error handling mechanism.

I think perhaps part of the problem may be trying to place all types of screwups
into only two categories: 'errors' and 'violations of preconditions'.
Consider some cases:
   a) The user tries to specify an out of range value to a setting;
      an error, probably not fatal (except if it was commandline)

   b) An inconsistency is found in the MMU state
      violation of precondition, fatal.

   c) A host device used for passthrough does something which according
      to the USB/PCI/SCSI specs is illegal
      violation of precondition - but you probably don't want that
      to be fatal.

   d) An inconsistency is found in a specific device emulation
      violation of precondition - but I might not want that to be fatal.

I think we agree on (a), (b), disagree on (d), and I think this
case might be (c).

> > My view is it can very much depend on how broken you think the
> > world is; you've got to remember that crashing at this point
> > is going to lose the user a VM, and that could mean losing
> > data - so at that point you have to make a decision about whether
> > your lack of confidence in the state of the VM due to the failed
> > precondition is worse than your knowledge that the VM is going to fail.
> >
> > Perhaps giving the user an error and disabling the device lets
> > the admin gravefully shutdown the VM and walk away with all
> > their data intact.
> 
> This is risky business unless you can prove the problematic condition is
> safely isolated.  To elaborate on your device example: say some logic
> error in device emulation code put the device instance in some broken
> state.  If you detect that before the device could mess up anything
> else, fencing the device is safe.  But if device state is borked because
> some other code overran an array, continuing risks making things worse.
> Crashing the guest is bad.  Letting it first overwrite good data with
> bad data is worse.
> 
> Sadly, such proof is hardly ever possible in unrestricted C.  So we're
> down to probabilities and tradeoffs.

Agreed.

> I'd reject a claim that once the guest is running the tradeoffs *always*
> favour trying to hobble on.

Agreed; this is the difference between my case (b) and (d).
My preference is to fail the device in question if it's not a core device;
that way if it's a disk you can't write any more to it to mess its contents
up further, and you won't read bad data from it - that's about as much
isolation as you're going to get.

However, some of it is also down to our expectations of the stability of the
code in question - if the inconsistency is in some code that you know
is complex, probably with untested cases, and which isn't core to the VM
continuing (e.g. outgoing migration or hotplugging a host device) then
I believe it's OK to issue a scary warning, disable/error the device
in question and hobble on.

I'd say it's OK to argue that a piece of core code should be heavily
isolated from the bits you think are still a bit touchy - so it's
reasonable to me to have an assert in some core code (b) as long
as it's possible to stop any of the (c) and (d) cases triggering it
if they're coded defensively enough to error out before that assert
could be hit.  But then again someone might worry they just can't
deal with all the types of screwup (c) might present.

> If you want a less bleak isolation and recovery story, check out Erlang.
> Note that its "let it crash" philosophy is very much in accordance with
> my views on what can safely be done after detecting a programming error
> / violated precondition.
> 
> > So I wouldn't argue for weaker preconditions, just what the
> > result is if the precondition fails.
> 
> I respectfully disagree with your use of the concept "precondition".

I generally avoid using the word precondition; it's too formal for my
liking given the level we're programming at and the lack of any formal
defs.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Cao jin Oct. 6, 2016, 7 a.m. UTC | #11
On 09/30/2016 10:06 PM, Markus Armbruster wrote:
> Alex Williamson <alex.williamson@redhat.com> writes:
>


>
> Once there's a need to handle a certain condition as an error, we should
> do that, no argument.  This also provides a way to test the error path.
>
> However, I wouldn't buy an argument that preconditions should be made as
> weak as possible in leaf functions (let alone always) regardless of the
> cost in complexity, and non-testability of error paths.  I'm strictly a
> pay as you go person.
>
> Back to the problem at hand.  Cao jin, would you be willing to fix
> msi_init()?
>
>

Sorry for the holiday delay. Sure, will fix it, and add it as a new 
patch in this series.

Patch

diff --git a/hw/pci/msix.c b/hw/pci/msix.c
index 0ec1cb1..384a29d 100644
--- a/hw/pci/msix.c
+++ b/hw/pci/msix.c
@@ -253,9 +253,7 @@  int msix_init(struct PCIDevice *dev, unsigned short nentries,
         return -ENOTSUP;
     }
 
-    if (nentries < 1 || nentries > PCI_MSIX_FLAGS_QSIZE + 1) {
-        return -EINVAL;
-    }
+    assert(nentries >= 1 && nentries <= PCI_MSIX_FLAGS_QSIZE + 1);
 
     table_size = nentries * PCI_MSIX_ENTRY_SIZE;
     pba_size = QEMU_ALIGN_UP(nentries, 64) / 8;
@@ -266,7 +264,7 @@  int msix_init(struct PCIDevice *dev, unsigned short nentries,
         table_offset + table_size > memory_region_size(table_bar) ||
         pba_offset + pba_size > memory_region_size(pba_bar) ||
         (table_offset | pba_offset) & PCI_MSIX_FLAGS_BIRMASK) {
-        return -EINVAL;
+        assert(0);
     }
 
     cap = pci_add_capability(dev, PCI_CAP_ID_MSIX, cap_pos, MSIX_CAP_LENGTH);