
[PULL,30/63] vhost-user: call VHOST_USER_SET_VRING_ENABLE synchronously

Message ID c1ac504f912c8975a764290187bef9fa8bedb8e0.1696408966.git.mst@redhat.com
State New
Series [PULL,01/63] pci: SLT must be RO

Commit Message

Michael S. Tsirkin Oct. 4, 2023, 8:44 a.m. UTC
From: Laszlo Ersek <lersek@redhat.com>

(1) The virtio-1.2 specification
<http://docs.oasis-open.org/virtio/virtio/v1.2/virtio-v1.2.html> writes:

> 3     General Initialization And Device Operation
> 3.1   Device Initialization
> 3.1.1 Driver Requirements: Device Initialization
>
> [...]
>
> 7. Perform device-specific setup, including discovery of virtqueues for
>    the device, optional per-bus setup, reading and possibly writing the
>    device’s virtio configuration space, and population of virtqueues.
>
> 8. Set the DRIVER_OK status bit. At this point the device is “live”.

and

> 4         Virtio Transport Options
> 4.1       Virtio Over PCI Bus
> 4.1.4     Virtio Structure PCI Capabilities
> 4.1.4.3   Common configuration structure layout
> 4.1.4.3.2 Driver Requirements: Common configuration structure layout
>
> [...]
>
> The driver MUST configure the other virtqueue fields before enabling the
> virtqueue with queue_enable.
>
> [...]

(Identical statements are present in virtio-1.0, at
<http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html>.)

These together mean that the following sub-sequence of steps is valid for
a virtio-1.0 guest driver:

(1.1) set "queue_enable" for the needed queues as the final part of device
initialization step (7),

(1.2) set DRIVER_OK in step (8),

(1.3) immediately start sending virtio requests to the device (a
register-level sketch follows).
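
For illustration, here is a minimal register-level sketch of steps
(1.1)-(1.3), roughly as a virtio-pci driver would perform them. The
structure below elides every common configuration field not used here, and
the names are illustrative only -- this is not OVMF code:

    #include <stdint.h>

    #define VIRTIO_STATUS_DRIVER_OK 0x04          /* DRIVER_OK status bit */

    /* Only the fields touched below; the real layout has more, per the spec. */
    struct virtio_common_cfg {
        volatile uint16_t queue_select;
        volatile uint16_t queue_enable;
        volatile uint8_t  device_status;
    };

    static void bring_up_queue(struct virtio_common_cfg *cfg,
                               volatile uint16_t *notify_addr, uint16_t q)
    {
        cfg->queue_select   = q;
        cfg->queue_enable   = 1;                          /* (1.1) */
        cfg->device_status |= VIRTIO_STATUS_DRIVER_OK;    /* (1.2) */
        /* (1.3): after placing request descriptors in the ring... */
        *notify_addr = q;                                 /* ...kick the device */
    }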

(2) When vhost-user is enabled, and the VHOST_USER_F_PROTOCOL_FEATURES
special virtio feature is negotiated, then virtio rings start in a disabled
state, according to
<https://qemu-project.gitlab.io/qemu/interop/vhost-user.html#ring-states>.
In this case, explicit VHOST_USER_SET_VRING_ENABLE messages are needed for
enabling vrings.
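
For illustration, a simplified sketch of that control plane message
follows. The structure and helper below are assumptions made for the
sketch (QEMU's actual types differ); the point is that enabling a ring is
an explicit message, carrying the ring index and an enable flag, written
to the vhost-user unix domain socket:

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define VHOST_USER_SET_VRING_ENABLE 18u   /* request code, vhost-user spec */

    struct vhost_user_hdr {
        uint32_t request;
        uint32_t flags;                       /* version bits, NEED_REPLY, REPLY */
        uint32_t size;                        /* payload size in bytes */
    };

    struct vhost_user_vring_state {
        uint32_t index;                       /* ring index */
        uint32_t num;                         /* 1 = enable, 0 = disable */
    };

    static int send_set_vring_enable(int sock_fd, uint32_t ring, int enable)
    {
        struct {
            struct vhost_user_hdr          hdr;
            struct vhost_user_vring_state  state;
        } msg;

        memset(&msg, 0, sizeof msg);
        msg.hdr.request = VHOST_USER_SET_VRING_ENABLE;
        msg.hdr.flags   = 0x1;                /* protocol version 1 */
        msg.hdr.size    = sizeof msg.state;
        msg.state.index = ring;
        msg.state.num   = enable ? 1 : 0;

        return write(sock_fd, &msg, sizeof msg) == sizeof msg ? 0 : -1;
    }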

Therefore setting "queue_enable" from the guest (1.1) is a *control plane*
operation, which travels from the guest through QEMU to the vhost-user
backend, using a unix domain socket.

Whereas sending a virtio request (1.3) is a *data plane* operation, which
evades QEMU -- it travels from guest to the vhost-user backend via
eventfd.

This means that steps (1.1) and (1.3) travel through different channels,
and their relative order can be reversed, as perceived by the vhost-user
backend.

That's exactly what happens when OVMF's virtiofs driver (VirtioFsDxe) runs
against the Rust-language virtiofsd version 1.7.2 (which uses version
0.10.1 of the vhost-user-backend crate, and version 0.8.1 of the vhost
crate).

Namely, when VirtioFsDxe binds a virtiofs device, it goes through the
device initialization steps (i.e., control plane operations), and
immediately sends a FUSE_INIT request too (i.e., performs a data plane
operation). In the Rust-language virtiofsd, this creates a race between
two components that run *concurrently*, i.e., in different threads or
processes:

- Control plane, handling vhost-user protocol messages:

  The "VhostUserSlaveReqHandlerMut::set_vring_enable" method
  [crates/vhost-user-backend/src/handler.rs] handles
  VHOST_USER_SET_VRING_ENABLE messages, and updates each vring's "enabled"
  flag according to the message processed.

- Data plane, handling virtio / FUSE requests:

  The "VringEpollHandler::handle_event" method
  [crates/vhost-user-backend/src/event_loop.rs] handles the incoming
  virtio / FUSE request, consuming the virtio kick at the same time. If
  the vring's "enabled" flag is set, the virtio / FUSE request is
  processed genuinely. If the vring's "enabled" flag is clear, then the
  virtio / FUSE request is discarded (a C-style sketch of this check
  follows the list).
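
A C-style analogue of that data plane check (the real code is the Rust
method named above; the types and names here are assumptions made for the
sketch) shows why losing the race is fatal: the kick is consumed whether
or not the ring is enabled, so a discarded request is never retried:

    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    struct vring_ctx {
        int  kick_fd;   /* eventfd signalled by the guest's notification write */
        bool enabled;   /* set by the control plane on SET_VRING_ENABLE */
    };

    static void handle_kick(struct vring_ctx *vring)
    {
        uint64_t kicks;

        /* The kick is consumed regardless of the ring state. */
        if (read(vring->kick_fd, &kicks, sizeof kicks) != sizeof kicks) {
            return;
        }

        if (!vring->enabled) {
            /* The control plane has not processed SET_VRING_ENABLE yet:
             * the request (e.g. FUSE_INIT) is dropped, nothing re-kicks
             * the ring, and the guest waits forever for a response. */
            return;
        }

        /* ... genuine request processing would happen here ... */
    }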

Note that OVMF enables the queue *first*, and sends FUSE_INIT *second*.
However, if the data plane processor in virtiofsd wins the race, then it
sees the FUSE_INIT *before* the control plane processor has taken notice
of VHOST_USER_SET_VRING_ENABLE and green-lit the queue for the data plane
processor. Therefore the latter drops FUSE_INIT on the floor, and goes
back to waiting for further virtio / FUSE requests with epoll_wait.
Meanwhile OVMF is stuck waiting for the FUSE_INIT response -- a deadlock.

The deadlock is not deterministic. OVMF hangs infrequently during first
boot; however, it almost always hangs during reboots from the UEFI shell.

The race can be "reliably masked" by inserting a very small delay -- a
single debug message -- at the top of "VringEpollHandler::handle_event",
i.e., just before the data plane processor checks the "enabled" field of
the vring. That delay suffices for the control plane processor to act upon
VHOST_USER_SET_VRING_ENABLE.

We can deterministically prevent the race in QEMU by blocking OVMF inside
step (1.1) -- i.e., in the write to the "queue_enable" register -- until
VHOST_USER_SET_VRING_ENABLE actually *completes*. That way OVMF's VCPU
cannot advance to the FUSE_INIT submission before virtiofsd's control
plane processor takes notice of the queue being enabled.

Wait for VHOST_USER_SET_VRING_ENABLE completion by:

- setting the NEED_REPLY flag on VHOST_USER_SET_VRING_ENABLE, and waiting
  for the reply, if the VHOST_USER_PROTOCOL_F_REPLY_ACK vhost-user feature
  has been negotiated, or

- performing a separate VHOST_USER_GET_FEATURES *exchange*, which requires
  a backend response regardless of VHOST_USER_PROTOCOL_F_REPLY_ACK
  (a sketch of this strategy follows).
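
The following sketch illustrates that strategy; the helper names are
assumptions standing in for QEMU's vhost-user plumbing, not the actual
functions in hw/virtio/vhost-user.c:

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed helpers, standing in for QEMU's vhost-user plumbing. */
    struct vhost_user_conn;
    bool vu_has_reply_ack(struct vhost_user_conn *conn);
    int  vu_send_set_vring_enable(struct vhost_user_conn *conn, uint32_t ring,
                                  int enable, bool need_reply);
    int  vu_wait_reply_ack(struct vhost_user_conn *conn);
    int  vu_get_features(struct vhost_user_conn *conn, uint64_t *features);

    /* Return only once the backend has observed the enable. */
    static int vu_set_vring_enable_sync(struct vhost_user_conn *conn,
                                        uint32_t ring, int enable)
    {
        int ret;

        if (vu_has_reply_ack(conn)) {
            /* NEED_REPLY on the message itself; block for the ack. */
            ret = vu_send_set_vring_enable(conn, ring, enable, true);
            return ret < 0 ? ret : vu_wait_reply_ack(conn);
        }

        /* No REPLY_ACK: send the enable, then force a round trip that any
         * backend must answer, so it cannot overtake the enable message. */
        ret = vu_send_set_vring_enable(conn, ring, enable, false);
        if (ret < 0) {
            return ret;
        }
        uint64_t features;
        return vu_get_features(conn, &features);
    }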

Cc: "Michael S. Tsirkin" <mst@redhat.com> (supporter:vhost)
Cc: Eugenio Perez Martin <eperezma@redhat.com>
Cc: German Maglione <gmaglione@redhat.com>
Cc: Liu Jiang <gerry@linux.alibaba.com>
Cc: Sergio Lopez Pascual <slp@redhat.com>
Cc: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20230830134055.106812-8-lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-user.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

Comments

Laszlo Ersek Oct. 4, 2023, 10:11 a.m. UTC | #1
On 10/4/23 10:44, Michael S. Tsirkin wrote:
> [...]

This is not the latest version (v3) of this set -- please see
<https://patchwork.ozlabs.org/project/qemu-devel/cover/20231002203221.17241-1-lersek@redhat.com/>.

Thanks,
Laszlo
Michael S. Tsirkin Oct. 4, 2023, 12:53 p.m. UTC | #2
On Wed, Oct 04, 2023 at 12:11:44PM +0200, Laszlo Ersek wrote:
> On 10/4/23 10:44, Michael S. Tsirkin wrote:
> > [...]
> 
> This is not the latest version (v3) of this set -- please see
> <https://patchwork.ozlabs.org/project/qemu-devel/cover/20231002203221.17241-1-lersek@redhat.com/>.
> 
> Thanks,
> Laszlo

Ouch. OK I will drop. Feel free to send v4 tweaking commit message - I
think you wanted to do it anyway right?
Laszlo Ersek Oct. 4, 2023, 1:28 p.m. UTC | #3
On 10/4/23 14:53, Michael S. Tsirkin wrote:
> On Wed, Oct 04, 2023 at 12:11:44PM +0200, Laszlo Ersek wrote:
>> On 10/4/23 10:44, Michael S. Tsirkin wrote:
>>> [...]
>>
>> This is not the latest version (v3) of this set -- please see
>> <https://patchwork.ozlabs.org/project/qemu-devel/cover/20231002203221.17241-1-lersek@redhat.com/>.
>>
>> Thanks,
>> Laszlo
> 
> Ouch. OK I will drop.

Thanks!

> Feel free to send v4 tweaking commit message - I
> think you wanted to do it anyway right?

The v3 series (already on list) is the latest / most recent version, and
that one already includes the intended commit message tweaks. So there's
no need for me to post another version (i.e., no need for a v4); just
please replace my patches in this PR with the v3 series.

Thanks!
Laszlo

Patch

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index ae0734d461..eb983ae295 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1214,7 +1214,21 @@  static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
             .num   = enable,
         };
 
-        ret = vhost_set_vring(dev, VHOST_USER_SET_VRING_ENABLE, &state, false);
+        /*
+         * SET_VRING_ENABLE travels from guest to QEMU to vhost-user backend /
+         * control plane thread via unix domain socket. Virtio requests travel
+         * from guest to vhost-user backend / data plane thread via eventfd.
+         * Even if the guest enables the ring first, and pushes its first virtio
+         * request second (conforming to the virtio spec), the data plane thread
+         * in the backend may see the virtio request before the control plane
+         * thread sees the queue enablement. This causes (in fact, requires) the
+         * data plane thread to discard the virtio request (it arrived on a
+         * seemingly disabled queue). To prevent this out-of-order delivery,
+         * don't let the guest proceed to pushing the virtio request until the
+         * backend control plane acknowledges enabling the queue -- IOW, pass
+         * wait_for_reply=true below.
+         */
+        ret = vhost_set_vring(dev, VHOST_USER_SET_VRING_ENABLE, &state, true);
         if (ret < 0) {
             /*
              * Restoring the previous state is likely infeasible, as well as