
[RFC,2/2] spec/vhost-user spec: Add IOMMU support

Message ID 20170411101002.28451-3-maxime.coquelin@redhat.com
State New

Commit Message

Maxime Coquelin April 11, 2017, 10:10 a.m. UTC
This patch specifies the master/slave communication to support a
device IOTLB implementation in the slave.

The vhost_iotlb_msg structure introduced for kernel backends is
re-used, keeping the design close between the two backends.

An exception is the use of the secondary channel, which enables the
slave to send IOTLB miss requests to the master.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 docs/specs/vhost-user.txt | 56 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)
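
For context, the vhost_iotlb_msg structure re-used here is the device
IOTLB message from the kernel's vhost uapi (linux/vhost.h). At the
time of this RFC it looks as follows (reproduced for reference; the
kernel headers remain the authoritative definition):

struct vhost_iotlb_msg {
    __u64 iova;
    __u64 size;
    __u64 uaddr;
#define VHOST_ACCESS_RO      0x1
#define VHOST_ACCESS_WO      0x2
#define VHOST_ACCESS_RW      0x3
    __u8 perm;
#define VHOST_IOTLB_MISS           1
#define VHOST_IOTLB_UPDATE         2
#define VHOST_IOTLB_INVALIDATE     3
#define VHOST_IOTLB_ACCESS_FAIL    4
    __u8 type;
};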

Comments

Peter Xu April 11, 2017, 1:20 p.m. UTC | #1
On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
> This patch specifies the master/slave communication to support
> device IOTLB implementation in slave.
> 
> The vhost_iotlb_msg structure introduced for kernel backends is
> re-used, making the design close between the two backends.
> 
> An exception is the use of the secondary channel to enable the
> slave to send IOTLB miss requests to the master.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  docs/specs/vhost-user.txt | 56 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 56 insertions(+)
> 
> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> index b365047..048a4d6 100644
> --- a/docs/specs/vhost-user.txt
> +++ b/docs/specs/vhost-user.txt
> @@ -97,6 +97,23 @@ Depending on the request type, payload can be:
>     log offset: offset from start of supplied file descriptor
>         where logging starts (i.e. where guest address 0 would be logged)
>  
> + * An IOTLB message
> +   ---------------------------------------------------------
> +   | iova | size | user address | permissions flags | type |
> +   ---------------------------------------------------------
> +
> +   IOVA: a 64-bit guest I/O virtual address
> +   Size: a 64-bit size
> +   User address: a 64-bit user address
> +   Permissions flags: a 8-bit bit field:
> +    - Bit 0: Read access
> +    - Bit 1: Write access
> +   Type: a 8-bit IOTLB message type:
> +    - 1: IOTLB miss
> +    - 2: IOTLB update
> +    - 3: IOTLB invalidate
> +    - 4: IOTLB access fail
> +
>  In QEMU the vhost-user message is implemented with the following struct:
>  
>  typedef struct VhostUserMsg {
> @@ -109,6 +126,7 @@ typedef struct VhostUserMsg {
>          struct vhost_vring_addr addr;
>          VhostUserMemory memory;
>          VhostUserLog log;
> +        struct vhost_iotlb_msg iotlb;
>      };
>  } QEMU_PACKED VhostUserMsg;
>  
> @@ -258,6 +276,30 @@ Once the source has finished migration, rings will be stopped by
>  the source. No further update must be done before rings are
>  restarted.
>  
> +IOMMU support
> +-------------
> +
> +When the VIRTIO_F_IOMMU_PLATFORM feature has been negotiated, the master has
> +to send IOTLB entries update & invalidation by sending VHOST_USER_IOTLB_MSG
> +requests to the slave with a struct vhost_iotlb_msg payload. For update events,
> +the iotlb payload has to be filled with the update message type (2), the I/O
> +virtual address, the size, the user virtual address, and the permissions
> +flags. For invalidation events, the iotlb payload has to be filled with the
> +update message type (3), the I/O virtual address and the size. On success, the

s/update/invalidate/?

> +slave is expected to reply with a zero payload, non-zero otherwise.

Is this ack mechanism really necessary? If not, I am not sure it is
worth keeping vhost-user and vhost-kernel aligned on this behavior. At
least dropping it would simplify the vhost-user implementation on the
QEMU side (IIUC even without introducing new functions for the
update/invalidate operations).

> +
> +When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
> +master initiated the slave to master communication channel using the
> +VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
> +failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
> +struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
> +filled with the miss message type (1), the I/O virtual address and the
> +permissions flags. For access failure event, the iotlb payload has to be
> +filled with the access failure message type (4), the I/O virtual address and
> +the permissions flags. On success, the master is expected to reply  when the
> +request has been handled (for example, on miss requests, once the device IOTLB
> +has been updated) with a zero payload, non-zero otherwise.

Failed to understand the last sentence clearly. IIUC vhost-net will
reply with an UPDATE message when a MISS message is received. Here for
vhost-user are we going to send one extra zero payload after that?

> +
>  Protocol features
>  -----------------
>  
> @@ -524,6 +566,20 @@ Message types
>        has been negotiated, and protocol feature bit VHOST_USER_PROTOCOL_F_SLAVE_REQ
>        bit is present in VHOST_USER_GET_PROTOCOL_FEATURES.
>  
> + * VHOST_USER_IOTLB_MSG
> +
> +      Id: 22
> +      Equivalent ioctl: N/A (equivalent to VHOST_IOTLB_MSG message type)
> +      Initiator: Master or slave
> +
> +      Send IOTLB messages with struct vhost_iotlb_msg as payload.
> +      Master sends such requests to update and invalidate entries in the device
> +      IOTLB. Slave sends such requests to notify of an IOTLB miss, or an IOTLB

s/of//?

> +      access failure. The recipient has to acknowledge the request with
> +      sending zero as u64 payload for success, non-zero otherwise.

Same question here...

Thanks,

> +      This request should be send only when VIRTIO_F_IOMMU_PLATFORM feature
> +      has been successfully negotiated.
> +
>  VHOST_USER_PROTOCOL_F_REPLY_ACK:
>  -------------------------------
>  The original vhost-user specification only demands replies for certain
> -- 
> 2.9.3
>
Maxime Coquelin April 11, 2017, 3:16 p.m. UTC | #2
On 04/11/2017 03:20 PM, Peter Xu wrote:
> On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
>> This patch specifies the master/slave communication to support
>> device IOTLB implementation in slave.
>>
>> The vhost_iotlb_msg structure introduced for kernel backends is
>> re-used, making the design close between the two backends.
>>
>> An exception is the use of the secondary channel to enable the
>> slave to send IOTLB miss requests to the master.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>  docs/specs/vhost-user.txt | 56 +++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 56 insertions(+)
>>
>> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
>> index b365047..048a4d6 100644
>> --- a/docs/specs/vhost-user.txt
>> +++ b/docs/specs/vhost-user.txt
>> @@ -97,6 +97,23 @@ Depending on the request type, payload can be:
>>     log offset: offset from start of supplied file descriptor
>>         where logging starts (i.e. where guest address 0 would be logged)
>>
>> + * An IOTLB message
>> +   ---------------------------------------------------------
>> +   | iova | size | user address | permissions flags | type |
>> +   ---------------------------------------------------------
>> +
>> +   IOVA: a 64-bit guest I/O virtual address
>> +   Size: a 64-bit size
>> +   User address: a 64-bit user address
>> +   Permissions flags: a 8-bit bit field:
>> +    - Bit 0: Read access
>> +    - Bit 1: Write access
>> +   Type: a 8-bit IOTLB message type:
>> +    - 1: IOTLB miss
>> +    - 2: IOTLB update
>> +    - 3: IOTLB invalidate
>> +    - 4: IOTLB access fail
>> +
>>  In QEMU the vhost-user message is implemented with the following struct:
>>
>>  typedef struct VhostUserMsg {
>> @@ -109,6 +126,7 @@ typedef struct VhostUserMsg {
>>          struct vhost_vring_addr addr;
>>          VhostUserMemory memory;
>>          VhostUserLog log;
>> +        struct vhost_iotlb_msg iotlb;
>>      };
>>  } QEMU_PACKED VhostUserMsg;
>>
>> @@ -258,6 +276,30 @@ Once the source has finished migration, rings will be stopped by
>>  the source. No further update must be done before rings are
>>  restarted.
>>
>> +IOMMU support
>> +-------------
>> +
>> +When the VIRTIO_F_IOMMU_PLATFORM feature has been negotiated, the master has
>> +to send IOTLB entries update & invalidation by sending VHOST_USER_IOTLB_MSG
>> +requests to the slave with a struct vhost_iotlb_msg payload. For update events,
>> +the iotlb payload has to be filled with the update message type (2), the I/O
>> +virtual address, the size, the user virtual address, and the permissions
>> +flags. For invalidation events, the iotlb payload has to be filled with the
>> +update message type (3), the I/O virtual address and the size. On success, the
>
> s/update/invalidate/?

Indeed.

>
>> +slave is expected to reply with a zero payload, non-zero otherwise.
>
> Is this ack mechanism really necessary? If not, not sure it'll be nice
> to keep vhost-user/vhost-kernel aligned on this behavior. At least
> that'll simplify vhost-user implementation on QEMU side (iiuc even
> without introducing new functions for update/invalidate operations).

I think this is necessary, and it won't complicate the vhost-user
implementation on the QEMU side, since this mechanism is already
widely used (see the reply-ack feature).

This reply-ack mechanism is used to obtain a behaviour closer to the
kernel backend. Indeed, when QEMU sends a vhost_msg to the kernel
backend, it is blocked in the write() while the message is being
processed in the kernel. With a user backend, QEMU is unblocked from
the write() as soon as the backend has read the message, before it
has been processed.
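
To illustrate the difference, below is a minimal sketch of the master
side under this proposal. This is not QEMU code: the struct layouts
follow the spec text above, the helper name is made up, and a real
implementation would handle partial reads/writes.

#include <stdint.h>
#include <unistd.h>

/* Payload layout as described in the spec text above (sketch only). */
struct vhost_iotlb_msg {
    uint64_t iova;
    uint64_t size;
    uint64_t uaddr;
    uint8_t  perm;   /* bit 0: read access, bit 1: write access */
    uint8_t  type;   /* 1 miss, 2 update, 3 invalidate, 4 access fail */
};

/* NB: QEMU declares the message packed (QEMU_PACKED); without packing,
 * padding before the union would change the wire layout and sizes. */
struct vhost_user_msg {
    uint32_t request;                 /* VHOST_USER_IOTLB_MSG = 22 */
    uint32_t flags;                   /* protocol version in low bits */
    uint32_t size;                    /* payload size */
    union {
        uint64_t u64;                 /* ack payload: zero = success */
        struct vhost_iotlb_msg iotlb;
    } payload;
};

#define VHOST_USER_HDR_SIZE 12        /* request + flags + size */

/* Hypothetical helper: send an IOTLB invalidate, then block until the
 * slave replies, emulating the synchronous write() of vhost-kernel. */
static int master_iotlb_invalidate(int sock, uint64_t iova, uint64_t size)
{
    struct vhost_user_msg msg = {
        .request = 22,                            /* VHOST_USER_IOTLB_MSG */
        .flags = 0x1,                             /* protocol version 1 */
        .size = sizeof(msg.payload.iotlb),
        .payload.iotlb = { .iova = iova, .size = size, .type = 3 },
    };
    struct vhost_user_msg reply;
    ssize_t n;

    if (write(sock, &msg, VHOST_USER_HDR_SIZE + msg.size) < 0)
        return -1;
    /* write() returning only means the slave has *read* the message;
     * block on the reply to know it has been *processed*. */
    n = read(sock, &reply, sizeof(reply));
    if (n < (ssize_t)(VHOST_USER_HDR_SIZE + sizeof(uint64_t)))
        return -1;
    return reply.payload.u64 == 0 ? 0 : -1;
}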


>> +
>> +When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
>> +master initiated the slave to master communication channel using the
>> +VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
>> +failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
>> +struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
>> +filled with the miss message type (1), the I/O virtual address and the
>> +permissions flags. For access failure event, the iotlb payload has to be
>> +filled with the access failure message type (4), the I/O virtual address and
>> +the permissions flags. On success, the master is expected to reply  when the
>> +request has been handled (for example, on miss requests, once the device IOTLB
>> +has been updated) with a zero payload, non-zero otherwise.
>
> Failed to understand the last sentence clearly. IIUC vhost-net will
> reply with an UPDATE message when a MISS message is received. Here for
> vhost-user are we going to send one extra zero payload after that?

Not exactly. There are two channels: one for QEMU-to-backend requests
(channel A), one for backend-to-QEMU requests (channel B).

The backend may be multi-threaded (like DPDK): one thread handles
QEMU-initiated requests (channel A), the others handle packet
processing (e.g. one for Rx, one for Tx).

The processing threads need to translate IOVA addresses by searching
the IOTLB cache. On a miss, a thread sends an IOTLB miss request on
channel B, and then waits for the ack/nack. On ack, it can search the
IOTLB cache again and find the translation.

On the QEMU side, when the thread handling channel B requests receives
the IOTLB miss message, it gets the translation and sends an IOTLB
update message on channel A. It then waits for the ack from the
backend, meaning that the IOTLB cache has been updated, and replies
with an ack on channel B.

This way, the backend has only one writer to the IOTLB cache, and
multiple readers.
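
In backend pseudo-code, the processing-thread side of that flow could
look like the sketch below (helper names are illustrative, not taken
from DPDK or any real backend):

#include <stdint.h>
#include <stddef.h>

struct backend {
    int slave_fd;                      /* channel B socket */
    /* ... IOTLB cache, vrings, etc. ... */
};

/* Assumed helpers: a read-side lookup in the shared IOTLB cache, and
 * a blocking miss request on channel B that returns 0 on ack. */
void *iotlb_cache_lookup(struct backend *be, uint64_t iova,
                         uint64_t len, uint8_t perm);
int iotlb_miss_request_and_wait(int slave_fd, uint64_t iova, uint8_t perm);

static void *translate_iova(struct backend *be, uint64_t iova,
                            uint64_t len, uint8_t perm)
{
    void *uaddr;

    /* Processing threads only read the cache; the channel A thread is
     * the single writer, inserting entries on IOTLB update messages. */
    while ((uaddr = iotlb_cache_lookup(be, iova, len, perm)) == NULL) {
        /* Channel B: send the miss (type 1) and block until QEMU acks,
         * i.e. until the channel A update has been applied. */
        if (iotlb_miss_request_and_wait(be->slave_fd, iova, perm) != 0)
            return NULL;               /* nack: no valid mapping */
    }
    return uaddr;
}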

>> +
>>  Protocol features
>>  -----------------
>>
>> @@ -524,6 +566,20 @@ Message types
>>        has been negotiated, and protocol feature bit VHOST_USER_PROTOCOL_F_SLAVE_REQ
>>        bit is present in VHOST_USER_GET_PROTOCOL_FEATURES.
>>
>> + * VHOST_USER_IOTLB_MSG
>> +
>> +      Id: 22
>> +      Equivalent ioctl: N/A (equivalent to VHOST_IOTLB_MSG message type)
>> +      Initiator: Master or slave
>> +
>> +      Send IOTLB messages with struct vhost_iotlb_msg as payload.
>> +      Master sends such requests to update and invalidate entries in the device
>> +      IOTLB. Slave sends such requests to notify of an IOTLB miss, or an IOTLB
>
> s/of//?

Yes.

>
>> +      access failure. The recipient has to acknowledge the request with
>> +      sending zero as u64 payload for success, non-zero otherwise.
>
> Same question here...

Thanks,
Maxime

> Thanks,
>
>> +      This request should be send only when VIRTIO_F_IOMMU_PLATFORM feature
>> +      has been successfully negotiated.
>> +
>>  VHOST_USER_PROTOCOL_F_REPLY_ACK:
>>  -------------------------------
>>  The original vhost-user specification only demands replies for certain
>> --
>> 2.9.3
>>
>
Peter Xu April 12, 2017, 7:17 a.m. UTC | #3
On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
> On 04/11/2017 03:20 PM, Peter Xu wrote:
> >On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:

[...]

> >
> >>+slave is expected to reply with a zero payload, non-zero otherwise.
> >
> >Is this ack mechanism really necessary? If not, not sure it'll be nice
> >to keep vhost-user/vhost-kernel aligned on this behavior. At least
> >that'll simplify vhost-user implementation on QEMU side (iiuc even
> >without introducing new functions for update/invalidate operations).
> 
> I think this is necessary, and it won't complexify the vhost-user
> implementation on QEMU side, since already widely used (see reply-ack
> feature).

Could you provide a file/function/link pointer to the "reply-ack"
feature? I failed to find it myself.

> 
> This reply-ack mechanism is used to obtain a behaviour closer to kernel
> backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
> is blocked in the write() while the message is being processed in the
> Kernel. With user backend, QEMU is unblocked from the write() when the
> backend has read the message, before it is being processed.
> 

I see. Then I agree with you that we may need a synchronous way to do
it. One thing I think of is IOMMU page invalidation - it should be a
sync operation, to make sure that all the related caches have been
destroyed by the time the invalidation command returns in the QEMU
vIOMMU emulation path.

> 
> >>+
> >>+When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
> >>+master initiated the slave to master communication channel using the
> >>+VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
> >>+failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
> >>+struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
> >>+filled with the miss message type (1), the I/O virtual address and the
> >>+permissions flags. For access failure event, the iotlb payload has to be
> >>+filled with the access failure message type (4), the I/O virtual address and
> >>+the permissions flags. On success, the master is expected to reply  when the
> >>+request has been handled (for example, on miss requests, once the device IOTLB
> >>+has been updated) with a zero payload, non-zero otherwise.
> >
> >Failed to understand the last sentence clearly. IIUC vhost-net will
> >reply with an UPDATE message when a MISS message is received. Here for
> >vhost-user are we going to send one extra zero payload after that?
> 
> Not exactly. There are two channels, one for QEMU to backend requests
> (channel A), one for backend to QEMU requests (channel B).
> 
> The backend may be multi-threaded (like DPDK), one thread for handling
> QEMU initiated requests (channel A), the others to handle packet
> processing (i.e. one for Rx, one for Tx).
> 
> The processing threads will need to translate iova adresses by
> searching in the IOTLB cache. In case of miss, it will send an IOTLB
> miss request on channel B, and then wait for the ack/nack. In case of
> ack, it can search again the IOTLB cache and find the translation.
> 
> On QEMU side, when the thread handling channel B requests receives the
> IOTLB miss message, it gets the translation and send an IOTLB update
> message on channel A. Then it waits for the ack from the backend,
> meaning that the IOTLB cache has been updated, and replies ack on
> channel B.

If the ack on channel B is used to notify the processing thread that
the "cache is ready", then... would it be faster if we just let the
processing thread poll the cache until it finds the entry, or let the
other thread notify it when it receives the ack on channel A? Not
sure which would be faster.

Thanks,
Maxime Coquelin April 12, 2017, 7:24 a.m. UTC | #4
On 04/12/2017 09:17 AM, Peter Xu wrote:
> On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
>> On 04/11/2017 03:20 PM, Peter Xu wrote:
>>> On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
>
> [...]
>
>>>
>>>> +slave is expected to reply with a zero payload, non-zero otherwise.
>>>
>>> Is this ack mechanism really necessary? If not, not sure it'll be nice
>>> to keep vhost-user/vhost-kernel aligned on this behavior. At least
>>> that'll simplify vhost-user implementation on QEMU side (iiuc even
>>> without introducing new functions for update/invalidate operations).
>>
>> I think this is necessary, and it won't complexify the vhost-user
>> implementation on QEMU side, since already widely used (see reply-ack
>> feature).
>
> Could you provide file/function/link pointer to the "reply-ack"
> feature? I failed to find it myself.
>
>>
>> This reply-ack mechanism is used to obtain a behaviour closer to kernel
>> backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
>> is blocked in the write() while the message is being processed in the
>> Kernel. With user backend, QEMU is unblocked from the write() when the
>> backend has read the message, before it is being processed.
>>
>
> I see. Then I agree with you that we may need a synchronized way to do
> it. One thing I think of is IOMMU page invalidation - it should be a
> sync operation to make sure that all the related caches were destroyed
> when the invalidation command returns in QEMU vIOMMU emulation path.
>
>>
>>>> +
>>>> +When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
>>>> +master initiated the slave to master communication channel using the
>>>> +VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
>>>> +failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
>>>> +struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
>>>> +filled with the miss message type (1), the I/O virtual address and the
>>>> +permissions flags. For access failure event, the iotlb payload has to be
>>>> +filled with the access failure message type (4), the I/O virtual address and
>>>> +the permissions flags. On success, the master is expected to reply  when the
>>>> +request has been handled (for example, on miss requests, once the device IOTLB
>>>> +has been updated) with a zero payload, non-zero otherwise.
>>>
>>> Failed to understand the last sentence clearly. IIUC vhost-net will
>>> reply with an UPDATE message when a MISS message is received. Here for
>>> vhost-user are we going to send one extra zero payload after that?
>>
>> Not exactly. There are two channels, one for QEMU to backend requests
>> (channel A), one for backend to QEMU requests (channel B).
>>
>> The backend may be multi-threaded (like DPDK), one thread for handling
>> QEMU initiated requests (channel A), the others to handle packet
>> processing (i.e. one for Rx, one for Tx).
>>
>> The processing threads will need to translate iova adresses by
>> searching in the IOTLB cache. In case of miss, it will send an IOTLB
>> miss request on channel B, and then wait for the ack/nack. In case of
>> ack, it can search again the IOTLB cache and find the translation.
>>
>> On QEMU side, when the thread handling channel B requests receives the
>> IOTLB miss message, it gets the translation and send an IOTLB update
>> message on channel A. Then it waits for the ack from the backend,
>> meaning that the IOTLB cache has been updated, and replies ack on
>> channel B.
>
> If the ack on channel B is used to notify the processing thread that
> "cache is ready", then... would it be faster that we just let the
> processing thread poll the cache until it finds it, or let the other
> thread notify it when it receives ack on channel A? Not sure whether
> it'll be faster.

Not sure either.
Not requiring an ack can indeed make sense in some cases, for example
with single-threaded backends.

What we can do is to remove the mandatory ack reply for
VHOST_USER_IOTLB_MSG slave requests (miss, access fail).
The backend can then just rely on the REPLY_ACK feature, and set the
VHOST_USER_NEED_REPLY flag if it wants to receive such an ack.

Would that be fine for you?
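
As a sketch, the opt-in ack on the slave side would then just be a
matter of setting a flag in the message header (masks as used by the
existing REPLY_ACK support in QEMU; double-check them against the
spec):

#include <stdint.h>
#include <stdbool.h>

#define VHOST_USER_VERSION          0x1
#define VHOST_USER_NEED_REPLY_MASK  (0x1 << 3)

/* Hypothetical helper: build the flags for a slave-initiated
 * VHOST_USER_IOTLB_MSG, requesting an ack only when needed. */
static uint32_t iotlb_request_flags(bool want_ack)
{
    uint32_t flags = VHOST_USER_VERSION;

    if (want_ack) {
        flags |= VHOST_USER_NEED_REPLY_MASK;
    }
    return flags;
}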

Thanks,
Maxime
Peter Xu April 12, 2017, 7:49 a.m. UTC | #5
On Wed, Apr 12, 2017 at 09:24:47AM +0200, Maxime Coquelin wrote:
> 
> 
> On 04/12/2017 09:17 AM, Peter Xu wrote:
> >On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
> >>On 04/11/2017 03:20 PM, Peter Xu wrote:
> >>>On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
> >
> >[...]
> >
> >>>
> >>>>+slave is expected to reply with a zero payload, non-zero otherwise.
> >>>
> >>>Is this ack mechanism really necessary? If not, not sure it'll be nice
> >>>to keep vhost-user/vhost-kernel aligned on this behavior. At least
> >>>that'll simplify vhost-user implementation on QEMU side (iiuc even
> >>>without introducing new functions for update/invalidate operations).
> >>
> >>I think this is necessary, and it won't complexify the vhost-user
> >>implementation on QEMU side, since already widely used (see reply-ack
> >>feature).
> >
> >Could you provide file/function/link pointer to the "reply-ack"
> >feature? I failed to find it myself.
> >
> >>
> >>This reply-ack mechanism is used to obtain a behaviour closer to kernel
> >>backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
> >>is blocked in the write() while the message is being processed in the
> >>Kernel. With user backend, QEMU is unblocked from the write() when the
> >>backend has read the message, before it is being processed.
> >>
> >
> >I see. Then I agree with you that we may need a synchronized way to do
> >it. One thing I think of is IOMMU page invalidation - it should be a
> >sync operation to make sure that all the related caches were destroyed
> >when the invalidation command returns in QEMU vIOMMU emulation path.
> >
> >>
> >>>>+
> >>>>+When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
> >>>>+master initiated the slave to master communication channel using the
> >>>>+VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
> >>>>+failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
> >>>>+struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
> >>>>+filled with the miss message type (1), the I/O virtual address and the
> >>>>+permissions flags. For access failure event, the iotlb payload has to be
> >>>>+filled with the access failure message type (4), the I/O virtual address and
> >>>>+the permissions flags. On success, the master is expected to reply  when the
> >>>>+request has been handled (for example, on miss requests, once the device IOTLB
> >>>>+has been updated) with a zero payload, non-zero otherwise.
> >>>
> >>>Failed to understand the last sentence clearly. IIUC vhost-net will
> >>>reply with an UPDATE message when a MISS message is received. Here for
> >>>vhost-user are we going to send one extra zero payload after that?
> >>
> >>Not exactly. There are two channels, one for QEMU to backend requests
> >>(channel A), one for backend to QEMU requests (channel B).
> >>
> >>The backend may be multi-threaded (like DPDK), one thread for handling
> >>QEMU initiated requests (channel A), the others to handle packet
> >>processing (i.e. one for Rx, one for Tx).
> >>
> >>The processing threads will need to translate iova adresses by
> >>searching in the IOTLB cache. In case of miss, it will send an IOTLB
> >>miss request on channel B, and then wait for the ack/nack. In case of
> >>ack, it can search again the IOTLB cache and find the translation.
> >>
> >>On QEMU side, when the thread handling channel B requests receives the
> >>IOTLB miss message, it gets the translation and send an IOTLB update
> >>message on channel A. Then it waits for the ack from the backend,
> >>meaning that the IOTLB cache has been updated, and replies ack on
> >>channel B.
> >
> >If the ack on channel B is used to notify the processing thread that
> >"cache is ready", then... would it be faster that we just let the
> >processing thread poll the cache until it finds it, or let the other
> >thread notify it when it receives ack on channel A? Not sure whether
> >it'll be faster.
> 
> Not sure either.
> Not requiring a ack can indeed make sense in some cases, for example
> with single-threaded backends.
> 
> What we can do is to remove the mandatory ack reply for
> VHOST_USER_IOTLB_MSG slave requests (miss, access fail).
> The backend then can just rely on the REPLY_ACK feature, and set the
> VHOST_USER_NEED_REPLY flag if it want to receive such ack.
> 
> Would it be fine for you?

Okay, I found the REPLY_ACK feature now.

It's okay to me (either way, actually). Thank you.
Jason Wang April 12, 2017, 8:54 a.m. UTC | #6
On 04/12/2017 15:17, Peter Xu wrote:
> On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
>> On 04/11/2017 03:20 PM, Peter Xu wrote:
>>> On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
> [...]
>
>>>> +slave is expected to reply with a zero payload, non-zero otherwise.
>>> Is this ack mechanism really necessary? If not, not sure it'll be nice
>>> to keep vhost-user/vhost-kernel aligned on this behavior. At least
>>> that'll simplify vhost-user implementation on QEMU side (iiuc even
>>> without introducing new functions for update/invalidate operations).
>> I think this is necessary, and it won't complexify the vhost-user
>> implementation on QEMU side, since already widely used (see reply-ack
>> feature).
> Could you provide file/function/link pointer to the "reply-ack"
> feature? I failed to find it myself.
>
>> This reply-ack mechanism is used to obtain a behaviour closer to kernel
>> backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
>> is blocked in the write() while the message is being processed in the
>> Kernel. With user backend, QEMU is unblocked from the write() when the
>> backend has read the message, before it is being processed.
>>
> I see. Then I agree with you that we may need a synchronized way to do
> it. One thing I think of is IOMMU page invalidation - it should be a
> sync operation to make sure that all the related caches were destroyed
> when the invalidation command returns in QEMU vIOMMU emulation path.
>

Looks not, if I understand correctly: e.g. for Intel IOMMU, when QI is
enabled, this could be done asynchronously by not waiting for the
completion through a wait descriptor. Vhost-kernel always implements
the invalidation as a synchronous one for simplicity, but it looks
like this is not needed.

Thanks
Jason Wang April 12, 2017, 9 a.m. UTC | #7
On 04/12/2017 15:24, Maxime Coquelin wrote:
>
>
> On 04/12/2017 09:17 AM, Peter Xu wrote:
>> On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
>>> On 04/11/2017 03:20 PM, Peter Xu wrote:
>>>> On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
>>
>> [...]
>>
>>>>
>>>>> +slave is expected to reply with a zero payload, non-zero otherwise.
>>>>
>>>> Is this ack mechanism really necessary? If not, not sure it'll be nice
>>>> to keep vhost-user/vhost-kernel aligned on this behavior. At least
>>>> that'll simplify vhost-user implementation on QEMU side (iiuc even
>>>> without introducing new functions for update/invalidate operations).
>>>
>>> I think this is necessary, and it won't complexify the vhost-user
>>> implementation on QEMU side, since already widely used (see reply-ack
>>> feature).
>>
>> Could you provide file/function/link pointer to the "reply-ack"
>> feature? I failed to find it myself.
>>
>>>
>>> This reply-ack mechanism is used to obtain a behaviour closer to kernel
>>> backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
>>> is blocked in the write() while the message is being processed in the
>>> Kernel. With user backend, QEMU is unblocked from the write() when the
>>> backend has read the message, before it is being processed.
>>>
>>
>> I see. Then I agree with you that we may need a synchronized way to do
>> it. One thing I think of is IOMMU page invalidation - it should be a
>> sync operation to make sure that all the related caches were destroyed
>> when the invalidation command returns in QEMU vIOMMU emulation path.
>>
>>>
>>>>> +
>>>>> +When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the 
>>>>> slave, and the
>>>>> +master initiated the slave to master communication channel using the
>>>>> +VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB 
>>>>> miss and access
>>>>> +failure events by sending VHOST_USER_IOTLB_MSG requests to the 
>>>>> master with a
>>>>> +struct vhost_iotlb_msg payload. For miss events, the iotlb 
>>>>> payload has to be
>>>>> +filled with the miss message type (1), the I/O virtual address 
>>>>> and the
>>>>> +permissions flags. For access failure event, the iotlb payload 
>>>>> has to be
>>>>> +filled with the access failure message type (4), the I/O virtual 
>>>>> address and
>>>>> +the permissions flags. On success, the master is expected to 
>>>>> reply  when the
>>>>> +request has been handled (for example, on miss requests, once the 
>>>>> device IOTLB
>>>>> +has been updated) with a zero payload, non-zero otherwise.
>>>>
>>>> Failed to understand the last sentence clearly. IIUC vhost-net will
>>>> reply with an UPDATE message when a MISS message is received. Here for
>>>> vhost-user are we going to send one extra zero payload after that?
>>>
>>> Not exactly. There are two channels, one for QEMU to backend requests
>>> (channel A), one for backend to QEMU requests (channel B).
>>>
>>> The backend may be multi-threaded (like DPDK), one thread for handling
>>> QEMU initiated requests (channel A), the others to handle packet
>>> processing (i.e. one for Rx, one for Tx).
>>>
>>> The processing threads will need to translate iova adresses by
>>> searching in the IOTLB cache. In case of miss, it will send an IOTLB
>>> miss request on channel B, and then wait for the ack/nack. In case of
>>> ack, it can search again the IOTLB cache and find the translation.
>>>
>>> On QEMU side, when the thread handling channel B requests receives the
>>> IOTLB miss message, it gets the translation and send an IOTLB update
>>> message on channel A. Then it waits for the ack from the backend,
>>> meaning that the IOTLB cache has been updated, and replies ack on
>>> channel B.
>>
>> If the ack on channel B is used to notify the processing thread that
>> "cache is ready", then... would it be faster that we just let the
>> processing thread poll the cache until it finds it, or let the other
>> thread notify it when it receives ack on channel A? Not sure whether
>> it'll be faster.
>
> Not sure either.
> Not requiring a ack can indeed make sense in some cases, for example
> with single-threaded backends.
>
> What we can do is to remove the mandatory ack reply for
> VHOST_USER_IOTLB_MSG slave requests (miss, access fail).

I don't see any requirement for an ack reply unless the slave wants to
do some post-processing when the guest wants to access the forbidden
area. It looks to me that this should be done by userspace: if it's a
valid map, the master will send the IOTLB update message. If not, it
will just report to the guest. What needs to be guaranteed is that the
slave can still handle other requests (e.g. set_owner) in this case.

Thanks

> The backend then can just rely on the REPLY_ACK feature, and set the
> VHOST_USER_NEED_REPLY flag if it want to receive such ack.
>
> Would it be fine for you?
>
> Thanks,
> Maxime
>
Peter Xu April 12, 2017, 9:26 a.m. UTC | #8
On Wed, Apr 12, 2017 at 04:54:25PM +0800, Jason Wang wrote:
> 
> 
> On 04/12/2017 15:17, Peter Xu wrote:
> >On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
> >>On 04/11/2017 03:20 PM, Peter Xu wrote:
> >>>On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
> >[...]
> >
> >>>>+slave is expected to reply with a zero payload, non-zero otherwise.
> >>>Is this ack mechanism really necessary? If not, not sure it'll be nice
> >>>to keep vhost-user/vhost-kernel aligned on this behavior. At least
> >>>that'll simplify vhost-user implementation on QEMU side (iiuc even
> >>>without introducing new functions for update/invalidate operations).
> >>I think this is necessary, and it won't complexify the vhost-user
> >>implementation on QEMU side, since already widely used (see reply-ack
> >>feature).
> >Could you provide file/function/link pointer to the "reply-ack"
> >feature? I failed to find it myself.
> >
> >>This reply-ack mechanism is used to obtain a behaviour closer to kernel
> >>backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
> >>is blocked in the write() while the message is being processed in the
> >>Kernel. With user backend, QEMU is unblocked from the write() when the
> >>backend has read the message, before it is being processed.
> >>
> >I see. Then I agree with you that we may need a synchronized way to do
> >it. One thing I think of is IOMMU page invalidation - it should be a
> >sync operation to make sure that all the related caches were destroyed
> >when the invalidation command returns in QEMU vIOMMU emulation path.
> >
> 
> Looks not, if I understand correctly, e.g for Intel IOMMU, when QI is
> enabled, this could be done asynchronously by not waiting for the completion
> through wait descriptor. Vhost-kernel always implement the invalidation as a
> synchronous one for simplicity, but looks like this is not needed.

IMHO, the point is that the guest can reuse that IOVA only after it
sends an invalidation wait descriptor. Without a wait descriptor, the
guest should never release any IOVA range; doing so would be
dangerous, because the cache may still be dirty for that range on a
specific device.

And since the guest will for sure use wait descriptors (as long as it
wants to reuse IOVA addresses), we will eventually need a way to
synchronously invalidate the IOTLB, including for vhost-user backends.
Jason Wang April 13, 2017, 7:12 a.m. UTC | #9
On 04/12/2017 17:26, Peter Xu wrote:
> On Wed, Apr 12, 2017 at 04:54:25PM +0800, Jason Wang wrote:
>> On 04/12/2017 15:17, Peter Xu wrote:
>>> On Tue, Apr 11, 2017 at 05:16:19PM +0200, Maxime Coquelin wrote:
>>>> On 04/11/2017 03:20 PM, Peter Xu wrote:
>>>>> On Tue, Apr 11, 2017 at 12:10:02PM +0200, Maxime Coquelin wrote:
>>> [...]
>>>
>>>>>> +slave is expected to reply with a zero payload, non-zero otherwise.
>>>>> Is this ack mechanism really necessary? If not, not sure it'll be nice
>>>>> to keep vhost-user/vhost-kernel aligned on this behavior. At least
>>>>> that'll simplify vhost-user implementation on QEMU side (iiuc even
>>>>> without introducing new functions for update/invalidate operations).
>>>> I think this is necessary, and it won't complexify the vhost-user
>>>> implementation on QEMU side, since already widely used (see reply-ack
>>>> feature).
>>> Could you provide file/function/link pointer to the "reply-ack"
>>> feature? I failed to find it myself.
>>>
>>>> This reply-ack mechanism is used to obtain a behaviour closer to kernel
>>>> backend. Indeed, when QEMU sends a vhost_msg to the kernel backend, it
>>>> is blocked in the write() while the message is being processed in the
>>>> Kernel. With user backend, QEMU is unblocked from the write() when the
>>>> backend has read the message, before it is being processed.
>>>>
>>> I see. Then I agree with you that we may need a synchronized way to do
>>> it. One thing I think of is IOMMU page invalidation - it should be a
>>> sync operation to make sure that all the related caches were destroyed
>>> when the invalidation command returns in QEMU vIOMMU emulation path.
>>>
>> Looks not, if I understand correctly, e.g for Intel IOMMU, when QI is
>> enabled, this could be done asynchronously by not waiting for the completion
>> through wait descriptor. Vhost-kernel always implement the invalidation as a
>> synchronous one for simplicity, but looks like this is not needed.
> IMHO, the point is guest cannot reuse that IOVA only if it sends a
> invalidation wait descriptor. If without wait descriptor, the guest
> should never release any IOVA range, if so, that'll be dangerous,
> because the cache may still be dirty on that range on specific device.
>
> And since guest will for sure use wait descriptor (as long as it wants
> to reuse iova addresses), then we should possibly finally need a way
> to synchronously invalidate IOTLB, including to vhost-user backends.

Yes, what I mean is that technically we can implement this
synchronously only for the wait descriptor.

Thanks

Patch

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index b365047..048a4d6 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -97,6 +97,23 @@  Depending on the request type, payload can be:
    log offset: offset from start of supplied file descriptor
        where logging starts (i.e. where guest address 0 would be logged)
 
+ * An IOTLB message
+   ---------------------------------------------------------
+   | iova | size | user address | permissions flags | type |
+   ---------------------------------------------------------
+
+   IOVA: a 64-bit guest I/O virtual address
+   Size: a 64-bit size
+   User address: a 64-bit user address
+   Permissions flags: an 8-bit bit field:
+    - Bit 0: Read access
+    - Bit 1: Write access
+   Type: an 8-bit IOTLB message type:
+    - 1: IOTLB miss
+    - 2: IOTLB update
+    - 3: IOTLB invalidate
+    - 4: IOTLB access fail
+
 In QEMU the vhost-user message is implemented with the following struct:
 
 typedef struct VhostUserMsg {
@@ -109,6 +126,7 @@  typedef struct VhostUserMsg {
         struct vhost_vring_addr addr;
         VhostUserMemory memory;
         VhostUserLog log;
+        struct vhost_iotlb_msg iotlb;
     };
 } QEMU_PACKED VhostUserMsg;
 
@@ -258,6 +276,30 @@  Once the source has finished migration, rings will be stopped by
 the source. No further update must be done before rings are
 restarted.
 
+IOMMU support
+-------------
+
+When the VIRTIO_F_IOMMU_PLATFORM feature has been negotiated, the master has
+to send IOTLB entries update & invalidation by sending VHOST_USER_IOTLB_MSG
+requests to the slave with a struct vhost_iotlb_msg payload. For update events,
+the iotlb payload has to be filled with the update message type (2), the I/O
+virtual address, the size, the user virtual address, and the permissions
+flags. For invalidation events, the iotlb payload has to be filled with the
+update message type (3), the I/O virtual address and the size. On success, the
+slave is expected to reply with a zero payload, non-zero otherwise.
+
+When the VHOST_USER_PROTOCOL_F_SLAVE_REQ is supported by the slave, and the
+master initiated the slave to master communication channel using the
+VHOST_USER_SET_SLAVE_REQ_FD request, the slave can send IOTLB miss and access
+failure events by sending VHOST_USER_IOTLB_MSG requests to the master with a
+struct vhost_iotlb_msg payload. For miss events, the iotlb payload has to be
+filled with the miss message type (1), the I/O virtual address and the
+permissions flags. For access failure event, the iotlb payload has to be
+filled with the access failure message type (4), the I/O virtual address and
+the permissions flags. On success, the master is expected to reply when the
+request has been handled (for example, on miss requests, once the device IOTLB
+has been updated) with a zero payload, non-zero otherwise.
+
 Protocol features
 -----------------
 
@@ -524,6 +566,20 @@  Message types
       has been negotiated, and protocol feature bit VHOST_USER_PROTOCOL_F_SLAVE_REQ
       bit is present in VHOST_USER_GET_PROTOCOL_FEATURES.
 
+ * VHOST_USER_IOTLB_MSG
+
+      Id: 22
+      Equivalent ioctl: N/A (equivalent to VHOST_IOTLB_MSG message type)
+      Initiator: Master or slave
+
+      Send IOTLB messages with struct vhost_iotlb_msg as payload.
+      Master sends such requests to update and invalidate entries in the device
+      IOTLB. Slave sends such requests to notify of an IOTLB miss, or an IOTLB
+      access failure. The recipient has to acknowledge the request with
+      sending zero as u64 payload for success, non-zero otherwise.
+      This request should be sent only when the VIRTIO_F_IOMMU_PLATFORM feature
+      has been successfully negotiated.
+
 VHOST_USER_PROTOCOL_F_REPLY_ACK:
 -------------------------------
 The original vhost-user specification only demands replies for certain