
[v4,1/2] vdpa: Restore hash calculation state

Message ID dbf699acff8c226596136a55a6abe35ebfeac8b0.1698194366.git.yin31149@gmail.com
State New
Series Vhost-vdpa Shadow Virtqueue Hash calculation Support

Commit Message

Hawkins Jiawei Oct. 25, 2023, 1:02 a.m. UTC
This patch introduces vhost_vdpa_net_load_rss() to restore
the hash calculation state at the device's startup.

Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
---
v4:
  - fix some typos pointed out by Michael
  - zero the `cfg` fields at the definition, as suggested by Michael

v3: https://patchwork.kernel.org/project/qemu-devel/patch/b7cd0c8d6a58b16b086f11714d2908ad35c67caa.1697902949.git.yin31149@gmail.com/
  - remove the `do_rss` argument in vhost_vdpa_net_load_rss()
  - zero reserved fields in "cfg" manually instead of using memset()
to prevent a compiler "array-bounds" warning

v2: https://lore.kernel.org/all/f5ffad10699001107022851e0560cb394039d6b0.1693297766.git.yin31149@gmail.com/
  - resolve conflict with the updated patch
"vdpa: Send all CVQ state load commands in parallel"
  - move the `table` declaration to the beginning of
vhost_vdpa_net_load_rss()

RFC: https://lore.kernel.org/all/a54ca70b12ebe2f3c391864e41241697ab1aba30.1691762906.git.yin31149@gmail.com/

 net/vhost-vdpa.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

Comments

Eugenio Pérez Nov. 3, 2023, 2:18 p.m. UTC | #1
On Wed, Oct 25, 2023 at 3:02 AM Hawkins Jiawei <yin31149@gmail.com> wrote:
>
> This patch introduces vhost_vdpa_net_load_rss() to restore
> the hash calculation state at the device's startup.
>
> Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
> ---
> v4:
>   - fix some typos pointed out by Michael
>   - zero the `cfg` fields at the definition, as suggested by Michael
>
> v3: https://patchwork.kernel.org/project/qemu-devel/patch/b7cd0c8d6a58b16b086f11714d2908ad35c67caa.1697902949.git.yin31149@gmail.com/
>   - remove the `do_rss` argument in vhost_vdpa_net_load_rss()
>   - zero reserved fields in "cfg" manually instead of using memset()
> to prevent a compiler "array-bounds" warning
>
> v2: https://lore.kernel.org/all/f5ffad10699001107022851e0560cb394039d6b0.1693297766.git.yin31149@gmail.com/
>   - resolve conflict with the updated patch
> "vdpa: Send all CVQ state load commands in parallel"
>   - move the `table` declaration to the beginning of
> vhost_vdpa_net_load_rss()
>
> RFC: https://lore.kernel.org/all/a54ca70b12ebe2f3c391864e41241697ab1aba30.1691762906.git.yin31149@gmail.com/
>
>  net/vhost-vdpa.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 91 insertions(+)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 7a226c93bc..e59d40b8ae 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -818,6 +818,88 @@ static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n,
>      return 0;
>  }
>
> +static int vhost_vdpa_net_load_rss(VhostVDPAState *s, const VirtIONet *n,
> +                                   struct iovec *out_cursor,
> +                                   struct iovec *in_cursor)
> +{
> +    struct virtio_net_rss_config cfg = {};
> +    ssize_t r;
> +    g_autofree uint16_t *table = NULL;

Nitpick, I think the table should actually be introduced in [1], not
here. Otherwise, it adds unneeded complexity to the review.

However, I think it is fine if both series get merged one after another, so:

Acked-by: Eugenio Pérez <eperezma@redhat.com>

[1] https://patchwork.kernel.org/project/qemu-devel/patch/cf5b78a16ed0318982ceffb195f2227f6aad4ac1.1698195059.git.yin31149@gmail.com/

> +
> +    /*
> +     * According to the VirtIO standard, "Initially the device has all hash
> +     * types disabled and reports only VIRTIO_NET_HASH_REPORT_NONE.".
> +     *
> +     * Therefore, there is no need to send this CVQ command if the
> +     * driver disables all hash types, which aligns with
> +     * the device's defaults.
> +     *
> +     * Note that the device's defaults can mismatch the driver's
> +     * configuration only at live migration.
> +     */
> +    if (!n->rss_data.enabled ||
> +        n->rss_data.hash_types == VIRTIO_NET_HASH_REPORT_NONE) {
> +        return 0;
> +    }
> +
> +    table = g_malloc_n(n->rss_data.indirections_len,
> +                       sizeof(n->rss_data.indirections_table[0]));
> +    cfg.hash_types = cpu_to_le32(n->rss_data.hash_types);
> +
> +    /*
> +     * According to the VirtIO standard, "Field reserved MUST contain zeroes.
> +     * It is defined to make the structure to match the layout of
> +     * virtio_net_rss_config structure, defined in 5.1.6.5.7.".
> +     *
> +     * Therefore, we need to zero the fields in
> +     * struct virtio_net_rss_config that correspond to the
> +     * `reserved` field in struct virtio_net_hash_config.
> +     *
> +     * Note that all other fields are zeroed at their definitions,
> +     * except for the `indirection_table` field, where the actual data
> +     * is stored in the `table` variable to ensure compatibility
> +     * with the RSS case. Therefore, we need to zero the `table` variable here.
> +     */
> +    table[0] = 0;
> +
> +    /*
> +     * virtio_net_handle_rss() currently does not restore the hash key
> +     * length parsed from the guest's CVQ command into n->rss_data, and
> +     * the rest of the code uses the maximum key length, so we also
> +     * employ the maximum key length here.
> +     */
> +    cfg.hash_key_length = sizeof(n->rss_data.key);
> +
> +    const struct iovec data[] = {
> +        {
> +            .iov_base = &cfg,
> +            .iov_len = offsetof(struct virtio_net_rss_config,
> +                                indirection_table),
> +        }, {
> +            .iov_base = table,
> +            .iov_len = n->rss_data.indirections_len *
> +                       sizeof(n->rss_data.indirections_table[0]),
> +        }, {
> +            .iov_base = &cfg.max_tx_vq,
> +            .iov_len = offsetof(struct virtio_net_rss_config, hash_key_data) -
> +                       offsetof(struct virtio_net_rss_config, max_tx_vq),
> +        }, {
> +            .iov_base = (void *)n->rss_data.key,
> +            .iov_len = sizeof(n->rss_data.key),
> +        }
> +    };
> +
> +    r = vhost_vdpa_net_load_cmd(s, out_cursor, in_cursor,
> +                                VIRTIO_NET_CTRL_MQ,
> +                                VIRTIO_NET_CTRL_MQ_HASH_CONFIG,
> +                                data, ARRAY_SIZE(data));
> +    if (unlikely(r < 0)) {
> +        return r;
> +    }
> +
> +    return 0;
> +}
> +
>  static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
>                                    const VirtIONet *n,
>                                    struct iovec *out_cursor,
> @@ -843,6 +925,15 @@ static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
>          return r;
>      }
>
> +    if (!virtio_vdev_has_feature(&n->parent_obj, VIRTIO_NET_F_HASH_REPORT)) {
> +        return 0;
> +    }
> +
> +    r = vhost_vdpa_net_load_rss(s, n, out_cursor, in_cursor);
> +    if (unlikely(r < 0)) {
> +        return r;
> +    }
> +
>      return 0;
>  }
>
> --
> 2.25.1
>
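For reference, the two control-command layouts that the comments in the
patch contrast can be sketched as below. This is a minimal sketch following
the field order given in the VirtIO spec and the Linux/QEMU virtio_net
headers, not code from the patch; the __le*/__u8 types are shown as plain
fixed-width integers for brevity:

/* Layout used by VIRTIO_NET_CTRL_MQ_RSS_CONFIG (spec 5.1.6.5.7). */
struct virtio_net_rss_config {
    uint32_t hash_types;
    uint16_t indirection_table_mask;
    uint16_t unclassified_queue;
    uint16_t indirection_table[1/* + indirection_table_mask */];
    uint16_t max_tx_vq;
    uint8_t  hash_key_length;
    uint8_t  hash_key_data[/* hash_key_length */];
};

/* Layout used by VIRTIO_NET_CTRL_MQ_HASH_CONFIG. */
struct virtio_net_hash_config {
    uint32_t hash_types;
    /* Matches the middle of virtio_net_rss_config and MUST be zero:
     * indirection_table_mask, unclassified_queue, a one-entry
     * indirection_table and max_tx_vq add up to these 8 bytes. */
    uint16_t reserved[4];
    uint8_t  hash_key_length;
    uint8_t  hash_key_data[/* hash_key_length */];
};

This is why vhost_vdpa_net_load_rss() can assemble the HASH_CONFIG payload
from a virtio_net_rss_config prefix, a zeroed one-entry indirection table,
the max_tx_vq/hash_key_length pair, and the hash key.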

Patch

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 7a226c93bc..e59d40b8ae 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -818,6 +818,88 @@  static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n,
     return 0;
 }
 
+static int vhost_vdpa_net_load_rss(VhostVDPAState *s, const VirtIONet *n,
+                                   struct iovec *out_cursor,
+                                   struct iovec *in_cursor)
+{
+    struct virtio_net_rss_config cfg = {};
+    ssize_t r;
+    g_autofree uint16_t *table = NULL;
+
+    /*
+     * According to the VirtIO standard, "Initially the device has all hash
+     * types disabled and reports only VIRTIO_NET_HASH_REPORT_NONE.".
+     *
+     * Therefore, there is no need to send this CVQ command if the
+     * driver disables all hash types, which aligns with
+     * the device's defaults.
+     *
+     * Note that the device's defaults can mismatch the driver's
+     * configuration only at live migration.
+     */
+    if (!n->rss_data.enabled ||
+        n->rss_data.hash_types == VIRTIO_NET_HASH_REPORT_NONE) {
+        return 0;
+    }
+
+    table = g_malloc_n(n->rss_data.indirections_len,
+                       sizeof(n->rss_data.indirections_table[0]));
+    cfg.hash_types = cpu_to_le32(n->rss_data.hash_types);
+
+    /*
+     * According to the VirtIO standard, "Field reserved MUST contain zeroes.
+     * It is defined to make the structure to match the layout of
+     * virtio_net_rss_config structure, defined in 5.1.6.5.7.".
+     *
+     * Therefore, we need to zero the fields in
+     * struct virtio_net_rss_config that correspond to the
+     * `reserved` field in struct virtio_net_hash_config.
+     *
+     * Note that all other fields are zeroed at their definitions,
+     * except for the `indirection_table` field, where the actual data
+     * is stored in the `table` variable to ensure compatibility
+     * with the RSS case. Therefore, we need to zero the `table` variable here.
+     */
+    table[0] = 0;
+
+    /*
+     * virtio_net_handle_rss() currently does not restore the hash key
+     * length parsed from the guest's CVQ command into n->rss_data, and
+     * the rest of the code uses the maximum key length, so we also
+     * employ the maximum key length here.
+     */
+    cfg.hash_key_length = sizeof(n->rss_data.key);
+
+    const struct iovec data[] = {
+        {
+            .iov_base = &cfg,
+            .iov_len = offsetof(struct virtio_net_rss_config,
+                                indirection_table),
+        }, {
+            .iov_base = table,
+            .iov_len = n->rss_data.indirections_len *
+                       sizeof(n->rss_data.indirections_table[0]),
+        }, {
+            .iov_base = &cfg.max_tx_vq,
+            .iov_len = offsetof(struct virtio_net_rss_config, hash_key_data) -
+                       offsetof(struct virtio_net_rss_config, max_tx_vq),
+        }, {
+            .iov_base = (void *)n->rss_data.key,
+            .iov_len = sizeof(n->rss_data.key),
+        }
+    };
+
+    r = vhost_vdpa_net_load_cmd(s, out_cursor, in_cursor,
+                                VIRTIO_NET_CTRL_MQ,
+                                VIRTIO_NET_CTRL_MQ_HASH_CONFIG,
+                                data, ARRAY_SIZE(data));
+    if (unlikely(r < 0)) {
+        return r;
+    }
+
+    return 0;
+}
+
 static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
                                   const VirtIONet *n,
                                   struct iovec *out_cursor,
@@ -843,6 +925,15 @@  static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
         return r;
     }
 
+    if (!virtio_vdev_has_feature(&n->parent_obj, VIRTIO_NET_F_HASH_REPORT)) {
+        return 0;
+    }
+
+    r = vhost_vdpa_net_load_rss(s, n, out_cursor, in_cursor);
+    if (unlikely(r < 0)) {
+        return r;
+    }
+
     return 0;
 }
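
As a quick sanity check of the iovec split above, here is a small
standalone sketch (not part of the patch) that recomputes the four segment
lengths with a local mirror of the virtio_net_rss_config layout. The
40-byte key is an assumption matching QEMU's VIRTIO_NET_RSS_MAX_KEY_SIZE,
and a one-entry indirection table is assumed for the hash-calculation-only
case restored by this patch:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Local mirror of the virtio_net_rss_config field order. */
struct rss_config_layout {
    uint32_t hash_types;
    uint16_t indirection_table_mask;
    uint16_t unclassified_queue;
    uint16_t indirection_table[1];
    uint16_t max_tx_vq;
    uint8_t  hash_key_length;
    uint8_t  hash_key_data[];
};

int main(void)
{
    size_t indirections_len = 1;   /* single zeroed indirection table entry */
    size_t key_len = 40;           /* assumed VIRTIO_NET_RSS_MAX_KEY_SIZE */

    /* Same arithmetic as the iovec array in vhost_vdpa_net_load_rss(). */
    size_t seg0 = offsetof(struct rss_config_layout, indirection_table);
    size_t seg1 = indirections_len * sizeof(uint16_t);
    size_t seg2 = offsetof(struct rss_config_layout, hash_key_data) -
                  offsetof(struct rss_config_layout, max_tx_vq);
    size_t seg3 = key_len;

    printf("%zu + %zu + %zu + %zu = %zu bytes\n",
           seg0, seg1, seg2, seg3, seg0 + seg1 + seg2 + seg3);
    return 0;
}

The segments add up to 8 + 2 + 3 + 40 = 53 bytes, i.e. the 13 bytes of
struct virtio_net_hash_config that precede hash_key_data, followed by the
hash key itself.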