
[v8,1/2] block/vxhs.c: Add support for a new block device type called "vxhs"

Message ID CAAo6VWNRhK6zvc+CferSd9wYQZNNoXDqCEW_G62crqsamyEC9Q@mail.gmail.com
State New

Commit Message

Ashish Mittal March 6, 2017, 12:26 a.m. UTC
On Wed, Mar 1, 2017 at 1:18 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Tue, Feb 28, 2017 at 02:51:39PM -0800, ashish mittal wrote:
>> On Mon, Feb 27, 2017 at 1:22 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>
>> >> +        ret = -EINVAL;
>> >> +        goto out;
>> >> +    }
>> >> +
>> >> +    secretid = qemu_opt_get(opts, VXHS_OPT_SECRETID);
>> >> +    if (!secretid) {
>> >> +        error_setg(&local_err, "please specify the ID of the secret to be "
>> >> +                   "used for authenticating to target");
>> >> +        qdict_del(backing_options, str);
>> >> +        ret = -EINVAL;
>> >> +        goto out;
>> >> +    }
>> >> +
>> >> +    /* check if we got password via the --object argument */
>> >> +    password = qcrypto_secret_lookup_as_utf8(secretid, &local_err);
>> >> +    if (local_err != NULL) {
>> >> +        trace_vxhs_get_creds(user, secretid, password);
>> >> +        qdict_del(backing_options, str);
>> >> +        ret = -EINVAL;
>> >> +        goto out;
>> >> +    }
>> >> +    trace_vxhs_get_creds(user, secretid, password);
>> >> +
>> >>      s->vdisk_hostinfo.host = g_strdup(server_host_opt);
>> >>
>> >>      s->vdisk_hostinfo.port = g_ascii_strtoll(qemu_opt_get(tcp_opts,
>>
>> The next thing we need consensus on is to decide exactly what
>> additional information to pass.
>>
>> (1) Current security implementation in VxHS uses the SSL key and
>> certificate files. Do you think we can achieve all the intended
>> security goals if we pass these two file paths (and names?) from the
>> command line?
>
> Yes, that's how other parts of QEMU deal with SSL
>
> NB, QEMU needs to pass 3 paths to libqnio - the client cert, the
> client key, and the certificate authority certlist
>

I see that the QEMU TLS code internally does expect to find a cacert,
and errors out if it is missing. I had to create one and place it in
the directory where we keep the client key/cert files. The code diff
below requires all three files.

Here are the files I had to create -
$ ll /etc/pki/qemu/vxhs
total 12
-r--r--r--. 1 root root 2094 Mar  4 18:02 ca-cert.pem
-r--r--r--. 1 root root 1899 Mar  4 18:00 client-cert.pem
-r--r--r--. 1 root root 1679 Mar  4 17:59 client-key.pem

>> (2) If we are OK to continue with the present security scheme (using
>> key and cert), then the only additional change needed might be to
>> accept these files on the command line.
>
> Yep, I think that's the minimum required change to make this mergable.
>
>> (3) If we decide to rely on file permissions and SELinux policy to
>> protect these key/cert files, then we don't need to pass the file
>> names as a secret, instead, passing them as regular qemu_opt_get()
>> options might be enough.
>
> That's correct - you can assume file permissions protect the cert
> files. I would expect the syntax to work as follows
>
>   $QEMU  -object tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client \
>          -drive  driver=vxhs,...other..opts...,tls-creds=tls0
>
> When you see the 'tls-creds' flag, you can lookup the corresponding
> QCryptoTLSCredsX509 object and extract the 'dir' property from it.
>

The code diff below works as above. Please verify.

> The include/crypto/tlscredsx509.h file has constants defined for the
> standard filenames - eg you would concatenate the directory with
> the constants QCRYPTO_TLS_CREDS_X509_CLIENT_KEY.
>
> This gives the filenames you can then pass to libqnio
>

I am using the function qcrypto_tls_creds_get_path() to achieve the
same. I hope this is OK.

> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


Example CLI accepting the new TLS credentials:

[amittal2@camshaft qemu] 2017-03-05 15:54:55$ ./qemu-io --trace
enable=vxhs* --object
tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read
66000 128k' 'json:{"server.host": "127.0.0.1", "server.port": "9999",
"vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
15116@1488758101.084355:vxhs_open_vdiskid Opening vdisk-id /test.raw
15116@1488758101.084396:vxhs_get_creds cacert
/etc/pki/qemu/vxhs/ca-cert.pem, client_key
/etc/pki/qemu/vxhs/client-key.pem, client_cert
/etc/pki/qemu/vxhs/client-cert.pem   <===== !!!! NOTE !!!!
15116@1488758101.084402:vxhs_open_hostinfo Adding host 127.0.0.1:9999
to BDRVVXHSState
15116@1488758101.092060:vxhs_get_vdisk_stat vDisk /test.raw stat ioctl
returned size 1048576
read 131072/131072 bytes at offset 66000
128 KiB, 1 ops; 0.0006 sec (188.537 MiB/sec and 1508.2956 ops/sec)
15116@1488758101.094643:vxhs_close Closing vdisk /test.raw
[amittal2@camshaft qemu] 2017-03-05 15:55:01$

NB - I am passing client-key and client-cert to iio_open() here. The
libqnio changes to accept these new arguments via iio_open() will
follow.

         ret = -ENODEV;
@@ -355,12 +420,16 @@ out:
     QDECREF(backing_options);
     qemu_opts_del(tcp_opts);
     qemu_opts_del(opts);
+    g_free(cacert);
+    g_free(client_key);
+    g_free(client_cert);

     if (ret < 0) {
         vxhs_unref();
         error_propagate(errp, local_err);
         g_free(s->vdisk_hostinfo.host);
         g_free(s->vdisk_guid);
+        g_free(s->tlscredsid);
         s->vdisk_guid = NULL;
         errno = -ret;
     }
@@ -466,9 +535,11 @@ static void vxhs_close(BlockDriverState *bs)
     vxhs_unref();

     /*
-     * Free the dynamically allocated host string
+     * Free the dynamically allocated host string etc
      */
     g_free(s->vdisk_hostinfo.host);
+    g_free(s->tlscredsid);
+    s->tlscredsid = NULL;


Regards,
Ashish

Comments

Daniel P. Berrangé March 6, 2017, 9:23 a.m. UTC | #1
On Sun, Mar 05, 2017 at 04:26:05PM -0800, ashish mittal wrote:
> On Wed, Mar 1, 2017 at 1:18 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> >
> > Yes, that's how other parts of QEMU deal with SSL
> >
> > NB, QEMU needs to pass 3 paths to libqnio - the client cert, the
> > client key, and the certificate authority certlist
> >
> 
> I see that the QEMU TLS code internally does expect to find a cacert,
> and errors out if it is missing. I had to create one and place it in
> the directory where we keep the client key/cert files. The code diff
> below requires all three files.

Yes, the client cert/key have to be paired with the correct CA cert. We cannot
assume that the default CA cert bundle contains the right CA for use with the
cert/key. The default CA bundle contains all the public commercial CAs, and in
general a QEMU deployment will never use any of those - the site will use a
private, internal-only CA. While you could add the private CA to the global CA
bundle, that is not desirable from a security POV, as it opens the deployment
to risk from a rogue CA.

> > The include/crypto/tlscredsx509.h file has constants defined for the
> > standard filenames - eg you would concatenate the directory with
> > the constants QCRYPTO_TLS_CREDS_X509_CLIENT_KEY.
> >
> > This gives the filenames you can then pass to libqnio
> >
> 
> I am using function qcrypto_tls_creds_get_path() to achieve the same.
> Hope this is OK.

Yep, that's fine.

> Example CLI accepting the new TLS credentials:
> 
> [amittal2@camshaft qemu] 2017-03-05 15:54:55$ ./qemu-io --trace
> enable=vxhs* --object
> tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read
> 66000 128k' 'json:{"server.host": "127.0.0.1", "server.port": "9999",
> "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
> 15116@1488758101.084355:vxhs_open_vdiskid Opening vdisk-id /test.raw
> 15116@1488758101.084396:vxhs_get_creds cacert
> /etc/pki/qemu/vxhs/ca-cert.pem, client_key
> /etc/pki/qemu/vxhs/client-key.pem, client_cert
> /etc/pki/qemu/vxhs/client-cert.pem   <===== !!!! NOTE !!!!
> 15116@1488758101.084402:vxhs_open_hostinfo Adding host 127.0.0.1:9999
> to BDRVVXHSState
> 15116@1488758101.092060:vxhs_get_vdisk_stat vDisk /test.raw stat ioctl
> returned size 1048576
> read 131072/131072 bytes at offset 66000
> 128 KiB, 1 ops; 0.0006 sec (188.537 MiB/sec and 1508.2956 ops/sec)
> 15116@1488758101.094643:vxhs_close Closing vdisk /test.raw
> [amittal2@camshaft qemu] 2017-03-05 15:55:01$

That looks ok.

> @@ -315,33 +374,39 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
>      if (strlen(server_host_opt) > MAXHOSTNAMELEN) {
>          error_setg(&local_err, "server.host cannot be more than %d characters",
>                     MAXHOSTNAMELEN);
> -        qdict_del(backing_options, str);
>          ret = -EINVAL;
>          goto out;
>      }
> 
> -    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
> +    /* check if we got tls-creds via the --object argument */
> +    s->tlscredsid = g_strdup(qemu_opt_get(opts, "tls-creds"));
> +    if (s->tlscredsid) {
> +        vxhs_get_tls_creds(s->tlscredsid, &cacert, &client_key,
> +                           &client_cert, &local_err);
> +        if (local_err != NULL) {
> +            ret = -EINVAL;
> +            goto out;
> +        }
> +        trace_vxhs_get_creds(cacert, client_key, client_cert);
> +    }
> 
> +    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
>      s->vdisk_hostinfo.port = g_ascii_strtoll(qemu_opt_get(tcp_opts,
>                                                            VXHS_OPT_PORT),
>                                                            NULL, 0);
> 
>      trace_vxhs_open_hostinfo(s->vdisk_hostinfo.host,
> -                         s->vdisk_hostinfo.port);
> -
> -    /* free the 'server.' entries allocated by previous call to
> -     * qdict_extract_subqdict()
> -     */
> -    qdict_del(backing_options, str);
> +                             s->vdisk_hostinfo.port);
> 
>      of_vsa_addr = g_strdup_printf("of://%s:%d",
> -                                s->vdisk_hostinfo.host,
> -                                s->vdisk_hostinfo.port);
> +                                  s->vdisk_hostinfo.host,
> +                                  s->vdisk_hostinfo.port);
> 
>      /*
>       * Open qnio channel to storage agent if not opened before.
>       */
> -    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0);
> +    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0,
> +                           client_key, client_cert);

You need to pass  ca_cert into this method too.


Regards,
Daniel
Ashish Mittal March 8, 2017, 1:27 a.m. UTC | #2
Thanks! There is one more input I need some help with!

VxHS network library opens a fixed number of connection channels to a
given host, and all the vdisks (that connect to the same host) share
these connection channels.

Therefore, we need to open secure channels to a specific target host
only once for the first vdisk that connects to that host. All the
other vdisks that connect to the same target host will share the same
set of secure communication channels.

I hope the above scheme is acceptable?

If yes, then we have a couple of options to implement this:

(1) Accept the TLS credentials per vdisk using the previously
discussed --object tls-creds-x509 mechanism. In this case, if more
than one vdisk have the same host info, then we will use only the
first one's creds to set up the secure connection, and ignore the
others. vdisks that connect to different target hosts will use their
individual tls-creds-x509 to set up the secure channels. This is, of
course, not relevant for qemu-img/qemu-io as they can open only one
vdisk at a time.

(2) Instead of having a per-vdisk --object tls-creds-x509, have a
single such argument on the command line for vxhs block device code to
consume - if that is possible! One way to achieve this could be the
user/password authentication we discussed earlier, which we could use
to pass the directory where cert/keys are kept.

(3) Use the instance UUID, when available, to lookup the cert files
per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
The cert/key files are anyway protected by file permissions in either
case, so I guess there is no additional security provided by either
method.

Regards,
Ashish

On Mon, Mar 6, 2017 at 1:23 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Sun, Mar 05, 2017 at 04:26:05PM -0800, ashish mittal wrote:
>> On Wed, Mar 1, 2017 at 1:18 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>> >
>> > Yes, that's how other parts of QEMU deal with SSL
>> >
>> > NB, QEMU needs to pass 3 paths to libqnio - the client cert, the
>> > client key, and the certificate authority certlist
>> >
>>
>> I see that the QEMU TLS code internally does expect to find a cacert,
>> and errors out if it is missing. I had to create one and place it in
>> the directory where we keep the client key/cert files. The code diff
>> below requires all three files.
>
> Yes, the client cert/key have to be paired with the correct CA cert. We cannot
> assume that the default CA cert bundle contains the right CA for use with the
> cert/key. The default CA bundle contains all the public commercial CAs, and in
> general a QEMU deployment will never use any of those - the site will use a
> private, internal-only CA. While you could add the private CA to the global CA
> bundle, that is not desirable from a security POV, as it opens the deployment
> to risk from a rogue CA.
>
>> > The include/crypto/tlscredsx509.h file has constants defined for the
>> > standard filenames - eg you would concatenate the directory with
>> > the constants QCRYPTO_TLS_CREDS_X509_CLIENT_KEY.
>> >
>> > This gives the filenames you can then pass to libqnio
>> >
>>
>> I am using function qcrypto_tls_creds_get_path() to achieve the same.
>> Hope this is OK.
>
> Yep, that's fine.
>
>> Example CLI accepting the new TLS credentials:
>>
>> [amittal2@camshaft qemu] 2017-03-05 15:54:55$ ./qemu-io --trace
>> enable=vxhs* --object
>> tls-creds-x509,id=tls0,dir=/etc/pki/qemu/vxhs,endpoint=client -c 'read
>> 66000 128k' 'json:{"server.host": "127.0.0.1", "server.port": "9999",
>> "vdisk-id": "/test.raw", "driver": "vxhs", "tls-creds":"tls0"}'
>> 15116@1488758101.084355:vxhs_open_vdiskid Opening vdisk-id /test.raw
>> 15116@1488758101.084396:vxhs_get_creds cacert
>> /etc/pki/qemu/vxhs/ca-cert.pem, client_key
>> /etc/pki/qemu/vxhs/client-key.pem, client_cert
>> /etc/pki/qemu/vxhs/client-cert.pem   <===== !!!! NOTE !!!!
>> 15116@1488758101.084402:vxhs_open_hostinfo Adding host 127.0.0.1:9999
>> to BDRVVXHSState
>> 15116@1488758101.092060:vxhs_get_vdisk_stat vDisk /test.raw stat ioctl
>> returned size 1048576
>> read 131072/131072 bytes at offset 66000
>> 128 KiB, 1 ops; 0.0006 sec (188.537 MiB/sec and 1508.2956 ops/sec)
>> 15116@1488758101.094643:vxhs_close Closing vdisk /test.raw
>> [amittal2@camshaft qemu] 2017-03-05 15:55:01$
>
> That looks ok.
>
>> @@ -315,33 +374,39 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
>>      if (strlen(server_host_opt) > MAXHOSTNAMELEN) {
>>          error_setg(&local_err, "server.host cannot be more than %d characters",
>>                     MAXHOSTNAMELEN);
>> -        qdict_del(backing_options, str);
>>          ret = -EINVAL;
>>          goto out;
>>      }
>>
>> -    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
>> +    /* check if we got tls-creds via the --object argument */
>> +    s->tlscredsid = g_strdup(qemu_opt_get(opts, "tls-creds"));
>> +    if (s->tlscredsid) {
>> +        vxhs_get_tls_creds(s->tlscredsid, &cacert, &client_key,
>> +                           &client_cert, &local_err);
>> +        if (local_err != NULL) {
>> +            ret = -EINVAL;
>> +            goto out;
>> +        }
>> +        trace_vxhs_get_creds(cacert, client_key, client_cert);
>> +    }
>>
>> +    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
>>      s->vdisk_hostinfo.port = g_ascii_strtoll(qemu_opt_get(tcp_opts,
>>                                                            VXHS_OPT_PORT),
>>                                                            NULL, 0);
>>
>>      trace_vxhs_open_hostinfo(s->vdisk_hostinfo.host,
>> -                         s->vdisk_hostinfo.port);
>> -
>> -    /* free the 'server.' entries allocated by previous call to
>> -     * qdict_extract_subqdict()
>> -     */
>> -    qdict_del(backing_options, str);
>> +                             s->vdisk_hostinfo.port);
>>
>>      of_vsa_addr = g_strdup_printf("of://%s:%d",
>> -                                s->vdisk_hostinfo.host,
>> -                                s->vdisk_hostinfo.port);
>> +                                  s->vdisk_hostinfo.host,
>> +                                  s->vdisk_hostinfo.port);
>>
>>      /*
>>       * Open qnio channel to storage agent if not opened before.
>>       */
>> -    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0);
>> +    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0,
>> +                           client_key, client_cert);
>
> You need to pass  ca_cert into this method too.
>
>
> Regards,
> Daniel
Daniel P. Berrangé March 8, 2017, 9:13 a.m. UTC | #3
On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> Thanks! There is one more input I need some help with!
> 
> VxHS network library opens a fixed number of connection channels to a
> given host, and all the vdisks (that connect to the same host) share
> these connection channels.
> 
> Therefore, we need to open secure channels to a specific target host
> only once for the first vdisk that connects to that host. All the
> other vdisks that connect to the same target host will share the same
> set of secure communication channels.
> 
> I hope the above scheme is acceptable?

I don't think I'm in favour of such an approach, as it forces a single
QEMU process to use the same privileges for all disks it uses.

Consider an example where a QEMU process has two disks, one shared
readonly disk and one exclusive writable disk, both on the same host.

It is reasonable as an administrator to want to use different credentials
for each of these. ie, they might have a set of well known credentials to
authenticate to get access to the read-only disk, but have a different set
of strictly limited access credentials to get access to the writable disk

Trying to re-use the same connection for multiple disks prevents QEMU from
authenticating with different credentials per disk, so I don't think that
is a good approach - each disk should have totally independent state.

 > 
> If yes, then we have a couple of options to implement this:
> 
> (1) Accept the TLS credentials per vdisk using the previously
> discussed --object tls-creds-x509 mechanism. In this case, if more
> than one vdisk have the same host info, then we will use only the
> first one's creds to set up the secure connection, and ignore the
> others. vdisks that connect to different target hosts will use their
> individual tls-creds-x509 to set up the secure channels. This is, of
> course, not relevant for qemu-img/qemu-io as they can open only one
> vdisk at a time.
> 
> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
> single such argument on the command line for vxhs block device code to
> consume - if that is possible! One way to achieve this could be the
> user/password authentication we discussed earlier, which we could use
> to pass the directory where cert/keys are kept.
> 
> (3) Use the instance UUID, when available, to lookup the cert files
> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
> The cert/key files are anyway protected by file permissions in either
> case, so I guess there is no additional security provided by either
> method.

Regards,
Daniel
Ketan Nilangekar March 8, 2017, 1:04 p.m. UTC | #4
> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> 
>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>> Thanks! There is one more input I need some help with!
>> 
>> VxHS network library opens a fixed number of connection channels to a
>> given host, and all the vdisks (that connect to the same host) share
>> these connection channels.
>> 
>> Therefore, we need to open secure channels to a specific target host
>> only once for the first vdisk that connects to that host. All the
>> other vdisks that connect to the same target host will share the same
>> set of secure communication channels.
>> 
>> I hope the above scheme is acceptable?
> 
> I don't think I'm in favour of such an approach, as it forces a single
> QEMU process to use the same privileges for all disks it uses.
> 
> Consider an example where a QEMU process has two disks, one shared
> readonly disk and one exclusive writable disk, both on the same host.
> 

This is not a use case for VxHS as a solution. We do not support sharing of vdisks across QEMU instances.

The VxHS library was thus not designed to open different connections for individual vdisks within a QEMU instance.

Implementing this would involve a rewrite of significant parts of the libvxhs client and server. Is this a new requirement for acceptance into QEMU?


> It is reasonable as an administrator to want to use different credentials
> for each of these. ie, they might have a set of well known credentials to
> authenticate to get access to the read-only disk, but have a different set
> of strictly limited access credentials to get access to the writable disk
> 
> Trying to re-use the same connection for multiple disks prevents QEMU from
> authenticating with different credentials per disk, so I don't think that
> is a good approach - each disk should have totally independent state.
> 
>> 
>> If yes, then we have a couple of options to implement this:
>> 
>> (1) Accept the TLS credentials per vdisk using the previously
>> discussed --object tls-creds-x509 mechanism. In this case, if more
>> than one vdisk have the same host info, then we will use only the
>> first one's creds to set up the secure connection, and ignore the
>> others. vdisks that connect to different target hosts will use their
>> individual tls-creds-x509 to set up the secure channels. This is, of
>> course, not relevant for qemu-img/qemu-io as they can open only one
>> vdisk at a time.
>> 
>> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
>> single such argument on the command line for vxhs block device code to
>> consume - if that is possible! One way to achieve this could be the
>> user/password authentication we discussed earlier, which we could use
>> to pass the directory where cert/keys are kept.
>> 
>> (3) Use the instance UUID, when available, to lookup the cert files
>> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
>> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
>> The cert/key files are anyway protected by file permissions in either
>> case, so I guess there is no additional security provided by either
>> method.
> 
> Regards,
> Daniel
Ashish Mittal March 8, 2017, 5:59 p.m. UTC | #5
On Wed, Mar 8, 2017 at 5:04 AM, Ketan Nilangekar
<Ketan.Nilangekar@veritas.com> wrote:
>
>
>> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>>
>>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>>> Thanks! There is one more input I need some help with!
>>>
>>> VxHS network library opens a fixed number of connection channels to a
>>> given host, and all the vdisks (that connect to the same host) share
>>> these connection channels.
>>>
>>> Therefore, we need to open secure channels to a specific target host
>>> only once for the first vdisk that connects to that host. All the
>>> other vdisks that connect to the same target host will share the same
>>> set of secure communication channels.
>>>
>>> I hope the above scheme is acceptable?
>>
>> I don't think I'm in favour of such an approach, as it forces a single
>> QEMU process to use the same privileges for all disks it uses.
>>
>> Consider an example where a QEMU process has two disks, one shared
>> readonly disk and one exclusive writable disk, both on the same host.
>>
>
> This is not a use case for VxHS as a solution. We do not support sharing of vdisks across QEMU instances.
>
> Vxhs library was thus not designed to open different connections for individual vdisks within a QEMU instance.
>
> Implementing this will involve rewrite of significant parts of libvxhs client and server. Is this a new requirement for acceptance into QEMU?
>
>
>> It is reasonable as an administrator to want to use different credentials
>> for each of these. ie, they might have a set of well known credentials to
>> authenticate to get access to the read-only disk, but have a different set
>> of strictly limited access credentials to get access to the writable disk
>>
>> Trying to re-use the same connection for multiple disks prevents QEMU from
>> authenticating with different credentials per disk, so I don't think that
>> is a good approach - each disk should have totally independent state.
>>

libvxhs does not claim to fit all general-purpose use-cases. It was
purpose-built to be the communication channel for our block device
service. As such, we do not need/support all the general use-cases. For
the same reason we changed the name of the library from libqnio to
libvxhs (v8 changelog, #2).

Having dedicated communication channels for each device, or sharing
the channels between multiple devices, should both be acceptable
choices. The latter, IO multiplexing, is also a widely adopted IO
model. It just happens to fit our use-cases better.

Binding access control to a communication channel would prevent
anybody from using the latter approach. Having a separate way to handle
access control would permit the use of the latter as well.

>>>
>>> If yes, then we have a couple of options to implement this:
>>>
>>> (1) Accept the TLS credentials per vdisk using the previously
>>> discussed --object tls-creds-x509 mechanism. In this case, if more
>>> than one vdisk have the same host info, then we will use only the
>>> first one's creds to set up the secure connection, and ignore the
>>> others. vdisks that connect to different target hosts will use their
>>> individual tls-creds-x509 to set up the secure channels. This is, of
>>> course, not relevant for qemu-img/qemu-io as they can open only one
>>> vdisk at a time.
>>>
>>> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
>>> single such argument on the command line for vxhs block device code to
>>> consume - if that is possible! One way to achieve this could be the
>>> user/password authentication we discussed earlier, which we could use
>>> to pass the directory where cert/keys are kept.
>>>
>>> (3) Use the instance UUID, when available, to lookup the cert files
>>> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
>>> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
>>> The cert/key files are anyway protected by file permissions in either
>>> case, so I guess there is no additional security provided by either
>>> method.
>>
>> Regards,
>> Daniel
Daniel P. Berrangé March 8, 2017, 6:11 p.m. UTC | #6
On Wed, Mar 08, 2017 at 09:59:32AM -0800, ashish mittal wrote:
> On Wed, Mar 8, 2017 at 5:04 AM, Ketan Nilangekar
> <Ketan.Nilangekar@veritas.com> wrote:
> >
> >
> >> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> >>
> >>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> >>> Thanks! There is one more input I need some help with!
> >>>
> >>> VxHS network library opens a fixed number of connection channels to a
> >>> given host, and all the vdisks (that connect to the same host) share
> >>> these connection channels.
> >>>
> >>> Therefore, we need to open secure channels to a specific target host
> >>> only once for the first vdisk that connects to that host. All the
> >>> other vdisks that connect to the same target host will share the same
> >>> set of secure communication channels.
> >>>
> >>> I hope the above scheme is acceptable?
> >>
> >> I don't think I'm in favour of such an approach, as it forces a single
> >> QEMU process to use the same privileges for all disks it uses.
> >>
> >> Consider an example where a QEMU process has two disks, one shared
> >> readonly disk and one exclusive writable disk, both on the same host.
> >>
> >
> > This is not a use case for VxHS as a solution. We do not support sharing of vdisks across QEMU instances.
> >
> > Vxhs library was thus not designed to open different connections for individual vdisks within a QEMU instance.
> >
> > Implementing this will involve rewrite of significant parts of libvxhs client and server. Is this a new requirement for acceptance into QEMU?
> >
> >
> >> It is reasonable as an administrator to want to use different credentials
> >> for each of these. ie, they might have a set of well known credentials to
> >> authenticate to get access to the read-only disk, but have a different set
> >> of strictly limited access credentials to get access to the writable disk
> >>
> >> Trying to re-use the same connection for multiple disks prevents QEMU from
> >> authenticating with different credentials per disk, so I don't think that
> >> is a good approach - each disk should have totally independent state.
> >>
> 
> libvxhs does not claim to fit all general-purpose use-cases. It was
> purpose-built to be the communication channel for our block device
> service. As such, we do not need/support all the general use-cases. For
> the same reason we changed the name of the library from libqnio to
> libvxhs (v8 changelog, #2).

I raise these kinds of points because they are relevant to apps like
OpenStack, against which you've proposed VxHS support. OpenStack
intends to allow a single volume to be shared by multiple guests,
so by declaring that out of scope you're crippling certain use cases
within OpenStack. Of course you're free to make such a decision,
but it makes VxHS a less compelling technology to use IMHO.

> Having dedicated communication channels for each device, or sharing
> the channels between multiple devices, should both be acceptable
> choices. The latter, IO multiplexing, is also a widely adopted IO
> model. It just happens to fit our use-cases better.
> 
> Binding access control to a communication channel would prevent
> anybody from using the latter approach. Having a separate way to allow
> access-control would permit the use of latter also.

Sharing access control across multiple disks does not fit effectively
with the model used by apps that manage QEMU. Libvirt, and apps above
libvirt, like OpenStack, oVirt, and things like Kubernetes, all represent
the information required to connect to a network block device on a
per-disk basis - there's no sense of having some information that is
shared across all disks associated with a VM.

So from the POV of modelling this in QEMU, all information needs to be
specified against the individual -drive / -blockdev. If you really must,
you will have to reject configurations which imply talking to the same
host with incompatible parameters. Better would be to dynamically
determine whether you can re-use an existing connection or must spawn a
new one, based on the config.

Regards,
Daniel
Ashish Mittal March 11, 2017, 3:04 a.m. UTC | #7
On Wed, Mar 8, 2017 at 10:11 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Wed, Mar 08, 2017 at 09:59:32AM -0800, ashish mittal wrote:
>> On Wed, Mar 8, 2017 at 5:04 AM, Ketan Nilangekar
>> <Ketan.Nilangekar@veritas.com> wrote:
>> >
>> >
>> >> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>> >>
>> >>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>> >>> Thanks! There is one more input I need some help with!
>> >>>
>> >>> VxHS network library opens a fixed number of connection channels to a
>> >>> given host, and all the vdisks (that connect to the same host) share
>> >>> these connection channels.
>> >>>
>> >>> Therefore, we need to open secure channels to a specific target host
>> >>> only once for the first vdisk that connects to that host. All the
>> >>> other vdisks that connect to the same target host will share the same
>> >>> set of secure communication channels.
>> >>>
>> >>> I hope the above scheme is acceptable?
>> >>
>> >> I don't think I'm in favour of such an approach, as it forces a single
>> >> QEMU process to use the same privileges for all disks it uses.
>> >>
>> >> Consider an example where a QEMU process has two disks, one shared
>> >> readonly disk and one exclusive writable disk, both on the same host.
>> >>
>> >
>> > This is not a use case for VxHS as a solution. We do not support sharing of vdisks across QEMU instances.
>> >
>> > Vxhs library was thus not designed to open different connections for individual vdisks within a QEMU instance.
>> >
>> > Implementing this will involve rewrite of significant parts of libvxhs client and server. Is this a new requirement for acceptance into QEMU?
>> >
>> >
>> >> It is reasonable as an administrator to want to use different credentials
>> >> for each of these. ie, they might have a set of well known credentials to
>> >> authenticate to get access to the read-only disk, but have a different set
>> >> of strictly limited access credentials to get access to the writable disk
>> >>
>> >> Trying to re-use the same connection for multiple cause prevents QEMU from
>> >> authenticating with different credentials per disk, so I don't think that
>> >> is a good approach - each disk should have totally independant state.
>> >>
>>
>> libvxhs does not make any claim to fit all the general purpose
>> use-cases. It was purpose-built to be the communication channel for
>> our block device service. As such, we do not need/support all the
>> general use-cases. For the same reason we changed the name of the
>> library from linqnio to libvxhs (v8 changelog, #2).
>
> I raise these kind of points because they are relevant to apps like
> OpenStack, against which you've proposed VHXS support. OpenStack
> intends to allow a single volume to be shared by multiple guests,
> so declare that out of scope you're crippling certain use cases
> within openstack. Of course you're free to make such a decision,
> but it makes VHXS a less compelling technology to use IMHO.
>

Fair point! Sharing of the same disk across multiple guests would
require significant work from our side, and we would like to evaluate
that after OpenStack starts supporting the feature. Would adding this
support now be a prerequisite for merging vxhs code to qemu?

>> Having dedicated communication channels for each device, or sharing
>> the channels between multiple devices, should both be acceptable
>> choices. The latter, IO multiplexing, is also a widely adopted IO
>> model. It just happens to fit our use-cases better.
>>
>> Binding access control to a communication channel would prevent
>> anybody from using the latter approach. Having a separate way to allow
>> access-control would permit the use of latter also.
>
> Sharing access control across multiple disks does not fit in effectively
> with the model used by apps that manage QEMU. Libvirt, and apps above
> libvirt, like OpenStack, oVirt, and things like Kubernetes all represent
> the information required to connect to a network block device, on a
> per-disk basis - there's no sense of having some information that is
> shared across all disks associated with a VM.
>
> So from the POV of modelling this in QEMU, all information needs to be
> specified against the individual -drive / -blockdev. If you really must,
> you will just have to reject configurations which imply talking to the
> same host, with incompatible parameters. Better would be to dynamically
> determine if you can re-use connections, vs spawn new connection based
> on the config.
>

Would this be a prerequisite for merging vxhs?

> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
Daniel P. Berrangé March 13, 2017, 9:56 a.m. UTC | #8
On Fri, Mar 10, 2017 at 07:04:38PM -0800, ashish mittal wrote:
> On Wed, Mar 8, 2017 at 10:11 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> > On Wed, Mar 08, 2017 at 09:59:32AM -0800, ashish mittal wrote:
> >> On Wed, Mar 8, 2017 at 5:04 AM, Ketan Nilangekar
> >> <Ketan.Nilangekar@veritas.com> wrote:
> >> >
> >> >
> >> >> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> >> >>
> >> >>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> >> >>> Thanks! There is one more input I need some help with!
> >> >>>
> >> >>> VxHS network library opens a fixed number of connection channels to a
> >> >>> given host, and all the vdisks (that connect to the same host) share
> >> >>> these connection channels.
> >> >>>
> >> >>> Therefore, we need to open secure channels to a specific target host
> >> >>> only once for the first vdisk that connects to that host. All the
> >> >>> other vdisks that connect to the same target host will share the same
> >> >>> set of secure communication channels.
> >> >>>
> >> >>> I hope the above scheme is acceptable?
> >> >>
> >> >> I don't think I'm in favour of such an approach, as it forces a single
> >> >> QEMU process to use the same privileges for all disks it uses.
> >> >>
> >> >> Consider an example where a QEMU process has two disks, one shared
> >> >> readonly disk and one exclusive writable disk, both on the same host.
> >> >>
> >> >
> >> > This is not a use case for VxHS as a solution. We do not support sharing of vdisks across QEMU instances.
> >> >
> >> > Vxhs library was thus not designed to open different connections for individual vdisks within a QEMU instance.
> >> >
> >> > Implementing this will involve rewrite of significant parts of libvxhs client and server. Is this a new requirement for acceptance into QEMU?
> >> >
> >> >
> >> >> It is reasonable as an administrator to want to use different credentials
> >> >> for each of these. ie, they might have a set of well known credentials to
> >> >> authenticate to get access to the read-only disk, but have a different set
> >> >> of strictly limited access credentials to get access to the writable disk
> >> >>
> >> >> Trying to re-use the same connection for multiple cause prevents QEMU from
> >> >> authenticating with different credentials per disk, so I don't think that
> >> >> is a good approach - each disk should have totally independant state.
> >> >>
> >>
> >> libvxhs does not make any claim to fit all the general purpose
> >> use-cases. It was purpose-built to be the communication channel for
> >> our block device service. As such, we do not need/support all the
> >> general use-cases. For the same reason we changed the name of the
> >> library from linqnio to libvxhs (v8 changelog, #2).
> >
> > I raise these kind of points because they are relevant to apps like
> > OpenStack, against which you've proposed VHXS support. OpenStack
> > intends to allow a single volume to be shared by multiple guests,
> > so declare that out of scope you're crippling certain use cases
> > within openstack. Of course you're free to make such a decision,
> > but it makes VHXS a less compelling technology to use IMHO.
> >
> 
> Fair point! Sharing of the same disk across multiple guests would
> require significant work from our side, and we would like to evaluate
> that after OpenStack starts supporting the feature. Would adding this
> support now, be a prerequisite for merging vxhs code to qemu?

No, it isn't a requirement - just a (strong-ish) suggestion. We just need
to ensure that the CLI syntax allows us to support it in the future without
backwards incompatible changes to the CLI.

Regards,
Daniel
Daniel P. Berrangé March 13, 2017, 9:57 a.m. UTC | #9
On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> Thanks! There is one more input I need some help with!
> 
> VxHS network library opens a fixed number of connection channels to a
> given host, and all the vdisks (that connect to the same host) share
> these connection channels.
> 
> Therefore, we need to open secure channels to a specific target host
> only once for the first vdisk that connects to that host. All the
> other vdisks that connect to the same target host will share the same
> set of secure communication channels.
> 
> I hope the above scheme is acceptable?
> 
> If yes, then we have a couple of options to implement this:
> 
> (1) Accept the TLS credentials per vdisk using the previously
> discussed --object tls-creds-x509 mechanism. In this case, if more
> than one vdisk have the same host info, then we will use only the
> first one's creds to set up the secure connection, and ignore the
> others. vdisks that connect to different target hosts will use their
> individual tls-creds-x509 to set up the secure channels. This is, of
> course, not relevant for qemu-img/qemu-io as they can open only one
> vdisk at a time.

It looks like option 1 here is the way to go. Just report an error if
there are multiple creds provided for the same host and they don't
match.

> 
> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
> single such argument on the command line for vxhs block device code to
> consume - if that is possible! One way to achieve this could be the
> user/password authentication we discussed earlier, which we could use
> to pass the directory where cert/keys are kept.
> 
> (3) Use the instance UUID, when available, to lookup the cert files
> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
> The cert/key files are anyway protected by file permissions in either
> case, so I guess there is no additional security provided by either
> method.

Regards,
Daniel
Ashish Mittal March 17, 2017, 12:29 a.m. UTC | #10
On Mon, Mar 13, 2017 at 2:57 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>> Thanks! There is one more input I need some help with!
>>
>> VxHS network library opens a fixed number of connection channels to a
>> given host, and all the vdisks (that connect to the same host) share
>> these connection channels.
>>
>> Therefore, we need to open secure channels to a specific target host
>> only once for the first vdisk that connects to that host. All the
>> other vdisks that connect to the same target host will share the same
>> set of secure communication channels.
>>
>> I hope the above scheme is acceptable?
>>
>> If yes, then we have a couple of options to implement this:
>>
>> (1) Accept the TLS credentials per vdisk using the previously
>> discussed --object tls-creds-x509 mechanism. In this case, if more
>> than one vdisk have the same host info, then we will use only the
>> first one's creds to set up the secure connection, and ignore the
>> others. vdisks that connect to different target hosts will use their
>> individual tls-creds-x509 to set up the secure channels. This is, of
>> course, not relevant for qemu-img/qemu-io as they can open only one
>> vdisk at a time.
>
> It looks like option 1 here is the way to go. Just report an error if
> there are multiple creds provided for the same host and they don't
> match.
>

I have made changes to implement option 1 in the library (branch
per_channel_ssl).
Can you please help review it?
https://github.com/VeritasHyperScale/libqnio/compare/per_channel_ssl

Here's the changelog:
(1) Changed code to not use instance UUID for setting up SSL context.
(2) User is supposed to pass the cacert, client_key and client_cert
    files to iio_open(). These will be used to set up a per-channel secure SSL
    connection to the server. All three values are needed to set up a
    secure connection.
(3) If the secure channel to a particular host is already open, other
    block device connections to the same host will have to provide
    TLS/SSL credentials that match the original one.
(4) Set default locations for trusted client CA certificates
     based on user specified cacert file.
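Points (2) and (3) above boil down to two small checks — all three credential files must be present before a secure channel is set up, and a later opener of an already-secured channel must present matching creds. A sketch of that logic, using hypothetical helper names rather than the actual libqnio code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Rule (2): all three values are needed to set up a secure connection. */
static bool tls_args_complete(const char *cacert, const char *client_key,
                              const char *client_cert)
{
    return cacert != NULL && client_key != NULL && client_cert != NULL;
}

/* Rule (3): a device connecting over an already-open secure channel must
 * supply credentials matching those the channel was opened with. */
static bool tls_args_match(const char *existing, const char *supplied)
{
    return (existing == NULL && supplied == NULL) ||
           (existing && supplied && strcmp(existing, supplied) == 0);
}
```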

NB - I have given steps to test SSL communication (using the supplied
test client/server programs) in the commit log. I have not tested
using qemu binary yet. Will run more tests in the days to come.

qemu vxhs patch changes should be pretty straightforward, given that
we have already discussed how to implement passing --object
tls-creds-x509 per block device.

Thanks!

>>
>> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
>> single such argument on the command line for vxhs block device code to
>> consume - if that is possible! One way to achieve this could be the
>> user/password authentication we discussed earlier, which we could use
>> to pass the directory where cert/keys are kept.
>>
>> (3) Use the instance UUID, when available, to lookup the cert files
>> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
>> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
>> The cert/key files are anyway protected by file permissions in either
>> case, so I guess there is no additional security provided by either
>> method.
>
> Regards,
> Daniel
Ashish Mittal March 18, 2017, 1:44 a.m. UTC | #11
On Thu, Mar 16, 2017 at 5:29 PM, ashish mittal <ashmit602@gmail.com> wrote:
> On Mon, Mar 13, 2017 at 2:57 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>>> Thanks! There is one more input I need some help with!
>>>
>>> VxHS network library opens a fixed number of connection channels to a
>>> given host, and all the vdisks (that connect to the same host) share
>>> these connection channels.
>>>
>>> Therefore, we need to open secure channels to a specific target host
>>> only once for the first vdisk that connects to that host. All the
>>> other vdisks that connect to the same target host will share the same
>>> set of secure communication channels.
>>>
>>> I hope the above scheme is acceptable?
>>>
>>> If yes, then we have a couple of options to implement this:
>>>
>>> (1) Accept the TLS credentials per vdisk using the previously
>>> discussed --object tls-creds-x509 mechanism. In this case, if more
>>> than one vdisk have the same host info, then we will use only the
>>> first one's creds to set up the secure connection, and ignore the
>>> others. vdisks that connect to different target hosts will use their
>>> individual tls-creds-x509 to set up the secure channels. This is, of
>>> course, not relevant for qemu-img/qemu-io as they can open only one
>>> vdisk at a time.
>>
>> It looks like option 1 here is the way to go. Just report an error if
>> there are multiple creds provided for the same host and they don't
>> match.
>>
>
> I have made changes to implement option 1 in the library (branch
> per_channel_ssl).
> Can you please help review it?
> https://github.com/VeritasHyperScale/libqnio/compare/per_channel_ssl
>
> Here's the changelog:
> (1) Changed code to not use instance UUID for setting up SSL context.
> (2) User is supposed to pass the cacert, client_key and client_cert
>     files to iio_open(). These will be used to set up a per-channel secure SSL
>     connection to the server. All three values are needed to set up a
>     secure connection.
> (3) If the secure channel to a particular host is already open, other
>     block device connections to the same host will have to provide
>     TLS/SSL credentials that match the original one.
> (4) Set default locations for trusted client CA certificates
>      based on user specified cacert file.
>
> NB - I have given steps to test SSL communication (using the supplied
> test client/server programs) in the commit log. I have not tested
> using qemu binary yet. Will run more tests in the days to come.
>
> qemu vxhs patch changes should be pretty straightforward, given that
> we have already discussed how to implement passing --object
> tls-creds-x509 per block device.
>
> Thanks!
>

Update -
(1) Successfully tested SSL communication using qemu-io and the test server.
(2) Minor changes to per_channel_ssl branch.
(3) Created a pull request.

Please review per convenience. Thanks!

>>>
>>> (2) Instead of having a per-vdisk --object tls-creds-x509, have a
>>> single such argument on the command line for vxhs block device code to
>>> consume - if that is possible! One way to achieve this could be the
>>> user/password authentication we discussed earlier, which we could use
>>> to pass the directory where cert/keys are kept.
>>>
>>> (3) Use the instance UUID, when available, to lookup the cert files
>>> per instance (i.e. for qemu-kvm), and use the --object tls-creds-x509
>>> mechanism, when instance UUID is NULL (i.e. qemu-io, qemu-img etc).
>>> The cert/key files are anyway protected by file permissions in either
>>> case, so I guess there is no additional security provided by either
>>> method.
>>
>> Regards,
>> Daniel
Daniel P. Berrangé March 20, 2017, 12:55 p.m. UTC | #12
On Fri, Mar 17, 2017 at 06:44:56PM -0700, ashish mittal wrote:
> On Thu, Mar 16, 2017 at 5:29 PM, ashish mittal <ashmit602@gmail.com> wrote:
> > On Mon, Mar 13, 2017 at 2:57 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> >> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> >>> Thanks! There is one more input I need some help with!
> >>>
> >>> VxHS network library opens a fixed number of connection channels to a
> >>> given host, and all the vdisks (that connect to the same host) share
> >>> these connection channels.
> >>>
> >>> Therefore, we need to open secure channels to a specific target host
> >>> only once for the first vdisk that connects to that host. All the
> >>> other vdisks that connect to the same target host will share the same
> >>> set of secure communication channels.
> >>>
> >>> I hope the above scheme is acceptable?
> >>>
> >>> If yes, then we have a couple of options to implement this:
> >>>
> >>> (1) Accept the TLS credentials per vdisk using the previously
> >>> discussed --object tls-creds-x509 mechanism. In this case, if more
> >>> than one vdisk have the same host info, then we will use only the
> >>> first one's creds to set up the secure connection, and ignore the
> >>> others. vdisks that connect to different target hosts will use their
> >>> individual tls-creds-x509 to set up the secure channels. This is, of
> >>> course, not relevant for qemu-img/qemu-io as they can open only one
> >>> vdisk at a time.
> >>
> >> It looks like option 1 here is the way to go. Just report an error if
> >> there are multiple creds provided for the same host and they don't
> >> match.
> >>
> >
> > I have made changes to implement option 1 in the library (branch
> > per_channel_ssl).
> > Can you please help review it?
> > https://github.com/VeritasHyperScale/libqnio/compare/per_channel_ssl
> >
> > Here's the changelog:
> > (1) Changed code to not use instance UUID for setting up SSL context.
> > (2) User is supposed to pass the cacert, client_key and client_cert
> >     files to iio_open(). These will be used to set up a per-channel secure SSL
> >     connection to the server. All three values are needed to set up a
> >     secure connection.
> > (3) If the secure channel to a particular host is already open, other
> >     block device connections to the same host will have to provide
> >     TLS/SSL credentials that match the original one.
> > (4) Set default locations for trusted client CA certificates
> >      based on user specified cacert file.
> >
> > NB - I have given steps to test SSL communication (using the supplied
> > test client/server programs) in the commit log. I have not tested
> > using qemu binary yet. Will run more tests in the days to come.
> >
> > qemu vxhs patch changes should be pretty straightforward, given that
> > we have already discussed how to implement passing --object
> > tls-creds-x509 per block device.
> >
> > Thanks!
> >
> 
> Update -
> (1) Successfully tested SSL communication using qemu-io and the test server.
> (2) Minor changes to per_channel_ssl branch.
> (3) Created a pull request.
> 
> Please review per convenience. Thanks!

IIUC, on that branch the 'is_secure()' method is still looking for the
directory /var/lib/libvxhs/secure to exist on the host. If that is not
present, then it appears to be silently ignoring the SSL certs passed
in from QEMU.

IMHO it should enable TLS when 'cacert' passed to iio_open is not NULL,
not relying on a magic directory to exist.
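In other words, the secure-mode decision should reduce to a check on the arguments actually passed to iio_open(), not on the filesystem state of the host. An illustrative helper (the real check would live inside libvxhs):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Enable TLS exactly when the caller supplied a CA certificate path,
 * rather than probing for a magic directory like /var/lib/libvxhs/secure.
 * Hypothetical function name, for illustration only. */
static bool channel_should_use_tls(const char *cacert)
{
    return cacert != NULL;
}
```

This way, certs passed in from QEMU can never be silently ignored: if the caller asked for TLS, the channel is secure or the open fails.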

Regards,
Daniel
Ashish Mittal March 23, 2017, 12:03 a.m. UTC | #13
On Mon, Mar 20, 2017 at 5:55 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
> On Fri, Mar 17, 2017 at 06:44:56PM -0700, ashish mittal wrote:
>> On Thu, Mar 16, 2017 at 5:29 PM, ashish mittal <ashmit602@gmail.com> wrote:
>> > On Mon, Mar 13, 2017 at 2:57 AM, Daniel P. Berrange <berrange@redhat.com> wrote:
>> >> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
>> >>> Thanks! There is one more input I need some help with!
>> >>>
>> >>> VxHS network library opens a fixed number of connection channels to a
>> >>> given host, and all the vdisks (that connect to the same host) share
>> >>> these connection channels.
>> >>>
>> >>> Therefore, we need to open secure channels to a specific target host
>> >>> only once for the first vdisk that connects to that host. All the
>> >>> other vdisks that connect to the same target host will share the same
>> >>> set of secure communication channels.
>> >>>
>> >>> I hope the above scheme is acceptable?
>> >>>
>> >>> If yes, then we have a couple of options to implement this:
>> >>>
>> >>> (1) Accept the TLS credentials per vdisk using the previously
>> >>> discussed --object tls-creds-x509 mechanism. In this case, if more
>> >>> than one vdisk have the same host info, then we will use only the
>> >>> first one's creds to set up the secure connection, and ignore the
>> >>> others. vdisks that connect to different target hosts will use their
>> >>> individual tls-creds-x509 to set up the secure channels. This is, of
>> >>> course, not relevant for qemu-img/qemu-io as they can open only one
>> >>> vdisk at a time.
>> >>
>> >> It looks like option 1 here is the way to go. Just report an error if
>> >> there are multiple creds provided for the same host and they don't
>> >> match.
>> >>
>> >
>> > I have made changes to implement option 1 in the library (branch
>> > per_channel_ssl).
>> > Can you please help review it?
>> > https://github.com/VeritasHyperScale/libqnio/compare/per_channel_ssl
>> >
>> > Here's the changelog:
>> > (1) Changed code to not use instance UUID for setting up SSL context.
>> > (2) User is supposed to pass the cacert, client_key and client_cert
>> >     files to iio_open(). These will be used to set up a per-channel secure SSL
>> >     connection to the server. All three values are needed to set up a
>> >     secure connection.
>> > (3) If the secure channel to a particular host is already open, other
>> >     block device connections to the same host will have to provide
>> >     TLS/SSL credentials that match the original one.
>> > (4) Set default locations for trusted client CA certificates
>> >      based on user specified cacert file.
>> >
>> > NB - I have given steps to test SSL communication (using the supplied
>> > test client/server programs) in the commit log. I have not tested
>> > using qemu binary yet. Will run more tests in the days to come.
>> >
>> > qemu vxhs patch changes should be pretty straightforward, given that
>> > we have already discussed how to implement passing --object
>> > tls-creds-x509 per block device.
>> >
>> > Thanks!
>> >
>>
>> Update -
>> (1) Successfully tested SSL communication using qemu-io and the test server.
>> (2) Minor changes to per_channel_ssl branch.
>> (3) Created a pull request.
>>
>> Please review per convenience. Thanks!
>
> IIUC, on that branch the 'is_secure()' method is still looking for the
> directory /var/lib/libvxhs/secure to exist on the host. If that is not
> present, then it appears to be silently ignoring the SSL certs passed
> in from QEMU.
>
> IMHO it should enable TLS when 'cacert' passed to iio_open is not NULL,
> not relying on a magic directory to exist.
>

I have made changes per above to the library. Please see commits -

https://github.com/VeritasHyperScale/libqnio/commit/6c3261e9c9bb1350f4433a1ae4fcd98f7692cf39
https://github.com/VeritasHyperScale/libqnio/commit/502c74278457e6ac86a4ee4ad9102e56ff3be5d4

Commit log: Enable secure mode based on the SSL/TLS args passed in iio_open()

(1) Do not use /var/lib/libvxhs/secure to enable secure SSL mode on
    the client side.
(2) Instead, enable SSL mode if the user passes TLS/SSL creds for
    the block device on the qemu CLI.

Will be posting v10 qemu vxhs patch soon. v10 will work with the
latest library changes, and will support passing tls-creds-x509 creds
for every vxhs block device.


> Regards,
> Daniel

Patch

diff --git a/block/trace-events b/block/trace-events
index f193079..7758ec3 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -126,3 +126,4 @@  vxhs_open_hostinfo(char *of_vsa_addr, int port) "Adding host %s:%d to BDRVVXHSState"
 vxhs_open_iio_open(const char *host) "Failed to connect to storage agent on host %s"
 vxhs_parse_uri_hostinfo(char *host, int port) "Host: IP %s, Port %d"
 vxhs_close(char *vdisk_guid) "Closing vdisk %s"
+vxhs_get_creds(const char *cacert, const char *client_key, const char *client_cert) "cacert %s, client_key %s, client_cert %s"
diff --git a/block/vxhs.c b/block/vxhs.c
index 4f0633e..3e429e2 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -17,6 +17,9 @@ 
 #include "qemu/uri.h"
 #include "qapi/error.h"
 #include "qemu/uuid.h"
+#include "crypto/secret.h"
+#include "crypto/tlscredsx509.h"
+#include "crypto/tlscredspriv.h"

 #define VXHS_OPT_FILENAME           "filename"
 #define VXHS_OPT_VDISK_ID           "vdisk-id"
@@ -55,6 +58,7 @@  typedef struct VXHSvDiskHostsInfo {
 typedef struct BDRVVXHSState {
     VXHSvDiskHostsInfo vdisk_hostinfo; /* Per host info */
     char *vdisk_guid;
+    char *tlscredsid; /* tlscredsid */
 } BDRVVXHSState;

 static void vxhs_complete_aio_bh(void *opaque)
@@ -136,6 +140,11 @@  static QemuOptsList runtime_opts = {
             .type = QEMU_OPT_STRING,
             .help = "UUID of the VxHS vdisk",
         },
+        {
+            .name = "tls-creds",
+            .type = QEMU_OPT_STRING,
+            .help = "ID of the TLS/SSL credentials to use",
+        },
         { /* end of list */ }
     },
 };
@@ -244,6 +253,55 @@  static void vxhs_unref(void)
     }
 }

+static void vxhs_get_tls_creds(const char *id, char **cacert,
+                               char **key, char **cert, Error **errp)
+{
+    Object *obj;
+    QCryptoTLSCreds *creds = NULL;
+    QCryptoTLSCredsX509 *creds_x509 = NULL;
+
+    obj = object_resolve_path_component(
+        object_get_objects_root(), id);
+
+    if (!obj) {
+        error_setg(errp, "No TLS credentials with id '%s'",
+                   id);
+        return;
+    }
+
+    creds_x509 = (QCryptoTLSCredsX509 *)
+        object_dynamic_cast(obj, TYPE_QCRYPTO_TLS_CREDS_X509);
+
+    if (!creds_x509) {
+        error_setg(errp, "Object with id '%s' is not TLS credentials",
+                   id);
+        return;
+    }
+
+    creds = &creds_x509->parent_obj;
+
+    if (creds->endpoint != QCRYPTO_TLS_CREDS_ENDPOINT_CLIENT) {
+        error_setg(errp,
+                   "Expecting TLS credentials with a client endpoint");
+        return;
+    }
+
+    /*
+     * Get the cacert, client_cert and client_key file names.
+     */
+    if (qcrypto_tls_creds_get_path(creds, QCRYPTO_TLS_CREDS_X509_CA_CERT,
+                                   true, cacert, errp) < 0 ||
+        qcrypto_tls_creds_get_path(&creds_x509->parent_obj,
+                                   QCRYPTO_TLS_CREDS_X509_CLIENT_CERT,
+                                   false, cert, errp) < 0 ||
+        qcrypto_tls_creds_get_path(&creds_x509->parent_obj,
+                                   QCRYPTO_TLS_CREDS_X509_CLIENT_KEY,
+                                   false, key, errp) < 0) {
+        error_setg(errp,
+                   "Error retrieving information from  TLS object");
+    }
+}
+
 static int vxhs_open(BlockDriverState *bs, QDict *options,
                      int bdrv_flags, Error **errp)
 {
@@ -257,6 +315,9 @@  static int vxhs_open(BlockDriverState *bs, QDict *options,
     const char *server_host_opt;
     char *str = NULL;
     int ret = 0;
+    char *cacert = NULL;
+    char *client_key = NULL;
+    char *client_cert = NULL;

     ret = vxhs_init_and_ref();
     if (ret < 0) {
@@ -297,8 +358,7 @@  static int vxhs_open(BlockDriverState *bs, QDict *options,
     qdict_extract_subqdict(options, &backing_options, str);

     qemu_opts_absorb_qdict(tcp_opts, backing_options, &local_err);
-    if (local_err) {
-        qdict_del(backing_options, str);
+    if (local_err != NULL) {
         ret = -EINVAL;
         goto out;
     }
@@ -307,7 +367,6 @@  static int vxhs_open(BlockDriverState *bs, QDict *options,
     if (!server_host_opt) {
         error_setg(&local_err, QERR_MISSING_PARAMETER,
                    VXHS_OPT_SERVER"."VXHS_OPT_HOST);
-        qdict_del(backing_options, str);
         ret = -EINVAL;
         goto out;
     }
@@ -315,33 +374,39 @@  static int vxhs_open(BlockDriverState *bs, QDict *options,
     if (strlen(server_host_opt) > MAXHOSTNAMELEN) {
         error_setg(&local_err, "server.host cannot be more than %d characters",
                    MAXHOSTNAMELEN);
-        qdict_del(backing_options, str);
         ret = -EINVAL;
         goto out;
     }

-    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
+    /* check if we got tls-creds via the --object argument */
+    s->tlscredsid = g_strdup(qemu_opt_get(opts, "tls-creds"));
+    if (s->tlscredsid) {
+        vxhs_get_tls_creds(s->tlscredsid, &cacert, &client_key,
+                           &client_cert, &local_err);
+        if (local_err != NULL) {
+            ret = -EINVAL;
+            goto out;
+        }
+        trace_vxhs_get_creds(cacert, client_key, client_cert);
+    }

+    s->vdisk_hostinfo.host = g_strdup(server_host_opt);
     s->vdisk_hostinfo.port = g_ascii_strtoll(qemu_opt_get(tcp_opts,
                                                           VXHS_OPT_PORT),
                                                           NULL, 0);

     trace_vxhs_open_hostinfo(s->vdisk_hostinfo.host,
-                         s->vdisk_hostinfo.port);
-
-    /* free the 'server.' entries allocated by previous call to
-     * qdict_extract_subqdict()
-     */
-    qdict_del(backing_options, str);
+                             s->vdisk_hostinfo.port);

     of_vsa_addr = g_strdup_printf("of://%s:%d",
-                                s->vdisk_hostinfo.host,
-                                s->vdisk_hostinfo.port);
+                                  s->vdisk_hostinfo.host,
+                                  s->vdisk_hostinfo.port);

     /*
      * Open qnio channel to storage agent if not opened before.
      */
-    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0);
+    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0,
+                           client_key, client_cert);
    <===== !!!! NOTE !!!!
     if (dev_handlep == NULL) {
         trace_vxhs_open_iio_open(of_vsa_addr);