[Contribution] HPE 3PAR Storage driver

Hello, FeldHost™ would like to announce the availability of a new HPE 3PAR storage driver for OpenNebula.

Features

Supports the standard OpenNebula datastore operations:

  • datastore configuration via CLI
  • all Datastore MAD (DATASTORE_MAD) and Transfer Manager MAD (TM_MAD) functionality
  • SYSTEM datastore
  • TRIM/discard in the VM when the virtio-scsi driver is in use (requires DEV_PREFIX=sd and DISCARD=unmap)
  • disk images can be fully provisioned, thin provisioned, thin deduplicated, thin compressed or thin deduplicated and compressed RAW block devices
  • support different 3PAR CPGs as separate datastores
  • support for 3PAR Priority Optimization Policy (QoS)
  • live VM snapshots
  • live VM migrations
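For the TRIM/discard feature, the relevant disk attributes in the VM template look roughly like this (a minimal sketch; the image name is a placeholder):

```
DISK = [
  IMAGE      = "my-3par-image",
  DEV_PREFIX = "sd",
  DISCARD    = "unmap" ]
```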

Project repository: https://gitlab.feldhost.cz/feldhost-public/one-addon-3par

We also have Sunstone integration done, but in a separate project.


This is great, Kristian!
This would be great to officially include in the OpenNebula Add-on Catalog. Take a look at the steps we have outlined for Add-on Catalog contributions (https://github.com/OpenNebula/one/wiki/How-to-participate-in-Add_on-Development). Once you complete steps 1 through 4, let me know, and we will pick things up at step 5 to get the add-on published in our Catalog and hosted on our GitHub.

Nice work!
Michael

Hi @mabdou, I think the project has already completed steps 1-4, so you can create the repo in your namespace and assign me the rights.

That is great, @feldsam! We have created the repository for you on the OpenNebula GitHub - https://github.com/OpenNebula/addon-3par . Now you should work on creating the README doc and LICENSE file, as outlined in Step 6 of our documented procedures.
We will assume that you are the Add-on leader.

Best.
Michael

Hello, the sources are pushed to the provided repo. You can now list the add-on in the catalog.

Hi @feldsam, we at WEDOS.cz are also going to use the 3PAR driver.

I performed a fork of your project, since the existing driver didn’t fit our needs.

What has been done:

  • Drop Fibre Channel support (if there was any)
  • Disable the soft-delete option by default
  • Remove the automatic installation script
  • Switch to Python 3 instead of the deprecated Python 2
  • Support the modern monitoring format for OpenNebula >= 5.12; add a LEGACY_MONITORING option
  • Automatically set up hosts in 3PAR on demand and perform iSCSI login/logout operations
  • Add ssh-agent support (OpenNebula > 5.10)
  • Add a feature to save/restore VM state to 3PAR to the VM driver
  • Limit the number of connections to the 3PAR device by specifying PORTALS_NUM
  • Add deploy/undeploy operation support to the TM driver
  • Parse API_ENDPOINT and IP from the datastore template
  • Support migration between multiple 3PARs (DS and TM)
  • Add a locking mechanism to serialize certain operations
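The PORTALS_NUM limit can be sketched as a simple selection over the portals reported by the array (a hypothetical helper; the fork's actual function names may differ):

```python
def select_portals(portals, portals_num=None):
    """Return at most `portals_num` iSCSI portals to log in to.

    `portals` is a list of "IP:port" strings as reported by the array;
    `portals_num` mirrors the PORTALS_NUM option (None means no limit).
    """
    if portals_num is None or portals_num <= 0:
        return list(portals)
    return list(portals)[:portals_num]
```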

The final patch contains 2,323 additions and 732 deletions.

Let me know if you’d like to merge this upstream. We can discuss the options, or we’ll continue developing it as a separate project.

Hi @kvaps, there are too many changes to merge it as is.

  • FC support dropped
  • too tightly coupled to your infrastructure - for example, tm/clone clones images across 3PAR systems
  • if you want to merge your changes, it has to be all configurable
  • regarding locking, it looks like you can tm/clone only one VM at a time; we can provision 100 VMs at the same time because we leverage the storage system's clone functionality

too many changes to merge it as is.

Yeah, agreed; for that reason I wanted to know whether you’d like to have this merged at some point.

FC support dropped

Of course this can be made configurable; it only depends on iscsiadm being used in the iscsi_login and iscsi_logout functions.

Also, I put iscsiadm -m session --rescan into rescan_scsi_bus, because without it rescanning was not working for some reason in my environment (Ubuntu 20.04.2 LTS, Focal Fossa).
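In the Python 3 fork, the iSCSI transport could be isolated behind small helpers like these (a hedged sketch with hypothetical names, not the fork's actual code; with FC made configurable, these would become no-ops):

```python
import subprocess

def iscsiadm_cmd(*args):
    """Build an iscsiadm invocation as an argument list."""
    return ["iscsiadm", "-m", *args]

def iscsi_login(portal, target):
    # discover targets on the portal, then log in to the given IQN
    subprocess.run(iscsiadm_cmd("discovery", "-t", "sendtargets", "-p", portal), check=True)
    subprocess.run(iscsiadm_cmd("node", "-T", target, "-p", portal, "--login"), check=True)

def iscsi_logout(portal, target):
    subprocess.run(iscsiadm_cmd("node", "-T", target, "-p", portal, "--logout"), check=True)

def rescan_scsi_bus():
    # rescan existing sessions so newly exported LUNs appear
    subprocess.run(iscsiadm_cmd("session", "--rescan"), check=True)
```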


too tightly coupled to your infrastructure - for example, tm/clone clones images across 3PAR systems

This is not entirely true. You’re right that the ability to copy images between 3PARs was added, but if the system_ds is located on the same 3PAR as the images_ds, a simple copy within one 3PAR is performed, just as before.

If you’re talking about the change of CLONE_TARGET=SELF to SYSTEM, then my bad, I didn’t mention that change in the changelog. Nevertheless, I believe both options can coexist.

But if I understand this correctly, that option must be set in the oned.conf configuration file and accordingly can’t be configured per datastore.

regarding locking, it looks like you can tm/clone only one VM at a time; we can provision 100 VMs at the same time because we leverage the storage system's clone functionality

Not exactly; locking was needed only for tm/clone and datastore/clone to support cases where an image is copied between different 3PAR systems, since in that case the copy is performed using dd instead of simple API calls. When copying within the same 3PAR system, the locking mechanism is not required.
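The scheme described here can be sketched with an fcntl-based file lock that is taken only on the slow cross-array dd path, leaving same-array API clones fully parallel (hypothetical helper names, not the fork's actual implementation):

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def clone_lock(lock_path):
    """Serialize cross-3PAR dd copies; intra-array API clones skip this."""
    with open(lock_path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

def clone_image(src_array, dst_array, clone_via_api, copy_with_dd,
                lock_path="/var/lock/one-3par-clone.lock"):
    if src_array == dst_array:
        clone_via_api()              # fast, parallel-safe storage-side clone
    else:
        with clone_lock(lock_path):  # only the dd path is serialized
            copy_with_dd()
```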

if you want to merge your changes, it has to be all configurable

Okay, I will clarify this question with my supervisors, and if they agree to invest my time in this, I will do it :slight_smile:

One more question: how would you like to receive and review these changes, as one piece, or should I divide them into many small patches?
If it works out, can you help me with testing this driver with FC?

I prefer to split the changes into smaller patches so we can merge and test them gradually. Regarding FC, yes, I can help; we have a new 3PAR on the way for the DR site. We also plan to extend this driver with Peer Persistence/Remote Copy support.

Just to note that this option was introduced in addon-storpool with the release of one-5.12. Later, I figured out that the monitoring compatibility could be “auto-detected”. You could take a look at the latest https://github.com/OpenNebula/addon-storpool/blob/master/tm/storpool/monitor#L47 :wink:

@atodorov_storpool, such an elegant solution, thanks for pointing out! :hugs:
I’m going to implement it in my linstor_un and 3par drivers

@feldsam, is there any specific reason to require user_friendly_names: no setting for multipath?
I just found that /dev/disk/by-id/wwn-0x$WWN paths work fine for me:

proof:

# multipath -l | awk '/^3/ {print $1}' | sed 's/^3//' | xargs -I {} ls -lh /dev/mapper/3{} /dev/disk/by-id/wwn-0x{}
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000a000027bbd -> ../../dm-36
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000a000027bbd -> ../dm-36
lrwxrwxrwx 1 root root 11 Nov  5 15:54 /dev/disk/by-id/wwn-0x60002ac000000000000001a300027ba6 -> ../../dm-47
lrwxrwxrwx 1 root root  8 Nov  5 15:54 /dev/mapper/360002ac000000000000001a300027ba6 -> ../dm-47
lrwxrwxrwx 1 root root 11 Nov  5 15:59 /dev/disk/by-id/wwn-0x60002ac0000000000000018200027ba6 -> ../../dm-23
lrwxrwxrwx 1 root root  8 Nov  5 15:59 /dev/mapper/360002ac0000000000000018200027ba6 -> ../dm-23
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000e600027bbd -> ../../dm-26
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000e600027bbd -> ../dm-26
lrwxrwxrwx 1 root root 11 Nov  5 15:54 /dev/disk/by-id/wwn-0x60002ac0000000000000017f00027ba6 -> ../../dm-21
lrwxrwxrwx 1 root root  8 Nov  5 15:54 /dev/mapper/360002ac0000000000000017f00027ba6 -> ../dm-21
lrwxrwxrwx 1 root root 11 Nov  5 15:40 /dev/disk/by-id/wwn-0x60002ac0000000000000009f00027bbd -> ../../dm-29
lrwxrwxrwx 1 root root  8 Nov  5 15:40 /dev/mapper/360002ac0000000000000009f00027bbd -> ../dm-29
lrwxrwxrwx 1 root root 11 Nov  5 15:54 /dev/disk/by-id/wwn-0x60002ac000000000000000d800027ba6 -> ../../dm-14
lrwxrwxrwx 1 root root  8 Nov  5 15:54 /dev/mapper/360002ac000000000000000d800027ba6 -> ../dm-14
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000c900027bbd -> ../../dm-40
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000c900027bbd -> ../dm-40
lrwxrwxrwx 1 root root 11 Oct 23 17:08 /dev/disk/by-id/wwn-0x60002ac000000000000001a200027ba6 -> ../../dm-42
lrwxrwxrwx 1 root root  8 Oct 23 17:08 /dev/mapper/360002ac000000000000001a200027ba6 -> ../dm-42
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000c200027bbd -> ../../dm-22
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000c200027bbd -> ../dm-22
lrwxrwxrwx 1 root root 11 Nov  5 15:39 /dev/disk/by-id/wwn-0x60002ac0000000000000006700027bbd -> ../../dm-17
lrwxrwxrwx 1 root root  8 Nov  5 15:39 /dev/mapper/360002ac0000000000000006700027bbd -> ../dm-17
lrwxrwxrwx 1 root root 11 Oct 21 17:53 /dev/disk/by-id/wwn-0x60002ac000000000000000d900027ba6 -> ../../dm-15
lrwxrwxrwx 1 root root  8 Oct 21 17:53 /dev/mapper/360002ac000000000000000d900027ba6 -> ../dm-15
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000c700027bbd -> ../../dm-24
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000c700027bbd -> ../dm-24
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000bd00027bbd -> ../../dm-32
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000bd00027bbd -> ../dm-32
lrwxrwxrwx 1 root root 11 Nov  5 15:52 /dev/disk/by-id/wwn-0x60002ac000000000000000bf00027ba6 -> ../../dm-16
lrwxrwxrwx 1 root root  8 Nov  5 15:52 /dev/mapper/360002ac000000000000000bf00027ba6 -> ../dm-16
lrwxrwxrwx 1 root root 11 Nov  5 15:41 /dev/disk/by-id/wwn-0x60002ac000000000000000ad00027bbd -> ../../dm-48
lrwxrwxrwx 1 root root  8 Nov  5 15:41 /dev/mapper/360002ac000000000000000ad00027bbd -> ../dm-48
lrwxrwxrwx 1 root root 11 Nov  5 15:54 /dev/disk/by-id/wwn-0x60002ac0000000000000019200027ba6 -> ../../dm-28
lrwxrwxrwx 1 root root  8 Nov  5 15:54 /dev/mapper/360002ac0000000000000019200027ba6 -> ../dm-28
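The mapping shown above can be expressed as a small helper that converts a dm-multipath map name (the NAA designator-type digit 3 followed by the WWN) into the by-id path (an illustrative snippet, not part of the driver):

```python
def wwn_to_by_id(mapper_name):
    """Map a dm-multipath WWID like '360002ac...' to its
    /dev/disk/by-id/wwn-0x... symlink path.

    The leading '3' denotes the NAA identifier type and is not
    part of the WWN itself.
    """
    wwn = mapper_name[1:] if mapper_name.startswith("3") else mapper_name
    return "/dev/disk/by-id/wwn-0x" + wwn
```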

Hmm, interesting, on my system these symlinks point to ../../sd*:

lrwxrwxrwx 1 root root 10 Sep 22 09:27 /dev/disk/by-id/wwn-0x60002ac000000000000108330001ec48 -> ../../sdkc
lrwxrwxrwx 1 root root  9 Sep 19 23:59 /dev/mapper/360002ac000000000000108330001ec48 -> ../dm-128
lrwxrwxrwx 1 root root 10 Nov  8 20:30 /dev/disk/by-id/wwn-0x60002ac0000000000000f8d70001ec48 -> ../../sdhm
lrwxrwxrwx 1 root root  8 Aug 16 16:05 /dev/mapper/360002ac0000000000000f8d70001ec48 -> ../dm-95
lrwxrwxrwx 1 root root 10 Aug 10 09:31 /dev/disk/by-id/wwn-0x60002ac0000000000000f6340001ec48 -> ../../sdbl
lrwxrwxrwx 1 root root  8 Aug 10 09:31 /dev/mapper/360002ac0000000000000f6340001ec48 -> ../dm-24
lrwxrwxrwx 1 root root 10 Mar 17  2021 /dev/disk/by-id/wwn-0x60002ac0000000000000a9f70001ec48 -> ../../sdjq
lrwxrwxrwx 1 root root  9 Mar 17  2021 /dev/mapper/360002ac0000000000000a9f70001ec48 -> ../dm-126
lrwxrwxrwx 1 root root 10 Oct  4 11:58 /dev/disk/by-id/wwn-0x60002ac000000000000109010001ec48 -> ../../sdkx
lrwxrwxrwx 1 root root  9 Sep 21 12:10 /dev/mapper/360002ac000000000000109010001ec48 -> ../dm-139
lrwxrwxrwx 1 root root 11 Sep 30 16:23 /dev/disk/by-id/wwn-0x60002ac000000000000046d80001ec48 -> ../../sdauk
lrwxrwxrwx 1 root root  9 Mar 17  2021 /dev/mapper/360002ac000000000000046d80001ec48 -> ../dm-607
lrwxrwxrwx 1 root root 11 Sep 19 23:33 /dev/disk/by-id/wwn-0x60002ac000000000000013690001ec48 -> ../../sdaye
lrwxrwxrwx 1 root root  9 Mar 17  2021 /dev/mapper/360002ac000000000000013690001ec48 -> ../dm-653