
IBM/ibm-spectrum-scale-install-infra


Important: You are viewing the main branch of this repository. If you've previously used the master branch in your own playbooks then you will need to make some changes in order to switch to the main branch. See MIGRATING.md for details.


IBM Storage Scale (GPFS) Deployment using Ansible Roles

Ansible project with multiple roles for installing and configuring IBM Storage Scale (GPFS) software defined storage.

Table of Contents

Features

Infrastructure minimal tested configuration

  • Pre-built infrastructure (using a static inventory file)
  • Dynamic inventory file

OS support

  • Support for RHEL 7 on x86_64, PPC64 and PPC64LE
  • Support for RHEL 8 on x86_64 and PPC64LE
  • Support for UBUNTU 20 on x86_64 and PPC64LE
  • Support for SLES 15 on x86_64 and PPC64LE

Common prerequisites

  • Disable SELinux (scale_prepare_disable_selinux: true; default: false)
  • Disable the firewall (scale_prepare_disable_firewall: true; default: false)
  • Install and start NTP
  • Create /etc/hosts mappings
  • Open firewall ports
  • Generate SSH keys
  • User must set up base OS repositories
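The optional preparation steps above are controlled by role variables. A minimal sketch of enabling the SELinux and firewall handling, using the variable names shown in the list (the group_vars file name is just one possible placement; refer to VARIABLES.md for the authoritative set of options):

```yaml
# group_vars/all.yml -- hypothetical file name; any variable placement works
scale_prepare_disable_selinux: true    # default is false
scale_prepare_disable_firewall: true   # default is false
```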

Core IBM Storage Scale prerequisites

  • Install yum-utils package
  • Install gcc-c++, kernel-devel, make
  • Install elfutils, elfutils-devel (RHEL 8 specific)

Core IBM Storage Scale Cluster features

  • Install core IBM Storage Scale packages on Linux nodes
  • Install IBM Storage Scale license package on Linux nodes
  • Compile or install pre-compiled Linux kernel extension (mmbuildgpl)
  • Configure client and server license
  • Assign default quorum nodes (maximum 7) if the user has not defined them in the inventory
  • Assign default manager nodes (all nodes act as manager nodes) if the user has not defined them in the inventory
  • Create new cluster (mmcrcluster -N /var/mmfs/tmp/NodeFile -C {{ scale_cluster_clustername }})
  • Create cluster with profiles
  • Create cluster with daemon and admin network
  • Add new node into existing cluster
  • Configure node classes
  • Define configuration parameters based on node classes
  • Configure NSDs and file system
  • Configure NSDs without file system
  • Add NSDs
  • Add disks to existing file system
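NSDs and file systems are defined through host variables. The sketch below assumes the scale_storage variable documented in VARIABLES.md; the device paths and server names are illustrative only:

```yaml
# host_vars/scale01.yml -- hypothetical example; see VARIABLES.md for the
# authoritative scale_storage schema
scale_storage:
  - filesystem: gpfs01
    blockSize: 4M
    defaultMountPoint: /mnt/gpfs01
    disks:
      - device: /dev/sdb          # illustrative device path
        servers: scale01          # NSD server(s) for this disk
      - device: /dev/sdc
        servers: scale01,scale02
```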

IBM Storage Scale Management GUI features

  • Install IBM Storage Scale management GUI packages on designated GUI nodes
  • A maximum of 3 GUI nodes can be configured
  • Install performance monitoring sensor packages on all Linux nodes
  • Install performance monitoring collector on all designated GUI nodes
  • Configure performance monitoring and collectors
  • Configure HA federated mode collectors
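GUI nodes are designated through a node variable. A minimal sketch, assuming the scale_cluster_gui variable documented in VARIABLES.md:

```yaml
# host_vars/scale01.yml -- hypothetical placement; the same variable can be
# set directly in the inventory (scale01 scale_cluster_gui=true)
scale_cluster_gui: true
```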

IBM Storage Scale Call Home features

  • Install IBM Storage Scale Call Home packages on all cluster nodes
  • Configure Call Home
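Call Home is configured through its own variable block. The sketch below is an assumption for illustration only: the variable name and all keys shown are not confirmed by this README, so check VARIABLES.md and the samples/ directory for the actual schema before use:

```yaml
# group_vars/all.yml -- hypothetical; variable name and keys are assumptions
scale_callhome_params:
  is_enabled: true
  customer_name: example-customer     # illustrative value
  customer_id: "123456"               # illustrative value
  customer_email: admin@example.com   # illustrative value
  customer_country: US                # illustrative value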

IBM Storage Scale CES (SMB and NFS) Protocol supported features

  • Install IBM Storage Scale SMB or NFS on selected cluster nodes (5.0.5.2 and above)
  • Install IBM Storage Scale Object on selected cluster nodes (5.1.1.0 and above)
  • CES IPV4 or IPV6 support
  • CES interface mode support
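CES protocol deployment is also variable-driven. A minimal sketch, assuming the scale_protocols variable used by the samples in this project; the IP addresses, file system name, and mount point are illustrative only:

```yaml
# group_vars/all.yml -- hypothetical placement; see samples/ for full examples
scale_protocols:
  smb: true
  nfs: true
  object: false
  export_ip_pool: [192.0.2.10, 192.0.2.11]   # illustrative CES IPs
  filesystem: cesSharedRoot
  mountpoint: /gpfs/cesSharedRoot
```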

Minimal tested Versions

The following Ansible versions are tested:

The following IBM Storage Scale versions are tested:

  • 5.0.4.0 and above
  • 5.0.5.2 and above for CES (SMB and NFS)
  • 5.1.1.0 and above for CES (Object)
  • Refer to the Release Notes for details

Specific OS requirements:

  • For CES (SMB/NFS) on SLES15: Python 3 is required.
  • For CES (Object): Red Hat Enterprise Linux 8.x is required.

Prerequisites

Users need a basic understanding of Ansible concepts to follow these instructions. Refer to the Ansible User Guide if Ansible is new to you.

  • Install Ansible on any machine (control node)

    $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    $ python3 get-pip.py
    $ pip3 install ansible==2.9

    Refer to the Ansible Installation Guide for detailed installation instructions.

    Note that Python 3 is required for certain functionality of this project to work. Ansible should automatically detect and use Python 3 on managed machines, refer to the Ansible documentation for details and workarounds.

  • Download IBM Storage Scale packages

  • Create password-less SSH keys between all nodes in the cluster

    A prerequisite for installing IBM Storage Scale is that password-less SSH is configured among all nodes in the cluster. It must be configured and verified with the FQDN, the hostname, and the IP address of every node, from every node.

    Example:

    $ ssh-keygen
    $ ssh-copy-id -oStrictHostKeyChecking=no node1.gpfs.net
    $ ssh-copy-id -oStrictHostKeyChecking=no node1
    $ ssh-copy-id -oStrictHostKeyChecking=no <IP address of node1>

    Repeat this process for all nodes to themselves and to all other nodes.

Installation Instructions

  • Create project directory on Ansible control node

    The preferred way of accessing the roles provided by this project is by placing them inside the collections/ansible_collections/ibm/spectrum_scale directory of your project, adjacent to your Ansible playbook. Simply clone the repository to the correct path:

    $ mkdir my_project
    $ cd my_project
    $ git clone -b main https://github.com/IBM/ibm-spectrum-scale-install-infra.git collections/ansible_collections/ibm/spectrum_scale

    Be sure to clone the project under the correct subdirectory:

    my_project/
    ├── collections/
    │   └── ansible_collections/
    │       └── ibm/
    │           └── spectrum_scale/
    │               └── ...
    ├── hosts
    └── playbook.yml
  • Create Ansible inventory

    Define IBM Storage Scale nodes in the Ansible inventory (e.g. hosts) in the following format:

    # hosts:
    [cluster01]
    scale01  scale_cluster_quorum=true   scale_cluster_manager=true
    scale02  scale_cluster_quorum=true   scale_cluster_manager=true
    scale03  scale_cluster_quorum=true   scale_cluster_manager=false
    scale04  scale_cluster_quorum=false  scale_cluster_manager=false
    scale05  scale_cluster_quorum=false  scale_cluster_manager=false

    The above is just a minimal example. It defines Ansible variables directly in the inventory. There are other ways to define variables, such as host variables and group variables.

    Numerous variables are available which can be defined in either way to customize the behavior of the roles. Refer to VARIABLES.md for a full list of all supported configuration options.
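For example, variables that apply to all hosts in a group can be placed in a group_vars file instead of the inventory itself. A minimal sketch, using the scale_cluster_clustername variable referenced above (the file path follows standard Ansible conventions and is otherwise a hypothetical choice):

```yaml
# group_vars/cluster01.yml -- applies to all hosts in the [cluster01] group
scale_cluster_clustername: cluster01.gpfs.net
```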

  • Create Ansible playbook

    The basic Ansible playbook (e.g. playbook.yml) looks as follows:

    # playbook.yml:
    ---
    - hosts: cluster01
      collections:
        - ibm.spectrum_scale
      vars:
        - scale_install_localpkg_path: /path/to/Spectrum_Scale_Standard-5.0.4.0-x86_64-Linux-install
      roles:
        - core_prepare
        - core_install
        - core_configure
        - core_verify

    Again, this is just a minimal example. There are different installation methods available, each offering a specific set of options. Refer to VARIABLES.md for a full list of all supported configuration options.

  • Run the playbook to install and configure the IBM Storage Scale cluster

    • Using the ansible-playbook command:

      $ ansible-playbook -i hosts playbook.yml
    • Using the automation script:

      $ cd samples/
      $ ./ansible.sh

      Note: An advantage of using the automation script is that it generates log files, named by date and time, in the /tmp directory.

  • Playbook execution screen

    Playbook execution starts here:

    $ ./ansible.sh
    Running #### ansible-playbook -i hosts playbook.yml
    
    PLAY #### [cluster01]
    **********************************************************************************************************
    
    TASK #### [Gathering Facts]
    **********************************************************************************************************
    ok: [scale01]
    ok: [scale02]
    ok: [scale03]
    ok: [scale04]
    ok: [scale05]
    
    TASK [common : check | Check Spectrum Scale version]
    *********************************************************************************************************
    ok: [scale01]
    ok: [scale02]
    ok: [scale03]
    ok: [scale04]
    ok: [scale05]
    
    ...

    Playbook recap:

    #### PLAY RECAP
    ***************************************************************************************************************
    scale01                 : ok=0   changed=65    unreachable=0    failed=0    skipped=0   rescued=0    ignored=0
    scale02                 : ok=0   changed=59    unreachable=0    failed=0    skipped=0   rescued=0    ignored=0
    scale03                 : ok=0   changed=59    unreachable=0    failed=0    skipped=0   rescued=0    ignored=0
    scale04                 : ok=0   changed=59    unreachable=0    failed=0    skipped=0   rescued=0    ignored=0
    scale05                 : ok=0   changed=59    unreachable=0    failed=0    skipped=0   rescued=0    ignored=0

Optional Role Variables

Users can define variables to override default values and customize behavior of the roles. Refer to VARIABLES.md for a full list of all supported configuration options.

Additional functionality can be enabled by defining further variables. Browse the examples in the samples/ directory to learn how.

Available Roles

The following roles are available for you to reuse when assembling your own playbook:

  • Core GPFS (roles/core_*)
  • GUI (roles/gui_*)
  • SMB (roles/smb_*)
  • NFS (roles/nfs_*)
  • Object (roles/obj_*)
  • HDFS (roles/hdfs_*)
  • Call Home (roles/callhome_*)
  • File Audit Logging (roles/fal_*)
  • ...

Note that Core GPFS is the only mandatory role; all other roles are optional. Each optional role requires additional configuration variables. Browse the examples in the samples/ directory to learn how to configure them.

Cluster Membership

All hosts in the play are configured as nodes in the same IBM Storage Scale cluster. If you want to add hosts to an existing cluster, include at least one node from that existing cluster in the play.

You can create multiple clusters by running multiple plays. Note that you will need to reload the inventory to clear dynamic groups added by the IBM Storage Scale roles:

- name: Create one cluster
  hosts: cluster01
  roles: ...

- name: Refresh inventory to clear dynamic groups
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - meta: refresh_inventory

- name: Create another cluster
  hosts: cluster02
  roles: ...

Limitations

The roles in this project can currently be used to create new clusters or to extend existing clusters. Similarly, new file systems can be created or extended. However, this project does not remove existing nodes, disks, file systems, or node classes. This is intentional, and it is also the reason why the roles cannot be used, for example, to change the file system pool of a disk: changing the pool requires removing the disk from the file system and re-adding it, which is not currently in the scope of this project.

Furthermore, upgrades are not currently in the scope of this project. IBM Storage Scale supports rolling online upgrades (taking down one node at a time), but this requires careful planning and monitoring, and might require manual intervention in case of unforeseen problems.

Troubleshooting

The roles in this project store configuration files in /var/mmfs/tmp on the first host in the play. These files are used to determine whether definitions have changed since the previous run, and hence whether it is necessary to run certain IBM Storage Scale commands again. When experiencing problems, you can delete these configuration files from /var/mmfs/tmp to clear the cache; all definitions are then re-applied, and the cache is re-generated, on the next run. As a downside, that run may take longer than usual because it may re-run IBM Storage Scale commands unnecessarily.

Reporting Issues and Feedback

Please use the issue tracker to ask questions, report bugs and request features.

Contributing Code

We welcome contributions to this project, see CONTRIBUTING.md for more details.

Disclaimer

Please note: all roles / playbooks / modules / resources in this repository are released for use "AS IS" without any warranties of any kind, including, but not limited to their installation, use, or performance. We are not responsible for any damage or charges or data loss incurred with their use. You are responsible for reviewing and testing any scripts you run thoroughly before use in any production environment. This content is subject to change without notice.

Copyright and License

Copyright IBM Corporation, released under the terms of the Apache License 2.0.