Help me refine my disk configuration database.

Management of storage subsystems: SAN, volume shadowing, logical disks, file systems, and more.

Topic author
jonesd
Contributor
Posts: 14
Joined: Mon Aug 09, 2021 7:59 pm

Help me refine my disk configuration database.

Post by jonesd » Thu May 12, 2022 3:44 pm

For the past 18 months, I've been using a database-driven program to mount
my disks at system startup. I'm wondering if there is a better way to
organize the data than with the 4 tables described below. (The table
listings shown in the code sections are abridged.)

The first table, which is named device_types, defines the known device types and
their precedence for mounting at startup (PHYSICAL first, SHADOW second, etc.).

Code:

rowid  type      description      
-----  --------  ----------------
1      PHYSICAL  Raw device       
2      SHADOW    Shadow set       
3      LOGICAL   LD device        
4      NFS       NFS client mount
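In case it helps to see it in action, here's a minimal sketch of how the precedence is consumed. This assumes SQLite purely for illustration; the post above doesn't depend on any particular engine:

```python
import sqlite3

# Sketch only: assuming SQLite for illustration. The rowid carries the
# mount precedence: PHYSICAL first, SHADOW second, and so on.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE device_types (type TEXT PRIMARY KEY,"
           " description TEXT)")
db.executemany("INSERT INTO device_types VALUES (?, ?)", [
    ("PHYSICAL", "Raw device"),
    ("SHADOW",   "Shadow set"),
    ("LOGICAL",  "LD device"),
    ("NFS",      "NFS client mount"),
])
# Reading the types back ordered by rowid yields the mount order.
order = [t for (t,) in
         db.execute("SELECT type FROM device_types ORDER BY rowid")]
print(order)  # ['PHYSICAL', 'SHADOW', 'LOGICAL', 'NFS']
```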
===============================================================================
The second table, volume, describes each volume or volume set that we may want
to mount. Each is given a setname, which by default is the disk label (or the
volsetnam for a volume set). Each relative volume in a volume set has a
separate row in the volume table.

Code:

setname     rvn  type      device        label        container
----------  ---  --------  ------------  -----------  ----------------------------------
USERS       0    PHYSICAL  PSXP1$DKA100  USERS
ALPHASYS    0    PHYSICAL  PSXP1$DKA0    ALPHASYS
ALTSYS73    0    PHYSICAL  $12$DQB0      ALTSYS73
MEDIA3      0    SHADOW    DSA0          DS10LXXX     $12$DKA0,$12$DKA100
VSET_TRIAL  1    LOGICAL   LDA80         VSET_TRIALA  $12$DQB0:[VDISKS]VSETX.LD80
VSET_TRIAL  2    LOGICAL   LDA81         VSET_TRIALB  $12$DQB0:[VDISKS]VSETX.LD81
VSET_TRIAL  3    LOGICAL   LDA82         VSET_TRIALC  $12$DQB0:[VDISKS]VSETX.LD82
VSET_TRIAL  4    LOGICAL   LDA83         VSET_TRIALD  $12$DQB0:[VDISKS]VSETX.LD83
VMSSRC      0    LOGICAL   LDA1          VMSSRC0721   $12$DQB0:[VDISKS]VMSSRC0721.LDA1
PI_HOME     0    NFS       DNFS0         PIHOME       DNFS31:[HOME]
PI_HVAC     0    NFS       DNFS0         PIHVAC       DNFS31:[HVAC_DATA]
PI2_HOME    0    NFS       DNFS0         PI2HOME      DNFS33:[HOME]
JFPPY0510A  0    LOGICAL   LDA103        JFPPY0510A   $12$DQB0:[VDISKS]JFPPY0510A.LDA103
DEV         0    PHYSICAL  PSXP1$DKA200  DEV
The value in the container column is interpreted based on the type column:

  • PHYSICAL
    The container value is not used.

  • SHADOW
    Comma-separated list of the shadow set members, which must be
    physical drives. The member drives named in the list are tested,
    and only those that exist are included in the mount request.

  • LOGICAL
    Logical Disk file for the LD device (the 1st argument to LD
    CONNECT). We spawn LD commands to do the connects, then perform
    mounts on the connected LDAnnn devices.

  • NFS
    Mount point for the NFS client mount (TCPIP MOUNT command);
    additional parameters for the mount are retrieved from the
    nfs_client table using this mount point as a key. Note that the
    device column value for this type is always DNFS0. The setname
    becomes the logical name for the NFS mount.
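In rough pseudocode, the per-type dispatch looks something like the sketch below. The DCL command strings are illustrative, not the exact commands my program spawns:

```python
# Illustrative dispatch on the type column; the DCL strings are guesses
# at the shape of the spawned commands, not actual program output.
def commands_for(dtype, device, label, container, present=None):
    """Return the DCL commands (as strings) to bring one volume online."""
    if dtype == "PHYSICAL":
        # container is ignored for raw devices
        return [f"MOUNT/SYSTEM {device} {label}"]
    if dtype == "SHADOW":
        # keep only the member drives that actually exist right now
        members = [m for m in container.split(",")
                   if present is None or m in present]
        return [f"MOUNT/SYSTEM {device} {label} /SHADOW=({','.join(members)})"]
    if dtype == "LOGICAL":
        # connect the container file to the LD unit, then mount it
        return [f"LD CONNECT {container} {device}",
                f"MOUNT/SYSTEM {device} {label}"]
    if dtype == "NFS":
        # container is the nfs_client key; details come from that table
        return [f"TCPIP MOUNT {container}"]
    raise ValueError(f"unknown device type: {dtype}")
```

For example, a SHADOW row with only one member present would yield a single MOUNT with a one-element /SHADOW list, while a LOGICAL row yields an LD CONNECT followed by a MOUNT.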
===============================================================================
The third table, nfs_client, defines some of the common parameters needed for
adding a mount point via the TCPIP mount command, primarily the remote host
and remote path. Qualifiers for the mount that govern how the remote data is
mapped are handled by the qualifiers saved in the mount_control table described
below.

Code:

mount_point         host          server  path                  uid   gid   automount_tmo
------------------  -----------  ------  --------------------  ----  ----  -------------
DNFS31:[HOME]       10.110.6.31  unix    /home                 1000  1000  00:30:00.0
DNFS31:[HVAC_DATA]  10.110.6.31  unix    /var/local/hvac_data  1000  1000  00:30:00.0
DNFS33:[HOME]       10.110.6.34  unix    /home                 1000  1000  00:30:00.0
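For illustration, here's roughly how a TCPIP MOUNT command could be composed from one nfs_client row. The qualifier spellings here are approximations, not lifted from my program:

```python
# Illustrative only: compose a TCPIP MOUNT command from one nfs_client
# row. Column names follow the abridged listing above; the qualifier
# spellings are approximations of the TCPIP MOUNT interface.
def nfs_mount_command(row):
    return (f"TCPIP MOUNT {row['mount_point']} "
            f"/HOST=\"{row['host']}\" /PATH=\"{row['path']}\" "
            f"/UID={row['uid']} /GID={row['gid']} "
            f"/AUTOMOUNT=(INACTIVITY:{row['automount_tmo']})")

cmd = nfs_mount_command({
    "mount_point": "DNFS31:[HOME]", "host": "10.110.6.31",
    "path": "/home", "uid": 1000, "gid": 1000,
    "automount_tmo": "00:30:00.0"})
print(cmd)
```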
===============================================================================
Finally, the mount_control table controls which setnames are to be mounted
at boot (a 1 in column E) and, if so, any additional qualifiers for the
mount operation. Note that for non-NFS setnames, /SYSTEM is the default.

Code:

setname     E  tmo  mount qualifiers          description
----------  -  ---  ------------------------  ---------------------------------
USERS       1  1                              User files
ALPHASYS    1  1                              Cluster system disk
ALTSYS73    1  0                              IDE drive on DS10L.
MEDIA3      1  0                              Project files on DS10L shadow set
VMSSRC      1  0    /NOWRITE                  VMS 7.2-1 source listings
VSET_TRIAL  1  0                              Volume set for testing
PI_HOME     1  0    /write/system/struct=5    Raspberry PI #1 user dir
PI_HVAC     1  0    /nowrite/system/struct=5  HVAC monitor logs
PI2_HOME    1  0    /write/system/struct=5    Raspberry PI #2 user dir
JFPPY0510A  1  0                              JPF python distribution
DEV         1  1                              Secondary user files
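Putting it together, the boot-time mount list is essentially one join ordered by device_types precedence. A cut-down sketch with only two of the rows above, again assuming SQLite for illustration:

```python
import sqlite3

# Hypothetical end-to-end query, assuming SQLite and the abridged
# columns shown above; it lists what would be mounted at boot, in
# device_types precedence order.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE device_types (type TEXT PRIMARY KEY, description TEXT);
CREATE TABLE volume (setname TEXT, rvn INTEGER, type TEXT,
                     device TEXT, label TEXT, container TEXT);
CREATE TABLE mount_control (setname TEXT PRIMARY KEY, E INTEGER,
                            tmo INTEGER, qualifiers TEXT,
                            description TEXT);
INSERT INTO device_types (type, description) VALUES
  ('PHYSICAL','Raw device'), ('SHADOW','Shadow set'),
  ('LOGICAL','LD device'), ('NFS','NFS client mount');
INSERT INTO volume VALUES
  ('USERS', 0, 'PHYSICAL', 'PSXP1$DKA100', 'USERS', NULL),
  ('VMSSRC', 0, 'LOGICAL', 'LDA1', 'VMSSRC0721',
   '$12$DQB0:[VDISKS]VMSSRC0721.LDA1');
INSERT INTO mount_control VALUES
  ('USERS', 1, 1, NULL, 'User files'),
  ('VMSSRC', 1, 0, '/NOWRITE', 'VMS 7.2-1 source listings');
""")
rows = db.execute("""
  SELECT v.setname, v.rvn, v.device, COALESCE(m.qualifiers, '')
  FROM volume v
  JOIN mount_control m ON m.setname = v.setname AND m.E = 1
  JOIN device_types d ON d.type = v.type
  ORDER BY d.rowid, v.setname, v.rvn  -- PHYSICAL first, SHADOW next...
""").fetchall()
for setname, rvn, device, quals in rows:
    print(setname, rvn, device, quals)
```

This is the shape of query the startup pass would walk, spawning the per-type commands for each row in order.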


tim.stegner
VSI Expert
Member
Posts: 7
Joined: Wed Jul 21, 2021 9:14 am

Re: Help me refine my disk configuration database.

Post by tim.stegner » Mon May 16, 2022 8:20 am

how many disks across all four types are involved here? how many [VMS] systems?
-TJ


Topic author
jonesd
Contributor
Posts: 14
Joined: Mon Aug 09, 2021 7:59 pm

Re: Help me refine my disk configuration database.

Post by jonesd » Mon May 16, 2022 11:57 am

tim.stegner wrote:
Mon May 16, 2022 8:20 am
how many disks across all four types are involved here? how many [VMS] systems?
-TJ
20 or so 'disks' for 2 Alphas and 3 Raspberry Pis. The NFS mounts and LD devices tend to proliferate; it was mostly the logical disks driving the desire for a consolidated disk startup. I have an account on a 3 (sometimes 5) node cluster in Europe that has 13 shadow drives and 28 logical disks; I don't know what their startup procedure is.
