system startup that is database driven. I'm wondering if there is a better
way to organize the data than with the 4 tables described below. (The
table listings shown in the code sections are abridged).
The first table, which is named device_types, defines the known device types and
their precedence for mounting at startup (PHYSICAL first, SHADOW second, etc.).
Code:
rowid type     description
----- -------- ----------------
1     PHYSICAL Raw device
2     SHADOW   Shadow set
3     LOGICAL  LD device
4     NFS      NFS client mount
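If it helps to see it as DDL, device_types amounts to roughly the following (a sketch in SQLite-style SQL, which is an assumption on my part; the real definition may differ in detail):
Code:
-- Sketch only: assumed SQLite-style DDL for the device_types lookup table.
-- The rowid doubles as the mount precedence (lowest value mounts first).
CREATE TABLE device_types (
    rowid       INTEGER PRIMARY KEY,
    type        TEXT UNIQUE NOT NULL,   -- PHYSICAL, SHADOW, LOGICAL, NFS
    description TEXT
);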
The second table, volume, describes each volume or volume set that we may want
to mount. Each one is given a setname, which by default is the disk label
(or the volsetnam for a volume set). Each relative volume in a volume set has a
separate row in the volume table.
Code:
setname    rvn type     device       label       container
---------- --- -------- ------------ ----------- ----------------------------------
USERS      0   PHYSICAL PSXP1$DKA100 USERS
ALPHASYS   0   PHYSICAL PSXP1$DKA0   ALPHASYS
ALTSYS73   0   PHYSICAL $12$DQB0     ALTSYS73
MEDIA3     0   SHADOW   DSA0         DS10LXXX    $12$DKA0,$12$DKA100
VSET_TRIAL 1   LOGICAL  LDA80        VSET_TRIALA $12$DQB0:[VDISKS]VSETX.LD80
VSET_TRIAL 2   LOGICAL  LDA81        VSET_TRIALB $12$DQB0:[VDISKS]VSETX.LD81
VSET_TRIAL 3   LOGICAL  LDA82        VSET_TRIALC $12$DQB0:[VDISKS]VSETX.LD82
VSET_TRIAL 4   LOGICAL  LDA83        VSET_TRIALD $12$DQB0:[VDISKS]VSETX.LD83
VMSSRC     0   LOGICAL  LDA1         VMSSRC0721  $12$DQB0:[VDISKS]VMSSRC0721.LDA1
PI_HOME    0   NFS      DNFS0        PIHOME      DNFS31:[HOME]
PI_HVAC    0   NFS      DNFS0        PIHVAC      DNFS31:[HVAC_DATA]
PI2_HOME   0   NFS      DNFS0        PI2HOME     DNFS33:[HOME]
JFPPY0510A 0   LOGICAL  LDA103       JFPPY0510A  $12$DQB0:[VDISKS]JFPPY0510A.LDA103
DEV        0   PHYSICAL PSXP1$DKA200 DEV
The meaning of the container column depends on the type:
PHYSICAL - The container value is not used.
SHADOW   - Comma-separated list of the shadow set members, which must be
           physical drives. The shadow member drives named in the list are
           tested, and only those that exist are included in the mount request.
LOGICAL  - Logical disk file for the LD device (first argument to LD connect).
           We spawn ld commands to do the connects, then perform mounts on
           the connected LDAnnn devices.
NFS      - Mount point for the NFS client mount (TCPIP mount command). Additional
           parameters for the mount are retrieved from the nfs_client table
           using this mount point as a key. Note that the device column value
           for this type is always DNFS0. The setname becomes the logical
           name for the NFS mount.
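In the same sketch form (again assumed DDL, not necessarily how it is really declared), the volume table is keyed on setname plus rvn and ties each row back to device_types:
Code:
-- Sketch only: assumed DDL for the volume table.
-- One row per relative volume; container is interpreted per type as described above.
CREATE TABLE volume (
    setname   TEXT    NOT NULL,              -- set name (default: disk label / volsetnam)
    rvn       INTEGER NOT NULL DEFAULT 0,    -- relative volume number (0 for a single volume)
    type      TEXT    NOT NULL REFERENCES device_types (type),
    device    TEXT    NOT NULL,              -- device to mount/connect (DSA0, LDAnnn, DNFS0, ...)
    label     TEXT,                          -- volume label for the mount
    container TEXT,                          -- members, LD file, or NFS mount point, per type
    PRIMARY KEY (setname, rvn)
);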
The third table, nfs_client, defines some of the common parameters needed for
adding a mount point via the TCPIP mount command, primarily the remote host
and remote path. Qualifiers that govern how the remote data is mapped are
stored in the mount_control table described below.
Code:
mount_point        host        server path                 uid  gid  automount_tmo
------------------ ----------- ------ -------------------- ---- ---- -------------
DNFS31:[HOME]      10.110.6.31 unix   /home                1000 1000 00:30:00.0
DNFS31:[HVAC_DATA] 10.110.6.31 unix   /var/local/hvac_data 1000 1000 00:30:00.0
DNFS33:[HOME]      10.110.6.34 unix   /home                1000 1000 00:30:00.0
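The corresponding sketch for nfs_client (same caveat: assumed DDL) keys on the mount point that the container column of an NFS row supplies:
Code:
-- Sketch only: assumed DDL for the nfs_client table.
-- mount_point matches the container value of NFS rows in the volume table.
CREATE TABLE nfs_client (
    mount_point   TEXT PRIMARY KEY,   -- e.g. DNFS31:[HOME]
    host          TEXT NOT NULL,      -- remote NFS server (name or address)
    server        TEXT NOT NULL,      -- remote server type, e.g. unix
    path          TEXT NOT NULL,      -- exported path on the remote host
    uid           INTEGER,            -- default UID mapping
    gid           INTEGER,            -- default GID mapping
    automount_tmo TEXT                -- automount inactivity timeout (delta time)
);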
Finally, the mount_control table controls which setnames are mounted at boot
(1 in the E column) and, for those that are, supplies any additional qualifiers
for the mount operation. Note that for non-NFS setnames, /SYSTEM is the default.
Code:
setname    E tmo mount qualifiers         description
---------- - --- ------------------------ ---------------------------------
USERS      1 1                            User files
ALPHASYS   1 1                            Cluster system disk
ALTSYS73   1 0                            IDE drive on DS10L.
MEDIA3     1 0                            Project files on DS10L shadow set
VMSSRC     1 0   /NOWRITE                 VMS 7.2-1 source listings
VSET_TRIAL 1 0                            Volume set for testing
PI_HOME    1 0   /write/system/struct=5   Raspberry PI #1 user dir
PI_HVAC    1 0   /nowrite/system/struct=5 HVAC monitor logs
PI2_HOME   1 0   /write/system/struct=5   Raspberry PI #2 user dir
JFPPY0510A 1 0                            JPF python distribution
DEV        1 1                            Secondary user files
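At boot the startup procedure essentially needs one join over these tables, ordered by the device_types precedence. Something along these lines drives the mount loop (a sketch of the idea only; the qualifiers column name and the exact statement are assumed, not copied from my code):
Code:
-- Sketch only: enabled setnames in mount-precedence order, one row per relative volume.
-- "qualifiers" is an assumed column name for the "mount qualifiers" field shown above.
SELECT v.setname, v.rvn, v.type, v.device, v.label, v.container,
       mc.qualifiers
FROM   mount_control mc
JOIN   volume        v  ON v.setname = mc.setname
JOIN   device_types  dt ON dt.type   = v.type
WHERE  mc.E = 1
ORDER  BY dt.rowid, v.setname, v.rvn;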