Shared SCSI gone after upgrade

Post by willemgrooters » Fri Dec 31, 2021 5:28 am

There are a few issues you have to be aware of after upgrading HP(E) OpenVMS to VSI OpenVMS - at least, I found them in the Community License version; they may have been solved in later versions.
All of these issues require a conversational boot - and possibly a minimal startup - to solve.
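
For reference, a conversational boot is started from the console with boot flag 1; the exact command depends on your platform, and the device name below is just a placeholder. A minimal startup can then be requested at the SYSBOOT> prompt:

    >>> BOOT -FLAGS 0,1 DKA0        ! Alpha console example; conversational boot
    SYSBOOT> SET STARTUP_P1 "MIN"   ! optional: minimal startup
    SYSBOOT> CONTINUE               ! proceed with the boot

Remember to reset STARTUP_P1 to "" (in SYSGEN, with WRITE CURRENT) once the system is fixed, or the next boot will be minimal again.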

* If you once set the system up as an IP cluster member but later reverted it to LAN, or even removed the system from the clustered environment, the stale IP data will interfere when you specify this system as a cluster member again - even for a non-IP cluster; it is probably picked up by the AUTOGEN run after the upgrade (see later). The system will not boot but waits (indefinitely?) until it gets an answer, so the boot appears to hang. There is a file on the system that gets pulled into the configuration - but I lost what name it has; something with IP in it....
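
If it helps: I believe the file in question is the IP cluster configuration file, probably SYS$SYSTEM:PE$IP_CONFIG.DAT - please verify on your own system before removing anything. A sketch of what I would try: rename the file out of the way and make sure cluster communication over IP is switched off at the conversational boot:

    $ RENAME SYS$SYSTEM:PE$IP_CONFIG.DAT SYS$SYSTEM:PE$IP_CONFIG.OLD

    SYSBOOT> SET NISCS_USE_UDP 0    ! disable cluster communication over IP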

* If the system once had a quorum disk in the cluster, but this has since been disabled (the DISK_QUORUM parameter is empty), the disk will be recognized - by AUTOGEN, I guess - and added back into the mix, and this will cause the cluster to hang if the system crashes during a later boot. I got this error when, right after boot, the system was added to the cluster and immediately bug-checked; it seems SYSINIT crashed because a disk was either not found, or was mounted with the same label on another cluster member (which was the case - see later). There are a number of possible causes (a sketch for clearing the quorum disk follows this list):

- re-labelling the system disk during the upgrade (one of the first questions in the installation procedure)
- doing the upgrade while the disk is mounted remotely
- changing the allocation class
- the disk not being found (see the next item)
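
A minimal sketch of clearing the quorum disk at a conversational boot (the EXPECTED_VOTES value is just an example for a node booting alone; adjust it to your own configuration):

    SYSBOOT> SET DISK_QUORUM " "     ! no quorum disk
    SYSBOOT> SET EXPECTED_VOTES 1    ! example value - use your own
    SYSBOOT> CONTINUE

To keep AUTOGEN from re-introducing the disk, I would also pin DISK_QUORUM = " " in SYS$SYSTEM:MODPARAMS.DAT.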

* If the system you are upgrading uses shared SCSI (mine is set up that way, but it was the only node connected at the moment of the upgrade), this setting is LOST in the process. Although the quorum disk from the previous issue was located in that shared storage environment (it had the allocation class as set up before), it now carries the allocation class of the node, so boot will either crash, or the cluster will hang because this disk cannot be located. The system parameter DEVICE_NAMING is set to 0 where it should be 1, and the port allocation class of the controller seems to be lost and needs to be redefined: in SYSBOOT, set DEVICE_NAMING to 1, and set the class of the controller back to the value you once defined.
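
A sketch of the repair at the SYSBOOT> prompt - PKB and 152 are placeholders for your own controller and the port allocation class you originally defined, and I am quoting the SET/CLASS syntax from memory, so check HELP in SYSBOOT on your version:

    SYSBOOT> SET DEVICE_NAMING 1    ! enable port allocation class naming
    SYSBOOT> SET/CLASS PKB 152      ! assign port allocation class 152 to controller PKB
    SYSBOOT> CONTINUE

Adding DEVICE_NAMING = 1 to SYS$SYSTEM:MODPARAMS.DAT should keep AUTOGEN from resetting it on the next run.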
