Non-system disk queue manager issue
-
Topic author - Visitor
- Posts: 2
- Joined: Wed Mar 06, 2024 3:48 pm
- Reputation: 0
- Status: Offline
Non-system disk queue manager issue
I have a brand new V9.2-2 two-node cluster, and have configured the queue manager to be on a non-system disk, shared across the cluster via MSCP.
The problem: the print queues work properly on the node where the queue manager is actually running. On the other node, print jobs error out when trying to print:
Example error message:
Job PRINTER-TEST (queue X_PR01, entry 1) terminated with error status
%PSM-E-OPENIN, error opening !AS as input
I have the logical name set up on both nodes:
"QMAN$MASTER" = "$1$DKA100:[CLUSTER$CONFIG.Q]"
I have the queue manager configured for the non-system disk $1$DKA100:
Master file: $1$DKA100:[CLUSTER$CONFIG.Q]QMAN$MASTER.DAT;
Queue manager SYS$QUEUE_MANAGER, running, on VZ001T::
/ON=(*)
Database location: $1$DKA100:[CLUSTER$CONFIG.Q]
I've read through the manuals and feel like I'm just missing one setting somewhere.
If someone has some hints where I may be missing something, let me know, thanks!
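For reference, here is a minimal sketch of how I set this up (directory and disk names as above; the logical name goes in SYS$MANAGER:SYLOGICALS.COM on every node so it exists before the queue manager starts, and /NEW_VERSION was used only for the one-time creation of the database):

$ ! In SYS$MANAGER:SYLOGICALS.COM on each node:
$ DEFINE /SYSTEM /EXECUTIVE_MODE QMAN$MASTER $1$DKA100:[CLUSTER$CONFIG.Q]
$ ! One-time creation of the queue database on the non-system disk;
$ ! afterwards a plain START /QUEUE /MANAGER is enough:
$ START /QUEUE /MANAGER /NEW_VERSION /ON=(*) $1$DKA100:[CLUSTER$CONFIG.Q]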
-
- VSI Expert
- Active Contributor
- Posts: 31
- Joined: Thu Jun 20, 2019 11:48 am
- Reputation: 0
- Status: Offline
Re: Non-system disk queue manager issue
I have no idea what's failing, so I'll start asking questions.
Whilst logged onto the node where things DO NOT work, let's see
1) $ SHOW DEV/FULL of $1$DKA100.
2) $ SHOW QUE/FULL X_PR01
3) $ SHOW QUE/MAN/FULL
Whilst logged onto the node where things DO work, let's see
1) $ SHOW QUE/FULL X_PR01
2) $ SHOW QUE/MAN/FULL
What's the allocation class of each node?
What are the values of VOTES and EXPECTED_VOTES for each node?
Do you have a batch queue set up?
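(The allocation class and voting parameters can be read directly with F$GETSYI on each node, e.g.:

$ WRITE SYS$OUTPUT "ALLOCLASS:      ", F$GETSYI( "ALLOCLASS")
$ WRITE SYS$OUTPUT "VOTES:          ", F$GETSYI( "VOTES")
$ WRITE SYS$OUTPUT "EXPECTED_VOTES: ", F$GETSYI( "EXPECTED_VOTES")

That saves digging through SYSGEN.)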
-- Rob
Re: Non-system disk queue manager issue
> I have the logical set up on both nodes:
> "QMAN$MASTER" = "$1$DKA100:[CLUSTER$CONFIG.Q]"
Ok, but are the files therein accessible from all the nodes? Is that
disk mounted on all the cluster members? Is that logical name so
defined on all nodes?
Does the problem affect batch jobs, too, or only print jobs?
Do SHOW QUEUE commands work on all nodes? For example:
SHOW QUEUE /ALL /BATCH
SHOW QUEUE /ALL /DEVI = PRINT
-
- Master
- Posts: 154
- Joined: Fri Jun 28, 2019 8:45 am
- Reputation: 0
- Location: South Tyneside, UK
- Status: Offline
- Contact:
Re: Non-system disk queue manager issue
Are all the disks accessible from both systems, so that when the print symbiont runs it can access the disk where the file to be printed resides?
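A quick way to check from each node (device name taken from the original post; F$GETDVI returns TRUE when the volume is mounted on the node where you run it):

$ WRITE SYS$OUTPUT F$GETDVI( "$1$DKA100:", "EXISTS")
$ WRITE SYS$OUTPUT F$GETDVI( "$1$DKA100:", "MNT")

Run the same for whatever disk holds the files being printed.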
Ian Miller
[ personal opinion only. usual disclaimers apply. Do not taunt happy fun ball ].
-
Topic author - Visitor
Re: Non-system disk queue manager issue
Thank you all for your valued input.
That was the nudge that I needed - the node where the queue manager is running DID NOT have access to the disk/file being printed from the other node.
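For anyone who finds this thread later: the fix was simply mounting the data disk clusterwide, along the lines of (device name and volume label here are made up for illustration):

$ MOUNT /SYSTEM /CLUSTER $1$DKA200: USERDATA

With the disk mounted /SYSTEM /CLUSTER on all members, the symbiont can open the file no matter which node queued it.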
-
- Master
- Posts: 199
- Joined: Fri Aug 14, 2020 11:31 am
- Reputation: 0
- Status: Offline
Re: Non-system disk queue manager issue
If I'm not totally wrong, the problem is not with the QUEUE_MANAGER but with the print symbiont failing to access the file to be printed. And those are two different entities.
Volker.
-
- Valued Contributor
- Posts: 73
- Joined: Tue Mar 22, 2022 6:47 pm
- Reputation: 0
- Location: England
- Status: Offline
Re: Non-system disk queue manager issue
Would setting the printer spooled not solve the problem? A program writing directly to the printer will spool its output on the spool device until it is ready for printing. I would have guessed (warning!) that a simple print command would behave in the same way.
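If going that route, the command would be something like the following (printer device name is hypothetical; the queue name is from the thread, and the intermediate disk must be one every node can reach):

$ SET DEVICE LPA0: /SPOOLED=(X_PR01, $1$DKA100:)

Whether that helps here depends on the files being reachable on the intermediate disk in the first place, of course.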
Martin
- Retired System Manager: VMS/UNIX/UNICOS/Linux.
- Started on a VAX 11/782 in 1984 with VMS 3.6.
Re: Non-system disk queue manager issue
> [...] it is not the problem of the QUEUE_MANAGER, but the
> Print-Symbiont failing to access the file to be printed. [...]
Certainly the "PSM" in "%PSM-E-OPENIN" is suggestive. No doubt, the
smart move would have been:
help /message /facility = PSM OPENIN
(Or it might have been if that worked.)
> [...] the node where the queue manager is running DID NOT have access
> to the disk/file being printed from the other node.
Around here, there are (30-year-old) DCL scripts on every node to
help with that kind of thing. SYS$MANAGER:MOUNT_DISKS_LOCAL.COM mounts
the local disks. It gets run very early in
SYS$MANAGER:SYSTARTUP_VMS.COM. Then, after DECnet has been started,
SYS$MANAGER:MOUNT_DISKS_REMOTE.COM gets the MOUNT_DISKS_LOCAL.COM
scripts from all the other cluster nodes, and runs them on the
up-starting system. It may not be good, but it's old:
ITS $ type SYS$MANAGER:MOUNT_DISKS_REMOTE.COM
$! 30 January 1991. SMS.
$!
$! Mount all disks on other cluster members, using the other cluster members'
$! "MOUNT_DISKS_LOCAL.COM" command procedures. This procedure depends on proxy
$! logins for SYSTEM on all other nodes in the cluster.
$!
$! The lexical function F$CSID() is new in VMS V5.4.
$!
$ SET NOON
$!
$ THIS_NODE = F$EDIT( F$GETSYI( "NODENAME"), "TRIM")
$ CONTEXT = ""
$!
$! Get a cluster member node name.
$!
$ NODE_LOOP:
$!
$ NODE_ID = F$CSID( CONTEXT)
$!
$! Quit after the last node in the cluster.
$!
$ IF NODE_ID .EQS. "" THEN GOTO END_NODE_LOOP
$!
$! Get the cluster node's node name.
$!
$ NODE = F$GETSYI( "NODENAME", , NODE_ID)
$!
$! If the cluster node is not this node, run its MOUNT_DISKS_LOCAL.COM.
$!
$ IF NODE .NES. THIS_NODE
$ THEN
$!
$! Form the remote MOUNT_DISKS_LOCAL.COM file name.
$!
$ MOUNT_FILE_NAME = "''NODE'::MOUNT_DISKS_LOCAL.COM"
$!
$! Loop for a while, if the file is not accessible. The network may not be
$! ready immediately.
$!
$ COUNT = 30
$!
$ SEARCH_LOOP:
$!
$ IF F$SEARCH( MOUNT_FILE_NAME) .EQS. ""
$ THEN
$ COUNT = COUNT- 1
$ IF COUNT .GT. 0
$ THEN
$ WAIT 00:00:01
$ GOTO SEARCH_LOOP
$ ENDIF
$ ENDIF
$!
$ @ 'MOUNT_FILE_NAME' 'NODE'
$!
$! Print a message, if the file (access) fails.
$!
$ IF .NOT. $STATUS
$ THEN
$ STATUS = $STATUS
$ WRITE SYS$OUTPUT ""
$ WRITE SYS$OUTPUT " Error mounting disks for node ''NODE'."
$ WRITE SYS$OUTPUT " "+ F$MESSAGE( STATUS)+ "."
$ WRITE SYS$OUTPUT ""
$ ENDIF
$!
$ ENDIF
$!
$ GOTO NODE_LOOP
$ END_NODE_LOOP:
$!
$ EXIT
$!
-
- Master
Re: Non-system disk queue manager issue
To be exact, it's the node running the print symbiont that is processing the queue which needs access to the disk where the file to be printed is stored. Which node is running the queue manager is not relevant.
I've seen this quite a few times in heterogeneous clusters.
Ian Miller
[ personal opinion only. usual disclaimers apply. Do not taunt happy fun ball ].