OpenVMS 9.2-1 on real hardware success.
Topic author - Active Contributor
After a successful installation of OpenVMS under ESXi 7.0.3, and knowing that the DL380 was one of the machines targeted for running on bare metal, I decided to try it on a machine I own that is fairly similar, albeit imperfect.
The system board is an ASRock X99 WS, the CPU is an E5-2609 v3, and there is 32 GB of memory in the machine. I am using an LSI 1068E-based SAS controller, since something about this board's AHCI controllers did not agree with OpenVMS. Two disks are attached, and enumeration in the boot loader seems to be reliable. The first NIC, an I217, is picked up by the driver stack on a clean install via netboot. One thing I found strange about this board is that it has two serial ports; both work in OpenVMS, one as OPA0: and one as TTA0:.
With a trusty VT520 on OPA0:, and after manually keying in the OPENVMS-X86-BOE PAK to enable TCP/IP and SSH, things seem to work well. There's a lot left to play with on the system and I'm still learning VMS... I'm hoping to run a few game servers on it once the OpenJDK port is complete.
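For anyone following along, the console side of that step looks roughly like the DCL below. This is only a sketch: it assumes the OPENVMS-X86-BOE PAK has already been registered, and that VSI TCP/IP Services is the stack being configured.
Code:
$ LICENSE LOAD OPENVMS-X86-BOE        ! activate the base operating environment PAK
$ @SYS$MANAGER:TCPIP$CONFIG           ! menu-driven TCP/IP setup; SSH gets enabled from its menus
$ TCPIP SHOW SERVICE SSH              ! confirm the SSH service is defined and enabled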
Thanks for reading the story, and I'm happy to entertain any questions.
Topic author - Active Contributor
Re: OpenVMS 9.2-1 on real hardware success.
Not that I was expecting things to go differently, but it's even happier now with an E5-2680 v4. I'm going to try out some additional NICs today and see what works in a vanilla 9.2-1.
The Intel PRO/1000 PT did have all four NICs appear.
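A quick way to see what the driver stack picked up after swapping cards, for anyone repeating the exercise (a sketch; device names will vary from system to system):
Code:
$ SHOW DEVICE EI                      ! Ethernet (EIxn:) devices VMS has configured
$ MCR LANCP SHOW CONFIGURATION        ! LAN adapters as seen by LANCP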
Re: OpenVMS 9.2-1 on real hardware success.
hi,
Thanks for the report! I'm not an x86 HW expert, sorry for some basic questions:
- I assume it's the https://www.asrock.com/mb/Intel/X99%20WS/ board?
- Per the Intel info at https://www.intel.com/content/www/us/en ... ducts.html I guess you have the e1000 driver (I217) running and igb (i210) was not available? Could you check the devices seen in the OS?
- Could you elaborate on the netboot process? I understand it was web-server based, per the VSI OpenVMS x86-64 Boot Manager User Guide? Started with VMS_KITBOOT.EFI from USB? Looks like an interesting exercise ...
- So you had the SAS disk running and no luck with the built-in SATA?
- Any VGA card used?
jwtb
Topic author - Active Contributor
Re: OpenVMS 9.2-1 on real hardware success.
- These are only personal experiences and I don't have a mountain of other (appropriate) hardware to test with.
- Yes, that is the correct motherboard.
- Yes, it is the i217 interface that works, not the i210. SHOW DEV output is below.
- Yes, disks on the onboard SATA controller would let the Boot Manager start up and offer to boot the machine, since that part, along with loading the kernel and the initial memdisk, is done by the firmware. Shortly afterwards the machine would start to bugcheck with PROCGONE once the kernel drivers took over, because the disk disappears.
- The VGA card is an old, fanless AMD 6450 with an EFI ROM.
Device list as requested.
Code:
$ show dev
Device                  Device           Error    Volume         Free   Trans Mnt
 Name                   Status           Count     Label        Blocks  Count Cnt
HELDEN$DMM0:            Offline              0
HELDEN$DKA100:          Mounted              0  X86SYS        463440416   396   1
HELDEN$DKA200:          Mounted              0  USERS         468667328     1   1
HELDEN$LDM0:            Online               0
HELDEN$LDM3143:         Mounted              0  MD231052F18A        971    25   1
Device                  Device           Error
 Name                   Status           Count
OPA0:                   Online               0
FTA0:                   Offline              0
FTA1:                   Online               0
TTA0:                   Online               0
TNA0:                   Online               0
Device                  Device           Error
 Name                   Status           Count
PKA0:                   Online               0
SRA0:                   Online               0
MPA0:                   Online               0
EIA0:                   Online               0
SMA0:                   Online               0
WSA0:                   Offline              0
Newbie - Location: København S
Re: OpenVMS 9.2-1 on real hardware success.
I was added this evening at 8:02 PM to "VMS Software, Inc. Welcomes Community Members to the OpenVMS on x86 Field Test Program". I am new to Oracle VirtualBox, but I managed to get all the software downloaded and VirtualBox installed on a PC, and after more than 24 years without working with OpenVMS, I actually managed to get OpenVMS installed the same evening.
I have to get used to accessing OpenVMS through PuTTY, to getting license PAKs etc. installed without typing them in manually, and next to getting networking fully working. Still, it was nice to get OpenVMS up and running in less than 5 hours (most of the time went to VirtualBox and PuTTY configuration), and only 1 hour after that I could boot the software from VirtualBox.
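On the point about not typing PAKs in manually: one common trick is to paste the PAK into a small DCL command procedure once and run that after each install. A rough sketch, with placeholder values rather than a real PAK, and LICENSE_PAKS.COM as an arbitrary file name:
Code:
$! LICENSE_PAKS.COM: LICENSE REGISTER commands pasted from the PAK e-mail
$ LICENSE REGISTER OPENVMS-X86-BOE -
        /ISSUER=VSI /PRODUCER=VSI -
        /UNITS=0 -
        /AUTHORIZATION=XXXXXXXX -
        /CHECKSUM=X-XXXX-XXXX-XXXX-XXXX
$ LICENSE LOAD OPENVMS-X86-BOE
$ EXIT
Keeping the file somewhere off the system disk means it survives a reinstall, and running $ @LICENSE_PAKS is then the whole licensing step.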
$ show sys
OpenVMS E9.2-1 on node OVESEN 22-APR-2023 02:08:07.98 Uptime 0 00:46:45
Pid Process Name State Pri I/O CPU Page flts Pages
00000401 SWAPPER HIB 16 0 0 00:00:00.05 0 17
00000404 LANACP HIB 14 58 0 00:00:00.03 217 268
00000406 FASTPATH_SERVER HIB 10 8 0 00:00:00.02 161 202
00000407 IPCACP HIB 10 8 0 00:00:00.01 138 180
00000408 ERRFMT HIB 8 80 0 00:00:00.03 186 231
0000040A OPCOM HIB 8 59 0 00:00:00.01 178 184
0000040B AUDIT_SERVER HIB 10 129 0 00:00:00.09 242 302
0000040C JOB_CONTROL HIB 10 25 0 00:00:00.02 177 243
0000040E SECURITY_SERVER HIB 8 33 0 00:00:00.04 307 363 M
00000410 TP_SERVER HIB 10 195 0 00:00:00.03 358 220
00000411 SMHANDLER HIB 8 53 0 00:00:00.02 269 293
00000412 SYSTEM CUR 0 7 1177 0 00:00:03.02 1797 279
$
$ show mem
System Memory Resources on 22-APR-2023 02:09:01.07
Physical Memory Usage (pages): Total Free In Use Modified
Main Memory (4.72GB) 619785 556764 62628 393
Extended File Cache (Time of last reset: 22-APR-2023 01:21:23.01)
Allocated (MBytes) 28.50 Maximum size (MBytes) 2421.03
Free (MBytes) 0.73 Minimum size (MBytes) 3.12
In use (MBytes) 27.76 Percentage Read I/Os 98%
Read hit rate 58% Write hit rate 0%
Read I/O count 3964 Write I/O count 69
Read hit count 2328 Write hit count 0
Reads bypassing cache 18 Writes bypassing cache 1
Files cached open 136 Files cached closed 123
Vols in Full XFC mode 0 Vols in VIOC Compatible mode 2
Vols in No Caching mode 0 Vols in Perm. No Caching mode 0
Granularity Hint Regions (pages): Total Free In Use Released
S0 Execlet data 2048 1871 177 0
S0 Executive data 3584 141 3443 0
S0 Executive RO data 1024 838 186 0
S0 Resident image code 1536 1491 45 0
S0 Resident image data 512 512 0 0
S0 Resident RO image data 1024 1024 0 0
S2 Execlet code 4096 2023 2073 0
S2 Execlet data 4096 4096 0 0
S2 Executive data 1024 0 1024 0
S2 Resident image code 4096 2513 1583 0
S2 Resident image data 512 512 0 0
Slot Usage (slots): Total Free Resident Swapped
Process Entry Slots 850 837 13 0
Dynamic Memory Usage: Total Free In Use Largest
Nonpaged Dynamic Memory (MB) 16.00 14.97 1.02 14.92
USB Addressable Memory (KB) 1024.00 1022.87 1.12 1022.87
Paged Dynamic Memory (MB) 10.79 6.80 3.99 6.80
Lock Manager Dyn Memory (KB) 592.00 186.03 405.96
S2 Dynamic Memory Usage (MB) 7.96 7.80 0.15 7.80
Buffer Object Usage (pages): In Use Peak
32-bit System Space Windows (S0/S1) 0 0
64-bit System Space Windows (S2) 0 0
Physical pages locked by buffer objects 0 0
Memory Reservations (pages): Group Reserved In Use Type
Total (0 bytes reserved) 0 0
Paging File Usage (8KB pages): Index Free Size
DISK$SYSTEMDISK:[SYS0.SYSEXE]PAGEFILE.SYS;1
254 280 280
Total committed paging file usage: 2406
Of the physical pages in use, 65458 pages are permanently allocated to OpenVMS.
Topic author - Active Contributor
Re: OpenVMS 9.2-1 on real hardware success.
Following your lead, here is the show mem on my machine.
Code:
$ show mem
System Memory Resources on 21-APR-2023 23:01:43.35
Physical Memory Usage (pages): Total Free In Use Modified
Main Memory (31.96GB) 4190115 4011901 177827 387
Extended File Cache (Time of last reset: 21-APR-2023 15:27:39.97)
Allocated (MBytes) 66.80 Maximum size (MBytes) 16367.63
Free (MBytes) 0.01 Minimum size (MBytes) 3.12
In use (MBytes) 66.78 Percentage Read I/Os 98%
Read hit rate 77% Write hit rate 0%
Read I/O count 19064 Write I/O count 261
Read hit count 14754 Write hit count 0
Reads bypassing cache 27 Writes bypassing cache 0
Files cached open 282 Files cached closed 209
Vols in Full XFC mode 0 Vols in VIOC Compatible mode 2
Vols in No Caching mode 0 Vols in Perm. No Caching mode 0
Granularity Hint Regions (pages): Total Free In Use Released
S0 Execlet data 2048 1716 332 0
S0 Executive data 5120 477 4643 0
S0 Executive RO data 1024 833 191 0
S0 Resident image code 3072 2989 83 0
S0 Resident image data 512 512 0 0
S0 Resident RO image data 1024 1024 0 0
S2 Execlet code 4096 1459 2637 0
S2 Execlet data 4096 4096 0 0
S2 Executive data 1024 0 1024 0
S2 Resident image code 4096 1511 2585 0
S2 Resident image data 512 512 0 0
Slot Usage (slots): Total Free Resident Swapped
Process Entry Slots 997 972 25 0
Dynamic Memory Usage: Total Free In Use Largest
Nonpaged Dynamic Memory (MB) 24.00 17.64 6.35 17.45
USB Addressable Memory (KB) 1024.00 1022.87 1.12 1022.87
Paged Dynamic Memory (MB) 12.13 6.99 5.14 6.98
Lock Manager Dyn Memory (KB) 992.00 4.46 987.53
S2 Dynamic Memory Usage (MB) 7.94 7.78 0.15 7.78
Buffer Object Usage (pages): In Use Peak
32-bit System Space Windows (S0/S1) 1 2
64-bit System Space Windows (S2) 0 0
Physical pages locked by buffer objects 1 0
Memory Reservations (pages): Group Reserved In Use Type
Total (0 bytes reserved) 0 0
Paging File Usage (8KB pages): Index Free Size
DISK$X86SYS:[SYS0.SYSEXE]PAGEFILE.SYS;1
254 280 280
Total committed paging file usage: 15179
Of the physical pages in use, 155660 pages are permanently allocated to OpenVMS.
Topic author - Active Contributor
Re: OpenVMS 9.2-1 on real hardware success.
I will list the cards I have tested in the machine and whether they worked or not.
Works:
Intel PRO/1000 PT NIC works. EIA2-5 appear in show dev.
LSI 1068E works. This is the card that I presently boot HELDEN from.
Radeon 6450 with EFI ROM (even boots/autoboots headless!)
Radeon WX2100 (BOOTMGR doesn't load without an attached display.)
Integrated i217, including netboot
Doesn't work:
HP P222: controller and disks don't appear.
Integrated i210
Various M.2 NVMe storage devices
Awaiting arrival:
LSI 3xx controller (Cisco branded)
LSI 9207-8i
Topic author - Active Contributor
Re: OpenVMS 9.2-1 on real hardware success.
The 9207-8i doesn't work. It doesn't appear as a PK device.
Re: OpenVMS 9.2-1 on real hardware success.
If you open SYS$SYSROOT:[SYSEXE]SYS$CONFIG.DAT you can see all the devices OpenVMS can recognize. For instance:
Code:
! 10G 2P X550-t Adapter X550
! 2P Intel X550-t Adapter (10 Gigabit Ethernet)
device = "2P Intel X550-t Adapter (10 Gb)"
name = EI
driver = SYS$EIX550
adapter = PCIE
id = 0x15638086
boot_flags = HW_CTRL_LTR, UNIT_0
flags = BOOT, NETWORK
end_device
... but it looks like it includes devices from Alpha and Itanium, and if you search [SYS$LDR] for the driver, you won't find it there. The only two Ethernet drivers that seem to exist are EIX550 and EI1000x.
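For whatever card is in question, that check can be done quickly from DCL. A sketch only: the ID string is the X550 one from the entry above, SYS$LOADABLE_IMAGES is the logical pointing at [SYS$LDR], and the SYS$EI*.EXE pattern is just a guess at how the Ethernet driver images are named, so adjust both to suit.
Code:
$ SEARCH SYS$SYSROOT:[SYSEXE]SYS$CONFIG.DAT "15638086"   ! is the card's PCI ID listed at all?
$ DIRECTORY SYS$LOADABLE_IMAGES:SYS$EI*.EXE              ! which Ethernet driver images actually ship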
Re: OpenVMS 9.2-1 on real hardware success.
I have a DL380 Gen9 with a 441 RAID controller.
Previously I have installed E9.2 and V9.2 onto this one.
Embedded HPE Ethernet 1Gb 4-port 331i Adapter - NIC N/A N/A N/A N/A 20.19.51 OK
Embedded Smart Array P440ar Controller N/A N/A N/A N/A 7.20 Unknown
Attempting to go to E9.2-1 fails no matter what I do. I get to the E9.2-1 VMS_BOOTMGR, but then the attempt to boot into the actual installation causes an eventual system crash:
_______________________________________________
GRAPHICAL OUTPUT HAS BEEN SUSPENDED
USE A TERMINAL UTILITY FOR ACCESS
_______________________________________________
VSI Primary Kernel SYSBOOT Jan 23 2023 14:03:45
VMS Software, Inc. OpenVMS (TM) x86_64 Operating System, E9.2-1
Copyright 2023 VMS Software, Inc.
MDS Mitigation active, variant verw(MD_CLEAR)
%SMP-I-CPUTRN, CPU #3 has joined the active set.
%SMP-I-CPUTRN, CPU #1 has joined the active set.
%SMP-I-CPUTRN, CPU #7 has joined the active set.
%SMP-I-CPUTRN, CPU #2 has joined the active set.
%SMP-I-CPUTRN, CPU #6 has joined the active set.
%SMP-I-CPUTRN, CPU #4 has joined the active set.
%SMP-I-CPUTRN, CPU #5 has joined the active set.
Installing required known files...
Configuring devices...
VSI Dump Kernel SYSBOOT Jan 23 2023 14:03:45
And it looks like it is related to the RAID controller. If I disable it, I can boot the installation media, but obviously that is a no-go, because then I don't have the "to be system disk" any more.
Any ideas?
_veli