Increasing Max record size on a variable length RMS Indexed file


Topic author
bboyczu
Visitor
Posts: 1
Joined: Thu Apr 15, 2021 9:50 pm
Reputation: 0
Status: Offline

Increasing Max record size on a variable length RMS Indexed file

Post by bboyczu » Thu Apr 15, 2021 11:53 pm

We have a few hundred different RMS files as part of a mission-critical COBOL application running on OpenVMS 8.3 on Alpha servers. We need to expand the maximum record length of an RMS indexed file with variable-length records from 4500 to 4600 bytes. The file contains two different record types: a 4500-byte header record and a 700-byte detail record associated with the header, in a one-to-many relationship. The 100 bytes are being added to the end of the header record as "FILLER" for now. All the COBOL programs accessing the file will be recompiled with the expanded record layout, and the application will be modified in the near future to add and start populating new fields.
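Roughly, the record layouts look something like this (names, the key size, and the field splits are simplified stand-ins, not our actual copybook):

       FD  APP-FILE.
      *    Header record: was 4500 bytes, 4600 with the new FILLER.
       01  HEADER-REC.
           05  HDR-KEY                 PIC X(44).
           05  HDR-BODY                PIC X(4456).
           05  FILLER                  PIC X(100).
      *    Detail record: 700 bytes, sharing the same key layout.
       01  DETAIL-REC.
           05  DTL-KEY                 PIC X(44).
           05  DTL-BODY                PIC X(656).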

When we increase the record size of a file with fixed-length records, we use the "/PAD" qualifier on the CONVERT command to expand short records into the new file. That option is not allowed when converting an indexed file with variable-length records, so the new file will allow 4600-byte records but will only contain 4500-byte records immediately after the CONVERT.
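For comparison, the fixed-length procedure looks something like this (file names made up):

    $ ! Fixed-length output: CONVERT/PAD expands short input records
    $ ! to the output record length (a pad character may be specified).
    $ CONVERT/FDL=NEWFIX.FDL/PAD OLDFIX.DAT NEWFIX.DAT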

The current file contains almost 20 million records, and the current bucket size in Area 0 is 9. DATA and INDEX COMPRESSION are both "YES" and FILL is "100".

I used the output of $ANAL/RMS/FDL in $EDIT/FDL to change the record size to 4600, then used INVOKE-OPTIMIZE to generate a new FDL. The new bucket size in Area 0 is 10.

We plan to use the CONVERT utility with the new FDL to create the file with the 4600-byte maximum record size.
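So the planned sequence is something like this (file names are placeholders):

    $ ANALYZE/RMS_FILE/FDL APPFILE.IDX   ! writes APPFILE.FDL
    $ EDIT/FDL APPFILE.FDL               ! RECORD SIZE 4600, then INVOKE-OPTIMIZE
    $ CONVERT/FDL=APPFILE.FDL/STATISTICS APPFILE.IDX APPFILE_NEW.IDX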

Is there any issue doing it this way or anything to watch out for?

Especially anything performance-related, as this file is central to the application. For example, when the application updates an existing record in the new file, it will read a 4500-byte record and REWRITE a 4600-byte record.


hein
Active Contributor
Posts: 41
Joined: Fri Dec 25, 2020 5:20 pm
Reputation: 0
Status: Offline

Re: Increasing Max record size on a variable length RMS Indexed file

Post by hein » Fri Apr 16, 2021 10:01 am

This is cross-posted in comp.os.vms.
I'm in a call now, but I'll post a summary answer from the discussion there in a short while, to build on as needed.

Hein.

Added in 2 hours 48 minutes 34 seconds:
Note: this question is cross-posted in comp.os.vms, with several interesting reactions.

>>> The file contains two different record types: a 4500-byte header record and a 700-byte detail record associated with the header, in a one-to-many relationship. The 100 bytes are being added to the end of the header record as "FILLER" for now.

Normally all one needs to do is 'allow' 4600-byte records with: SET FILE/ATTR=MRS=4600
However, the old bucket size was 9 blocks (4608 bytes), which would NOT allow a 4600-byte record to be written due to BUCKET OVERHEAD (15 bytes) and RECORD OVERHEAD (11 bytes): 4608 - 15 - 11 = 4582 usable bytes, less than 4600. A CONVERT is needed.
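Spelled out, with an assumed file name, the quick route and the arithmetic that rules it out here:

    $ ! 9 blocks * 512 = 4608 bytes per bucket
    $ ! 4608 - 15 (bucket overhead) - 11 (record overhead) = 4582 < 4600
    $ SET FILE/ATTRIBUTES=(MRS:4600) APPFILE.IDX   ! not enough by itself here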

>>> When we increase the record size of a file with fixed-length records, we use the "/PAD" qualifier on the CONVERT command to expand short records into the new file. That option is not allowed when converting an indexed file with variable-length records.

You wouldn't want that to be allowed anyway, because (without pre-filtering steps) it would also pad the 700-byte detail records to 4600 bytes.

>>> I used the output of $ANAL/RMS/FDL in $EDIT/FDL to change the record size to 4600, then used INVOKE-OPTIMIZE to generate a new FDL. The new bucket size in Area 0 is 10.

That will work, but a 4600-byte record will only just fit, with little room for (unlikely) further expansion in the future.
OK, with compression you can probably fit one header and one related detail record per bucket, but not much more.
Without further details I'd recommend going straight to 16 or 24 blocks or so, to catch all the detail records for a header record in one bucket.
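As an FDL fragment, that change would look something like this (only the relevant attribute shown; the rest stays as EDIT/FDL generated it):

    AREA 0
            BUCKET_SIZE             16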

>>> We plan to use the CONVERT utility with the new FDL to create the file with the 4600-byte maximum record size.
>>> Is there any issue doing it this way or anything to watch out for?

Not really, it is only a minor increment.
Please consider allowing a still larger number, like 8000 bytes, for the maximum record size if selecting a 16-block bucket as suggested in c.o.v.
Perhaps even set no maximum (MRS=0), although that can confuse some programs (SORT, for example) when allocating reasonable record buffers.

>>> Especially anything performance-related, as this file is central to the application. For example, when the application updates an existing record in the new file, it will read a 4500-byte record and REWRITE a 4600-byte record.

Well, you don't want each update to immediately run out of remaining spare space in a bucket, causing bucket splits.
Please select a 90% data fill, not 100%, to make sure there is enough room for anticipated growth.
Note that since it is just 'fill' initially, those 100 bytes will need fewer than 10 bytes with data compression on, but eventually the real bytes will be needed.
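Again as an FDL fragment (assuming a single key 0; CONVERT honors these percentages by default, but fills buckets completely if /FILL_BUCKETS is specified):

    KEY 0
            DATA_FILL               90
            INDEX_FILL              90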

Some folks in c.o.v correctly observed that it might be better to have two files, one for headers and one for details.
Regardless, the OP has inherited this design and must work within those constraints. Too bad.
FWIW, a single file is perfectly explainable, and even reasonable, 'back in the day':
1) When you read a header record, you are likely to have already read the associated detail records,
notably when a good (over)sized bucket is selected.
2) Primary key compression (for the 44-byte PK in this example!) is at its most effective, and essentially the index structure is shared.
The detail records probably just have 2 or 3 unique key bytes, using most of the header key bytes as a base.
3) Back in the day memory was limited, and fewer files with minimal bucket sizes was all the rage.
Some languages (BASIC, FORTRAN) explicitly had a maximum of 99 files; other environments implicitly could not handle more.
4) COBOL and DATATRIEVE and the like made it really easy to have alternate record definitions, with REDEFINES and structure variants depending on a control column (or not).
It is a nightmare for modern (less than 20 years old :-) SQL access through JDBC, ODBC and such.

Hein.


arne_v
Master
Posts: 299
Joined: Fri Apr 17, 2020 7:31 pm
Reputation: 0
Location: Rhode Island, USA
Status: Offline

Re: Increasing Max record size on a variable length RMS Indexed file

Post by arne_v » Thu Jul 08, 2021 8:02 pm

Variant records (to use the Pascal term) are a bit cumbersome to manage via an SQL API.

But with an OO language and an ORM that can map a class hierarchy to tables on top of the SQL API, it works nicely.
Arne
arne@vajhoej.dk
VMS user since 1986
