VM regions are often used as sharable address spaces in VOS applications. This is a very efficient way of allowing multiple processes to access the same set of addresses. The main disadvantage is that the space is restricted to well under 2 GB due to VOS’s virtual memory limits, and this usage cuts into VM available for other purposes.
Address space can also be shared via the file system by mapping addresses onto a binary file and accessing those addresses via file I/O operations. This eliminates VM region size restrictions and frees up otherwise dedicated VM. It also allows for coordination via region locking, but is far more expensive than direct access of shared VM, particularly if actual disk I/O is involved.
POSIX applications sometimes use stream files in binary mode for this purpose, establishing the desired size of the address space via ftruncate (used for its ability to extend the location of EOF) and then positioning to areas within the file from which data is read or written. In this way, the file serves as backing store for the address space, and processes share the space using file-oriented interfaces such as fseek/fread/fwrite.
Prior to 64-bit stream files (introduced in Release 17.2), this could be very expensive on VOS because ordinary stream files cannot be sparse, and extending EOF involves explicitly allocating and writing blocks of binary zeros. For example, if the desired address space were, say, 2 GB, then VOS would require 524,288 blocks of disk storage, even though only a small amount of that storage might ever contain values other than binary zeros. With 64-bit stream files, this type of application can now run efficiently on VOS, requiring only as much disk space as is actually needed. POSIX applications automatically get the benefit of 64-bit stream files when built for large file awareness; you should do this to get these performance benefits, even if files are not expected to grow to more than 2 GB. (See the OpenVOS POSIX.1 Reference Manual, R502, “Porting Existing Applications to the 64-bit Stream File Environment” for more information.)
Similar use of file-backed shared address space is possible in native VOS applications, i.e., those using s$ interfaces. VOS provides a number of features which can greatly reduce disk I/O when using this technique, essentially making CPU usage the primary cost. The introduction of sparse 64-bit stream files in Release 17.2 makes this approach to shared address space even more attractive.
Memory Resident and RAM files
A memory resident file is identified using the set_open_options command. A settable portion of the disk cache is reserved for memory resident files; depending on the physical memory available and other uses of cache, this can be up to 9-10 GB. Once the blocks of a memory resident file are in cache, they will not incur subsequent disk reads, as long as their total number doesn’t exceed that portion of the cache. If it does, only the most recently referenced blocks retain this advantage.
RAM files contain non-persistent data and are useful when the data in a file does not need to be committed to disk once the application is done using it. You can mark a file with the set_ram_file command, but any file for which s$delete_file_on_close is called is automatically treated as a RAM file from that point on.
While memory resident files do not incur subsequent disk reads, blocks are still written at regular intervals, and the number of modified blocks allowed in cache is limited just as for any other file. Modified-block limits prevent the situation in which millions of blocks must be written when the file is closed or flushed (a 4 GB file occupies a million blocks). This limitation can slow down an application which modifies data faster than it can be written. RAM files avoid this type of throttling, since their data never needs to be written to disk, even when the file is deactivated.
Using memory resident RAM files provides an address space in cache memory, avoiding most disk I/O. This address space is limited only by cache size, which in turn is based on available physical memory, not virtual memory (the cache manager shares VM addresses to access physical memory). Note: a single file-based address space can grow up to 512 GB, but when it is larger than the memory resident portion of cache, it loses the I/O advantages, at least for those blocks which have not been recently referenced.
The contents of a stream file can be accessed using s$seq_position with byte-oriented opcodes and then examined or modified using s$read_raw/s$write_raw. 64-bit stream files can be up to 512 GB without requiring any significant disk space except for regions of the file which are actually used, i.e., set to non-zero values.
create_file scratch -organization stream64 -extent_size 256
This creates a DAE-256 file called “scratch”.
set_ram_file scratch
This allows the file to have unlimited access to cache, avoiding any throttling related to the number of modified blocks, and avoiding disk writes altogether except in the background or when cache is exhausted and needed for other purposes. The file must be empty when this command is used. The same effect can be achieved programmatically via s$set_ram_file. When the last opener closes the file, the data in cache is discarded and never entails disk writes.
set_open_options scratch -cache_mode memory_resident
This causes as many of this file’s blocks as possible to be retained in cache indefinitely. The actual number is a function of the cache size and the memory residence percentage, as set via the set_tuning_parameters command.
The following sequence shows an example of programmatic usage (illustrated using test_system_calls):
tsc: s$attach_port p scratch
tsc: s$open p -io_type update
Now, provide an address space of around 512 GB:
tsc: extend_stream_file p 549235720192 (s$control EXTEND_STREAM_FILE)
and use it to store data:
tsc: s$seq_position_x p bwd_by_bytes 3 (current position after extend is EOF)
tsc: s$write_raw p END
tsc: s$seq_position_x p bof
tsc: s$write_raw p START
tsc: s$seq_position_x p fwd_by_bytes 2000
tsc: s$write_raw p 2000
Note: s$seq_position supports opcodes to position to absolute byte offsets as well.
The result looks like this, with the file occupying just two data blocks on disk:
..dump_file scratch -brief
%swsle#Raid4>otto>d-3>new>scratch 15-02-25 16:04:17 est
Block number 1
000 53544152 54000000 00000000 00000000 |START...........|
010 00000000 00000000 00000000 00000000 |................|
7D0 00000000 00323030 30000000 00000000 |.....2000.......|
7E0 00000000 00000000 00000000 00000000 |................|
FF0 00000000 00000000 00000000 00000000 |................|
Block number 131870736
000 00000000 00000000 00000000 00000000 |................|
FF0 00000000 00000000 00000000 00454E44 |.............END|
Note: blocks 1 and 131870736 are in cache and typically will not have been written to disk, although disk space is reserved for them, should they need to be written because cache resources are strained. The dump_file command above is seeing the cached blocks, not reading disk.
When a RAM file is deactivated (is no longer open in any process), VOS truncates the file (avoiding writing modified blocks to disk) and releases all disk space. Under typical circumstances, the above sequence would result in no disk I/O at all except writes of the two file map blocks, which are eventually discarded.
tsc: s$close p
Now the file is empty and occupies no disk space:
tsc: ..dump_file scratch
%swsle#Raid4>otto>d-3>new>scratch 15-02-25 16:07:12 est