VMMC Communication Model
User’s Guide
and
Library Developer’s Guide
Release 1.0
The Shrimp Project
Department of Computer Science
Princeton University
February 1999
About this Document
Welcome to VMMC! The goal of this document is to describe three items:
Users of VMMC are encouraged to join the email list used for VMMC announcements, to keep up with the latest VMMC developments. To join the list, send a message to majordomo@cs.princeton.edu with "subscribe vmmc-announce youremail@yourcompany.com" in the body of the message. Email traffic is minimal.
We welcome input about functionality, bugs, and possible extensions to the API. Please send your comments or questions to vmmc-support@cs.princeton.edu. For more information on The SHRIMP Project (including technical papers), please visit our web site at http://www.cs.princeton.edu/shrimp.
Introduction
The SHRIMP Project at Princeton University studies ways to integrate commodity desktop computers, such as PCs and workstations, into inexpensive, high-performance multicomputers. The goal is to build an inexpensive system from off-the-shelf components with minimal custom-designed hardware. Ideally, such a system should offer performance competitive with, or better than, that of specially designed multicomputers for both message-passing and shared-memory programming models.
In the course of our research we found that the network interfaces (NIs) of existing multicomputers and workstation networks introduce large software overheads for communication. The main reason for this overhead is that these network interfaces require a significant number of instructions at the operating system and user levels to provide protection and buffer management. Motivated by this fact, we designed two custom network interfaces (SHRIMP-I and SHRIMP-II) for low-latency, high-bandwidth user-to-user communication. These network interfaces implement our model of user-level communication called VMMC (virtual memory-mapped communication), which provides direct data transfer between the sender's and receiver's virtual address spaces. This model eliminates operating system involvement in communication, provides full protection, supports user-level buffer management and zero-copy protocols, and minimizes software communication overhead.
The goal of VMMC is to allow for the creation of libraries implementing a variety of new and old APIs for parallel and distributed programming. Examples of such libraries include message-passing APIs such as PVM and NX/2, distributed shared memory, and client-server APIs such as RPC and stream sockets.
The introduction of commodity programmable network interfaces enabled us to transfer most of the functionality of our custom network interface to off-the-shelf NI hardware. We achieved this by implementing support for our user-level communication model in the NI firmware. The firmware together with a device driver and a user-level library implement the VMMC communication model.
Figure 1. The software components of VMMC
Administrator’s Guide
Introduction
The VMMC Cluster Service (VMMCSVR) provides basic support for remote process creation and management. On each VMMC cluster node, an instance of VMMCSVR runs as a Windows NT service. A user program can request the VMMCSVR service to create and destroy processes on that node. The VMMCSVR service is started at system boot time. An administrator can manually stop and start this service using the NET.EXE command provided by Windows NT.
In order to create processes through VMMCSVR, a user must have an active VMMC "session" on a VMMC node. A session captures the environment in which the user's processes are created, such as a mounted network share and the output logging directory. Information required for process creation and destruction is managed on a per-session basis. A VMMC session consists of a session name, a user name, a user password, a local drive, a network share, and an output logging directory. The session name is required; the other attributes can be determined by VMMCSVR. The same session, identified by the session name, must be established on the VMMC cluster nodes in order for remote process creation (via the vmmc_Spawn API) to work.
The VMMC distribution contains a utility, CFGVMMC, for session management and remote process management.
Session
A VMMC session consists of a session name, a user name, a user password, a local drive, a network share, and an output logging directory. The user must specify a non-empty string for the session name; the string can contain any characters. On any VMMC node, at most one session can exist with a given session name.
Each session requires a drive letter on which the VMMCSVR sets a user program’s current working directory when it creates the process. The current working directory is a full path relative to the drive, without the drive letter attached.
When a user deletes a session, all processes created within the session are killed automatically by VMMCSVR.
The session drive
The drive letter can represent a local file system, for example, C:, or represent a mounted network share. In the latter case, there are two possibilities. The network share can be mounted to a drive prior to session creation. Or, the user can instruct the VMMCSVR to mount a specific network share during session creation. Please refer to the section on the CFGVMMC utility for command syntax.
When letting VMMCSVR mount the network share, the user need not specify a local drive letter. In this case, VMMCSVR selects an available drive letter to use. The drive letter for a given session need not be the same on all nodes. VMMCSVR keeps track of the drive letter for each session and sets a process’ working directory correctly during process creation.
If the network share is mounted by VMMCSVR during session creation, VMMCSVR will delete the mount when the user deletes the session. Otherwise, the mount survives the session deletion.
VMMCSVR Service
The sessions are managed by VMMCSVR, implemented as a Windows NT service. VMMCSVR is installed as an auto-start service during VMMC system installation. By default, VMMCSVR runs in the LocalSystem account. The user can use the Service Control Manager (available from the Control Panel) to enable VMMCSVR interaction with a logged-on user, so that a process created by VMMCSVR can appear on the logged-on user's Windows desktop and interact with the user just like other applications.
However, the Windows NT 4.0 operating system places severe restrictions on services running under the LocalSystem account. In particular, such services cannot mount a network share. This is why the CFGVMMC utility allows a user to create a session with a pre-mounted network share.
If the user does not care about running interactive programs, the VMMCSVR service can be configured to run under a user account. The Service Control Manager lets a user configure the account attribute of a service. Note that the account that VMMCSVR runs under need not be a network account. The VMMCSVR still needs a network account name and password from the user in order to mount a network share onto a local drive letter.
Output logging
Utilities: CFGVMMC.EXE
The utility program, CFGVMMC.EXE, can be run on any Windows NT workstation, including non-VMMC cluster nodes. It uses RPC to contact VMMCSVRs running on VMMC cluster nodes for session creation and deletion, as well as process creation and destruction.
Syntax:
CFGVMMC hostname add session user password drive network_share output_dir
CFGVMMC hostname del session
CFGVMMC hostname run session program_path working_dir [args]
CFGVMMC hostname kill session [pid]
Where:
Session is a non-empty string of characters.
User is a string; it can be empty, e.g., vmmc or "".
Password is a string; it can be empty, e.g., guestit or "".
Drive is a letter [A-Z] or the character '-', in which case VMMCSVR allocates the drive.
Network_share is a string that represents a UNC path, e.g., \\fs\test, or a null string "".
Output_dir is a string representing the absolute path of the output logging directory, e.g., \vmmcout. It cannot be a null string.
Program_path is the path name of the executable program; it can be either relative to the working directory or absolute, e.g., \mytest\bin\test.exe or bin\test.exe.
Working_dir is the absolute path name.
Examples:
Suppose under \\newfs\snd\release\vmmc-dev.doc, there are two directories:
Under \bin, there is a test.exe whose syntax is: test.exe arg1 arg2 arg3.
CFGVMMC host1 add TestSession guest passwd K \\FS\Guest \tmp
Mounts \\FS\Guest on K: using passwd for account guest.
CFGVMMC host1 add TestSession guest passwd - \\FS\Guest \tmp
VMMCSVR selects a drive to mount \\FS\Guest.
CFGVMMC hosta add TestSession "" "" K "" \tmp
K is either a local disk drive or an already-mounted drive.
CFGVMMC host1 del TestSession
CFGVMMC host1 run TestSession \bin\test.exe \ arg1 arg2 arg3
CFGVMMC host1 run TestSession test.exe \bin arg1 arg2 arg3
CFGVMMC host1 kill TestSession
Kills all programs created in TestSession.
CFGVMMC host1 kill TestSession 111
Kill process with Pid=111 in TestSession.
Starting and stopping VMMC
Administering the VMMC cluster involves enabling and disabling the VMMC system, checking log files, and managing sessions and processes. The active components of the VMMC runtime system on each cluster node include the cluster server VMMCSVR, the device driver VMMCDRV, and the firmware running on the Myrinet network interface.
VMMCSVR: The VMMCSVR is configured to start up at system boot time. The VMMCSVR runs under the local "vmmc" account. Its type registry setting (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VMMCSVR\type) is 0x110, which allows VMMCSVR to interact with a logged-on user. An administrator can manually stop and re-start VMMCSVR with Windows NT's NET command as follows:
NET start VMMCSVR
NET stop VMMCSVR
VMMCDRV: The VMMCDRV driver is started and stopped manually, using the NET command.
Myrinet: An administrator can enable the Myrinet network interface by loading the MCP (Myrinet Control Program). This can be done only after VMMCDRV has been properly started. The VMMC system software on each cluster node includes a Windows script (batch file) MCPSTART.BAT for loading the MCP and a script MCPSTOP.BAT for disabling the Myrinet network interface. Both are located in %SystemRoot%\vmmc\bin.
System log files
On each VMMC cluster node, the VMMC system produces two log files, server.log and driver.log. Both reside in the %SystemRoot%\vmmc\logs directory. The server.log file contains information and error logs from VMMCSVR, mostly regarding session and process management. The driver.log file contains logs for the VMMCDRV device driver and the Myrinet network interface.
Since the %SystemRoot%\vmmc directory is exported as the VMMC share during setup, an administrator can view the log files remotely by accessing the VMMC share from a different Windows NT workstation. For example, the following commands enable access to the log file on vmmc-node-1:
NET use \\vmmc-node-1\vmmc vmmc /user:vmmc
TYPE \\vmmc-node-1\vmmc\logs\server.log
VMMC system account and network share
During VMMC system installation, a local account vmmc is created on each cluster node. We have chosen "vmmc" as the password. An administrator can modify it using the User Manager program. However, the administrator must make sure that the VMMC service can still run under the local vmmc account. The Service Control Manager (invoked from the Control Panel) can be used for this purpose.
The home directory for the vmmc account is %SystemRoot%\vmmc. If UWIN is installed on the cluster node, the .rhosts file must be located in this directory in order for the rsh service to work properly.
User’s Guide
In this section we give an overview of the components involved in the Windows NT implementation of VMMC. We also describe how users can run VMMC applications using our GUI interface.
Programming Model
This section describes the VMMC programming model.
Virtual memory-mapped communication is a mechanism for protected data transfer from the sender's virtual address space to the receiver's virtual address space. Communication is protected because it may take place only after the receiver gives the sender permission to transfer data to a given area of the receiver's address space. The receiving process expresses this permission by exporting areas of its address space as receive buffers where it is willing to accept incoming data. A sending process must import remote buffers which it will use as destinations for transferred data. An exporter can restrict possible importers of a buffer; VMMC enforces the restrictions when an import is attempted. After a successful import, the sender can transfer data from its virtual memory into the imported receive buffer. VMMC makes sure that transferred data do not overwrite the receiver's memory outside the destination receive buffer.
VMMC supports two data transfer modes: deliberate update and automatic update (NOTE: automatic update is only available on SHRIMP network interfaces and will not be discussed here.) Deliberate update is an explicitly initiated transfer of a contiguous block of data from any (readable) virtual address in the caller's process to a previously imported receive buffer. VMMC guarantees in-order, reliable delivery of deliberate update messages.
When a message arrives at its destination, it is transferred directly into the memory of the receiving process, without interrupting the receiver's CPU. Thus there is no explicit receive operation in VMMC.
VMMC supports user-level buffer management because transfer is performed between user-level memory locations. Buffer management is divorced from the data movement mechanism and becomes the responsibility of the communicating parties. Zero copy protocols are possible because a direct application-to-application transfer of data can occur.
The CPU overhead to send data is very small: only a few user-level instructions are needed for deliberate update. The model does not impose any CPU overhead to receive data, as there is no explicit receive operation. CPU involvement in receiving data can be as little as checking a flag; moreover, program logic can be used to reason about what data has already arrived (since messages are delivered in order).
Our model as described in this section can be applied to communication between processes executing on one uniprocessor machine, on separate processors of a shared memory multiprocessor, or between processes executing on different nodes in a local area network. In the former two cases, VMMC is a special restricted case of shared memory communication with deliberate update added for bulk transfer. The LAN case is discussed in the next section.
A SHRIMP machine is a set of UNIX nodes. Each node is identified with a unique Internet address. VMMC uses these addresses for node identification. The vmmc_Hosts() call can be used to obtain the node ids of a SHRIMP machine.
Processes are identified with squids. A squid (SHRIMP quad-word ID) is a kind of process ID which is unique within a given node and is guaranteed not to be recycled in the lifetime of a given VMMC machine (UNIX process ids are recycled, so they cannot be used for identification across nodes). Note that full process identification is a (node, squid) pair. vmmc_GetSquid() returns the squid of the calling process. vmmc_MyNode() returns the id of the node of the calling process.
The user address space is divided into VMMC pages, whose size is given by vmmc_PageSize() (currently 4096 bytes). Each VMMC page is a multiple of a virtual memory page and contains an integral number of VMMC words. The size of a VMMC word is given by vmmc_WordSize() (currently 4 bytes).
Communication in the VMMC model is based on receive buffers. A receive buffer is a contiguous region of process memory used for receiving data from other processes using VMMC. Each receive buffer is identified by a user-selected buffer id (an unsigned integer). A receiver process makes a receive buffer available to senders with the vmmc_ExportRecvBuf() call, which takes as arguments the buffer id, the buffer starting address, and the buffer length. The buffer id must be unique among all ids of receive buffers exported by a given process. Receive buffers cannot overlap.
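The following sketch illustrates exporting a receive buffer. It is a minimal example, not taken from the VMMC distribution: the header name and the exact signature of vmmc_ExportRecvBuf() (including whether the length is given in bytes or VMMC words) are assumptions; only the buffer id, starting address, and length arguments come from the description above.

/* Sketch: exporting a receive buffer (assumed header name and signature). */
#include <stdio.h>
#include <stdlib.h>
#include "vmmc.h"                         /* assumed header name */

#define RECV_BUF_ID 1                     /* user-selected id, unique per exporting process */

int main(void)
{
    /* 1024 VMMC words of receive space.  For communication with untrusted
     * importers the buffer should be page-aligned and a multiple of
     * vmmc_PageSize() in size (see the protection note later in this section). */
    unsigned nwords  = 1024;
    size_t   nbytes  = nwords * vmmc_WordSize();
    void    *recvBuf = malloc(nbytes);

    /* Arguments per the description above: buffer id, starting address, length. */
    if (vmmc_ExportRecvBuf(RECV_BUF_ID, recvBuf, nbytes) < 0) {
        fprintf(stderr, "export of receive buffer %d failed\n", RECV_BUF_ID);
        return 1;
    }
    /* Senders can now import (RECV_BUF_ID, our node, our squid) and transfer data here. */
    return 0;
}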
Figure 2. Destination Address Space (dspace)
Destination proxy space (DestSpace for short) is a logically separate special address space in each sender process; it is used for addressing imported receive buffers. This new address space is a subspace of the sender's virtual address space, but it is not backed by local memory and its addresses do not refer to code or local data. Instead, they are used only to specify destinations for data transfer. After the sender imports a remote receive buffer, a representation of the imported buffer is mapped into the sender's destination proxy space. This representation is simply a range of addresses in destination proxy space. An address belonging to a given range can be translated by VMMC into a destination machine, process, and virtual address. Figure 2 shows the mapping of a receive buffer allocated on node Receiver into the destination proxy space of node Sender.
A sender process has to import a given receive buffer before it can send any data. The import operation is implemented with the vmmc_ImportRecvBuf() call. Import takes as arguments the receive buffer id and the full identification of the process (nodeId, pid) which exported this receive buffer. Import succeeds only after the export call has completed for this receive buffer. There is also an asynchronous version of the import call, vmmc_ImportRecvBufReq(), which issues only an import request and returns immediately.
For a given process, the ids of exported and imported buffers belong to two disjoint name spaces. As a result, one buffer id can be used in both export and import calls. However, for a given process, the ids of exported buffers have to be unique. The full identification of an imported buffer is not its buffer id, but the triple (buffid, nodeId, pid). As a result it is possible to import two buffers with the same buffer id if two different processes exported them. With one export call, a process can export exactly one receive buffer to a potentially unlimited number of processes. With one import call, a process can import only one receive buffer.
Imported receive buffers are mapped into destination proxy space (Figure 2). The import call returns an address in the local DestSpace which corresponds to the imported buffer. If we import a buffer of size nwords (each word is vmmc_WordSize() bytes, currently four) and the address assigned by the import call is raddr, then the range raddr,...,raddr+nwords*4-1 in DestSpace corresponds to the imported receive buffer. This address range represents a proxy buffer in the sender's address space for this receive buffer. A given address in DestSpace can belong to no more than one proxy. The terms proxy and imported receive buffer denote the same thing: a local representation of a receive buffer which has been imported from a remote node and mapped into the local DestSpace.
We say that a successful import establishes an import-export link between a receive buffer and its local proxy. Establishing an import-export mapping requires a trusted third party (the operating system kernel or a daemon process) to verify protection. Therefore, creating mappings can be relatively expensive. However, this should occur infrequently, as import is required only once for a given receive buffer; afterward messages can be sent directly from user level.
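A minimal import sketch follows. The argument order and the return convention (shown here as a DestSpace address, with NULL on failure) are assumptions based on the description above; the real call may instead return a status code and fill in an out-parameter.

/* Sketch: importing a remote receive buffer (assumed signature). */
#include <stdio.h>
#include "vmmc.h"                          /* assumed header name */

#define RECV_BUF_ID 1                      /* id under which the receiver exported its buffer */

char *import_proxy(int recvNode, int recvSquid)
{
    /* Assumed to return the DestSpace address of the proxy, or NULL on error. */
    char *raddr = (char *)vmmc_ImportRecvBuf(RECV_BUF_ID, recvNode, recvSquid);
    if (raddr == NULL)
        fprintf(stderr, "import failed (has the receiver exported buffer %d?)\n", RECV_BUF_ID);
    /* On success, raddr ... raddr + nwords*vmmc_WordSize() - 1 is the proxy
     * for the remote receive buffer and can be used as a send destination. */
    return raddr;
}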
Receive buffers need not begin or end on a page boundary. Although the SHRIMP library respects the true boundaries of a receive buffer, the SHRIMP hardware enforces protection on a page granularity.
Thus, if a process exports a receive buffer and a malicious process imports it, the importing process will be able, by bypassing the SHRIMP library, to send data to locations that are on the same page as the buffer but are not actually part of the buffer.
If you must communicate with a process you don't trust, you can assure absolute safety by aligning your buffer at the beginning of a page and making its size a multiple of the page size. The size of a VMMC page is returned by vmmc_PageSize().
Deliberate update is an explicit request to transfer data from anywhere in the virtual memory of a sender to a previously imported receive buffer. Deliberate update requires an explicit operation, vmmc_SendData(), to initiate communication on the sender's side. vmmc_SendData() can transfer data from any memory in the sender's address space (excluding DestSpace) to a previously imported receive buffer on a remote node. vmmc_SendData() takes as arguments an address in DestSpace, which identifies the receive buffer to be used, a local "standard" (i.e. not in DestSpace) address which identifies the data to be sent, and nwords, which gives the size of the message. If no error occurs, vmmc_SendData() returns after all data has been sent out to the network.
VMMC also provides a non-blocking variant of vmmc_SendData() called vmmc_SendDataAsync(). The non-blocking send is designed to minimize the CPU overhead required to start a data transfer.
The basic virtual memory-mapped communication model provides protected, user-level communication to move data directly from a send buffer to a receive buffer without any copying. Since it requires the sender to know the receive buffer address, it does not provide good support for connection-oriented high-level communication APIs.
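Before turning to transfer redirection, here is a sketch of a deliberate-update send through an imported proxy. The argument order (DestSpace destination address, local source address, length in words) follows the description above, but the exact signature is an assumption.

/* Sketch: deliberate-update send into an imported receive buffer (assumed signature). */
#include <stdio.h>
#include "vmmc.h"                          /* assumed header name */

int send_block(char *proxy, const char *src, unsigned nwords)
{
    /* proxy  : DestSpace address returned by vmmc_ImportRecvBuf()
     * src    : ordinary local address holding the data to send
     * nwords : message size in VMMC words (vmmc_WordSize() bytes each) */
    int rc = vmmc_SendData(proxy, (void *)src, nwords);
    if (rc < 0)
        fprintf(stderr, "vmmc_SendData failed: %d\n", rc);
    return rc;   /* on success, the data has been handed to the network */
}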
The VMMC includes a mechanism called transfer redirection. The basic idea is to use a default, redirectable receive buffer when a sender does not know the final receive buffer addresses.
Redirection is a local operation affecting only the receiving process. The sender does not have to be aware of a redirection and always sends data to the default buffer. When the data arrives at the receive side, the redirection mechanism checks to see whether a redirection address has been posted. If no redirection address has been posted, the data will be moved to the default buffer. Later, when the receiver posts the receive buffer address, the data will be copied from the default buffer to the receive buffer, as shown in Figure 3(a).
Figure 3. Transfer redirection uses a default buffer to hold data in case receiver posts no buffer address or posts it too late, and moves data directly from the network to the user buffer if the receiver posts its buffer address before the data arrives.
If the receiver posts its buffer address before the message arrives, the message will be put into the user buffer directly from the network without any copying, as shown in Figure 3(b). If the receiver posts its buffer address during message arrival, the message will be partially placed in the default buffer and partially placed in the posted receive buffer. The redirection mechanism tells the receiver exactly how much and what part of a message is redirected. When partial redirection occurs, this information allows the receiver to copy the part of the message that is placed in the default buffer.
For redirectable receive buffers, VMMC provides additional functionality to help user processes detect data arrival. When the last chunk of a message arrives, a receive-buffer-specific memory location in user space is updated with the receive-buffer offset corresponding to the last word transferred.
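A receiver might poll for arrival as sketched below using vmmc_ClearDataEnd() and vmmc_DataEnd() from the API reference. The buffer-id argument and the assumption that a cleared end-of-data index reads as zero are both guesses; only the existence of the two calls and the end-of-data semantics come from this document.

/* Sketch: polling for message arrival on a redirectable buffer using the
 * "end-of-data" word index (argument and cleared value are assumptions). */
#include "vmmc.h"                          /* assumed header name */

#define RECV_BUF_ID 1

unsigned wait_for_data(void)
{
    unsigned end;
    vmmc_ClearDataEnd(RECV_BUF_ID);        /* reset the end-of-data index */
    /* ... tell the sender it may transmit ... */
    do {
        end = vmmc_DataEnd(RECV_BUF_ID);   /* offset of the last word received so far */
    } while (end == 0);                    /* simple spin; real code would yield or back off */
    return end;
}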
There are two calls for a receiver to perform redirection on a redirectable buffer. The first is vmmc_PostRedir(), which posts a redirection address for the buffer. The second is vmmc_EndRedir(), which cancels a posted redirection and returns its status.
The transfer redirection mechanism naturally extends the basic communication model and it is very simple. It is controlled entirely by the receiver side; the sender does not need to know what the actual destination address is nor whether a transfer redirection takes place.
Our implementation allows redirection to happen at most once for each vmmc_PostRedir(). As soon as a message is redirected from a redirectable buffer to a receive buffer, the redirection is canceled automatically. We chose this design to simplify vmmc_EndRedir(); otherwise, it would have to return multiple redirection regions.
The transfer redirection is designed for a single pair of sender and receiver. This design decision is based on the fact that most connection-based high-level APIs deal with a single pair of sender and receiver per connection. To avoid interleaving messages from different senders, a redirectable receive buffer should be imported by just one sender, which is easy to enforce when the buffer is exported. If one really needs to support multiple senders and the communicating processes trust each other, a high-level protocol can be used to ensure that only one sender at a time sends a message to a redirected buffer.
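The sketch below shows how a receiver might post and then end a redirection, ahead of the implementation details that follow. The argument lists of vmmc_PostRedir() and vmmc_EndRedir() shown here are assumptions pieced together from this section (default-buffer id, destination user-buffer address, starting offset, and size); they are not the documented signatures.

/* Sketch: posting and ending a transfer redirection (argument lists are assumptions). */
#include <stdio.h>
#include "vmmc.h"                          /* assumed header name */

#define DEFAULT_BUF_ID 1                   /* the redirectable default receive buffer */

void receive_into(char *userBuf, unsigned nwords)
{
    /* Ask VMMC to place the next message directly into userBuf,
     * starting at offset 0 of the default buffer, for nwords words. */
    if (vmmc_PostRedir(DEFAULT_BUF_ID, userBuf, 0, nwords) < 0) {
        fprintf(stderr, "could not post redirection\n");
        return;
    }
    /* ... wait for the message to arrive (e.g. poll the end-of-data index) ... */

    /* Cancel the redirection and learn how much was actually redirected;
     * any part left in the default buffer must be copied out by the receiver. */
    vmmc_EndRedir(DEFAULT_BUF_ID);
}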
How much data is actually redirected depends on the relative timing of the vmmc_PostRedir() call and data arrival, assuming vmmc_EndRedir() does not interfere with the redirection. If a redirection is posted before the message arrives, the data will be redirected. If a redirection is posted after data starts arriving but before its transfer finishes, the remainder of the arriving message will be redirected.
The actual redirection registration and data structures reside in network interface memory which is mapped into the user address space. Each user process has a set of redirection data structures allocated, one for each registered redirectable buffer. To post a transfer redirection, a user process invokes the translation library to translate the destination user buffer address into an index; this process fills in the UTLB if necessary. Next, the user process fills the translation results, as well as the other arguments of the vmmc_PostRedir() call, into the redirection structure: the index, the offset, a receive buffer offset where redirection should start, and the size to redirect.
On arrival of each chunk of data from the network, the LANai control program (in the firmware) checks whether any bytes of this chunk should be redirected by consulting the redirection structure associated with the default receive buffer of the data. Data that are not redirected are moved into the default buffer. For redirection, the data chunk may be scattered into multiple subchunks because DMA from the network interface to host memory cannot cross page boundaries or redirection boundaries. Upon completion of the redirection, the LANai control program updates the redirection status by adding the number of bytes redirected. The LANai control program also cancels the redirection after processing the last chunk of the message (as indicated by a special header flag).
vmmc_EndRedir() cancels the posted redirection. If the redirection has already been canceled by the LANai, the call simply collects the redirection status from this receive buffer's redirection structure in the SRAM. If the redirection has not been canceled yet, this call sets the size in the redirection structure to zero. Since both the LANai and the host can cancel a redirection, race conditions are possible. Our implementation avoids them by using a redirectionActive flag in the redirection structure. This flag is set by the LANai and read by the host. The LANai sets the flag whenever a redirection is in progress and resets it when the transfer completes. The host can cancel the redirection only after the flag is reset. This method avoids race conditions and ensures that there will be no more transfers of data to the user buffer after the cancellation is completed.
Our implementation of transfer redirection is safe even in the presence of malicious users. Since each redirection structure is mapped by only one user, only its owner can write it. If a redirection structure's owner does not follow the protocol described above, the worst thing which can happen is that vmmc_EndRedir() returns the wrong size and location of the redirection data, which has been deposited to some random physical memory pages taken from this user's redirection TLB. However, the invariant of this TLB is that all such pages are writable by this user, so no harm can be done to other users.
When sending a message we have the choice of transferring data only, or data and control. The mechanism we use to transfer control is called a notification. Notifications are similar to UNIX signals. When a message with a notification attached arrives at the destination receive buffer, a user-level notification handler is invoked after the message data has been placed in user memory.
Handlers can be associated with receive buffers during the export operation. Each receive buffer can have zero or one handler. If a message with a notification arrives at a receive buffer with no handler attached, the notification has no effect.
vmmc_SendDataNotify() sends a message with a notification in deliberate update mode. This call takes the same arguments as vmmc_SendData().
Each handler has the same function signature (i.e. number and type of arguments). The first argument is the address of the last word of data transferred by the message that generated this notification; the second argument is the value of this word. Since VMMC continues to receive incoming messages between the arrival of a message with a notification and the call to the associated user handler, the data of a notification message can be overwritten by subsequent messages even before the handler is called. However, VMMC makes sure that the handler is called with the value of the last word as delivered by the message with the notification.
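The sketch below shows a handler with the signature described above and, in a comment, one way a handler might be attached at export time. The parameter types and the export-time registration are assumptions; only the meaning of the two handler arguments comes from this section.

/* Sketch: a notification handler (parameter types are assumptions). */
#include <stdio.h>
#include "vmmc.h"                          /* assumed header name */

/* Invoked after the message data is in user memory.
 * lastAddr : address of the last word written by the notifying message
 * lastWord : the value of that word, as carried by the message */
static void on_notify(void *lastAddr, unsigned lastWord)
{
    printf("notification: last word at %p had value %u\n", lastAddr, lastWord);
}

/* Hypothetical registration at export time; the real vmmc_ExportRecvBuf()
 * argument list for attaching handlers may differ:
 *     vmmc_ExportRecvBuf(RECV_BUF_ID, recvBuf, nbytes, on_notify);
 */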
VMMC provides two calls, vmmc_BlockNotifications() and vmmc_UnblockNotifications(), to control the delivery of notifications. Blocking notifications is useful to ensure the consistency of data structures modified by both user-level handlers and the main thread of execution. Blocking notifications is global, i.e. it affects all receive buffers of a given process. When notifications are blocked, they are queued by the system. After they are unblocked, they are delivered in order to the appropriate user-level handlers. Since there is limited space to store queued notifications, they should not be blocked for too long.
For each vmmc_BlockNotifications() there should be a call to vmmc_UnblockNotifications(). Pairs of these calls can be nested. vmmc_UnblockNotifications() unblocks notifications only if it is called at the first nesting level; in this case, the call returns a positive integer, otherwise it returns zero. To make sure that notifications are unblocked unconditionally, one can call vmmc_UnblockNotifications() in a loop until it returns a positive integer. If notifications are already unblocked, further calls to vmmc_UnblockNotifications() have no effect.
While a user-level notification handler is executing, notifications remain blocked. A notification handler should not block or spin-wait. Not all SBL calls can be used from within a notification handler. Both vmmc_BlockNotifications() and vmmc_UnblockNotifications() can be called from within the handler, provided they are paired. However, any attempt to unblock notifications unconditionally by repeated calls to vmmc_UnblockNotifications() will eventually return an error (as notifications must remain blocked within a handler).
NOTE: these calls are not implemented.
There are two calls provided to undo export and import operations. The importer of a given buffer executes vmmc_UnimportRecvBuf(). This call undoes a previous vmmc_ImportRecvBuf() call by breaking the connection to the remote receive buffer and de-allocating the local proxy memory.
vmmc_UnexportRecvBuf() undoes a previous call to vmmc_ExportRecvBuf(). All existing connections to the buffer are forcibly broken, and the buffer is made unavailable for further connections. In particular, importers of this buffer can no longer send messages to it after this call completes.
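A teardown sketch follows; the argument lists of both calls are assumptions (the unimport is shown taking the proxy address returned by the import, and the unexport taking the local buffer id).

/* Sketch: undoing an import and an export (argument lists are assumptions). */
#include "vmmc.h"                          /* assumed header name */

void teardown(char *proxy, unsigned recvBufId)
{
    vmmc_UnimportRecvBuf(proxy);           /* break the import-export link, free the local proxy */
    vmmc_UnexportRecvBuf(recvBufId);       /* forcibly break all connections to our exported buffer */
}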
VMMC API Reference
This section describes all of the VMMC calls that are available to user applications.
The calls are:
vmmc_AllHosts()
vmmc_AsyncStatus()
vmmc_BlockNotifications()
vmmc_ClearDataEnd()
vmmc_DataEnd()
vmmc_EndRedir()
vmmc_EqualNode()
vmmc_ErrorStr()
vmmc_ExportRecvBuf()
vmmc_GetData()
vmmc_GetDataAsync()
vmmc_ImportRecvBuf()
vmmc_ImportRecvBufAsync()
vmmc_ImportRecvBufStatus()
vmmc_MyHostName()
vmmc_MyNode()
vmmc_MyPid()
vmmc_NameToNode()
vmmc_NodeToName()
vmmc_PageSize()
vmmc_Parent()
vmmc_PostRedir()
vmmc_SendData()
vmmc_SendDataAsync()
vmmc_SendDataAsyncNotify()
vmmc_SendDataNotify()
vmmc_SessionHosts()
vmmc_SetDebugLevel()
vmmc_Spawn()
vmmc_UnblockNotifications()
vmmc_UnexportRecvBuf()
vmmc_UnimportRecvBuf()
vmmc_Version()
vmmc_WordSize()
VMMC Error Return Values
Most VMMC calls return an error status. A negative integer indicates an error, while zero means no error occurred. Errors can be reported with vmmc_Error(). The following errors are possible:
vmmc_AllHosts(): Returns the names of all hosts that are part of the multi-computer. This includes hosts that may not be part of the user's current session (see vmmc_SessionHosts()).
vmmc_AsyncStatus(): Returns the status of an asynchronous request.
vmmc_BlockNotifications(): Blocks the delivery of notifications.
vmmc_ClearDataEnd(): Resets the value of the "end-of-data" word index associated with a redirectable exported buffer.
vmmc_DataEnd(): Returns the value of the "end-of-data" word index associated with a redirectable exported buffer.
vmmc_EndRedir(): Terminates redirection on a specified buffer.
vmmc_EqualNode(): Determines if two VMMC nodes are the same.
vmmc_ErrorStr(): Returns a string describing a VMMC error code.
vmmc_ExportRecvBuf(): Exports a receive buffer.
vmmc_GetData(): Gets data from a remote buffer.
vmmc_GetDataAsync(): Asynchronously gets data from a remote buffer.
vmmc_ImportRecvBuf(): Imports a receive buffer.
vmmc_ImportRecvBufAsync(): Asynchronously imports a receive buffer.
vmmc_ImportRecvBufStatus(): Returns the status of an asynchronous import request.
vmmc_MyHostName(): Returns the host name of the calling process's node.
vmmc_MyNode(): Returns the node id of the calling process.
vmmc_MyPid(): Returns the squid of the calling process.
vmmc_NameToNode(): Converts a host name to a node id.
vmmc_NodeToName(): Converts a node id to a host name.
vmmc_PageSize(): Returns the size of a VMMC page in bytes (currently 4096).
vmmc_Parent(): Returns the squid and node of the parent process.
vmmc_PostRedir(): Posts a transfer redirection on a redirectable buffer.
vmmc_SendData(): Sends a message with deliberate update. Note: page faults are possible if sendAddr or destAddr do not correspond to valid addresses.
vmmc_SendDataAsync(): Sends a message with deliberate update, without blocking. Note: the send buffer cannot be reused until the message completion status is checked with vmmc_AsyncSendStatus(). Page faults are possible if sendAddr or destAddr do not correspond to valid addresses.
vmmc_SendDataAsyncNotify(): Sends a message with deliberate update and notification, without blocking. Note: the send buffer cannot be reused until the message completion status is checked with vmmc_AsyncSendStatus(). Page faults are possible if sendAddr or destAddr do not correspond to valid addresses.
vmmc_SendDataNotify(): Sends a deliberate update message with notification. Note: page faults are possible if sendAddr or destAddr do not correspond to valid addresses.
vmmc_SessionHosts(): Returns the names of the hosts in the user's current session.
vmmc_SetDebugLevel(): Sets the debug level.
vmmc_Spawn(): Spawns a process.
vmmc_UnblockNotifications(): Conditionally unblocks the delivery of notifications.
Example:
The following loop unconditionally unblocks notifications (with an error set if it is called inside a handler):

int status;
while ((status = vmmc_UnblockNotifications()) == 0)
    ;
if (status < 0)
    vmmc_Error(status, "unconditional unblocking");
vmmc_UnexportRecvBuf(): Unexports a receive buffer.
vmmc_UnimportRecvBuf(): Unimports a receive buffer.
vmmc_Version(): Returns the VMMC version.
vmmc_WordSize(): Returns the size of a VMMC word in bytes (currently 4).