Why use U* over VMS

terryl at sail.LABS.TEK.COM
Wed Oct 31 06:10:13 AEST 1990


In article <1809.272c3135 at dcs.simpact.com> kquick at dcs.simpact.com (Kevin Quick, Simpact Assoc., Inc.) writes:
+Drivers:
+--------
+
+Because the OS's are considerably different, driver writing is as well.  A
+driver is, by definition, a three-way bridge between the operating system,
+a device, and the application program.  If you are writing a device driver,
+there are several significant differences to be aware of.  My general
+impression is that the Unix environment for a driver is much simpler and
+therefore easier to write to, whereas the VMS environment is more
+complicated, but provides better tools.
+
+1. VMS has the concept of allocatable virtual memory, which may be obtained
+   by calling system routines; most VMS device drivers use this technique
+   for buffering data, etc.
+
+   Unix (usually) also has the concept of allocatable virtual memory (implying
+   non-pageable, kernel space), but few Unix drivers actually use this
+   technique.  The Unix drivers (that I've seen) usually pre-allocate a large
+   chunk of memory at compile time and use that memory as needed.
+
+   The problem arises in that, while VMS returns an error indicating when
+   no memory is available, Unix simply "sleeps".  This is effectively a
+   suspending of the current process until memory becomes available.
+   Unfortunately, the VMS driver does not depend on process context to
+   execute, which brings us to point 2:
+
+2. In Unix, when an application issues a driver request, the PC is transferred
+   to the appropriate driver routine in kernel and remains there, in execution,
+   until the request completes, at which time the routine exits and the user
+   level code resumes.  There is no Unix implementation of "no-wait",
+   "asynchronous",  or "background" IO.
+
+   In VMS, the application issues a driver request.  That process is then
+   placed in a wait state after a VMS structure is initialized to describe
+   the request.  That structure is then passed through several parts of the
+   device driver in several stages and interrupt levels to accomplish the
+   request.  Each intermediary routine is free to do its work and then exit
+   which returns to the OS.  When the user's request is finally completed,
+   a specific "wakeup" is issued to the process with an output status.

    Actually, no, the Unix driver does NOT "remain in execution" in the driver
until the request completes. For disk drivers, as an example, what happens is
that a request for a transfer is queued, and then the higher level OS code
will wait for the transfer to complete, thus giving up the processor to another
process so it may run. While it is true that some device drivers may do a busy
wait for some command to complete while in the driver, these are usually
commands that are known to complete in a very short amount of time (like
clearing error conditions), and they are the exception, not the rule.

     As for the "no-wait", "asynchronous", or "background" IO: at the user
level, yes, that is true, but at the kernel level it is possible to do this.

+3. Everything except interrupt handlers in Unix are written in "user"
+   context, whereas only the preliminary portion of a VMS driver (the
+   FDT routines) are in user context.
+
+   This means that all buffer validation and copying from the user space
+   must be done from the FDT routines in VMS; copyout of data after the
+   request completes is done by VMS based on addresses and flags in the
+   structure describing the request.
+
+   This also means that, whereas a Unix driver has a relatively
+   straightforward code flow, with perhaps a sleep to await later
+   rescheduling, the VMS environment is more versatile at the cost of a
+   much more complex code flow.
+
+   Care must be taken in a VMS driver not to access user level data and
+   such from most of the driver, whereas a Unix driver must insure user
+   context for kernel memory allocation, sleep requests, and much more.
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

     I'm a little confused here on what you call "user" context. To me, a
"user" context consists of a transfer direction (e.g. a read or a write), a
transfer count, a starting block number, and a buffer address (to use my
previous disk analogy). That's it; nothing more, nothing less. Also, your
comment that Unix "must insure user context for kernel memory allocation,
sleep requests, and much more" is a little cryptic. All the driver has to
do is validate the user's buffer address, and check that the transfer is
valid with respect to the disk geometry. Now Unix does provide both
direct-from-user-space I/O, and also transfers from the internal kernel
buffers into the user-provided buffer address, but I'm still not sure what
you mean by the above quoted remarks. For transfers into/out of the internal
kernel buffers mentioned above, the user context doesn't even come into
play; it is taken care of at a higher level. For transfers directly into/out
of the user's buffer address, again, most of that is taken care of at a
higher level. By the time it gets down to the driver level, all the driver
sees is a transfer direction, a transfer count, a starting block, and a
buffer address. As far as the driver is concerned, there isn't much of a
distinction between a kernel buffer address and a user buffer address.


