Shared Lib Question (ISC)

Alex Goykhman goykhman_a at apollo.HP.COM
Thu May 23 08:22:00 AEST 1991


Reply-To: goykhman_a at apollo.HP.COM (Alex Goykhman)
Organization: Hewlett-Packard Apollo Division - Chelmsford, MA

In article <7616 at segue.segue.com> jim at segue.segue.com (Jim Balter) writes:
>In article <519a6ad6.20b6d at apollo.HP.COM> goykhman_a at apollo.HP.COM (Alex Goykhman) writes:
>>    Fclose(NULL) is only defined within the context of a single process; all it 
>>    needs to do is to go through the per-process fd table and close every fd that
>>    remains open.  It should be easy to implement this call as a part of a shared
>>    library, and I am not sure what kind of overhead you are referring to.
>
>The subject at hand was global data, not ease of implementation.  In order to
>implement fclose(NULL), there must be a global pointer to the head of a list of
>FILE's, or a global table of FILE's.  Of course, given the global data, the
>implementation is trivial.  The fd table is in the u-structure and isn't really
>relevant to a discussion of stdio routines (unless you want to provide a system
>call to allow a shared library to access global data saved in the u-structure;
>a conceptually interesting but non-pragmatic approach).  Note that I brought up
>fclose(NULL) because it is contrary to jfh's point about pointers to state info
>(FILE *) being explicitly passed to stdio routines.
>
>The overhead referred to is the overhead of a wrapper routine to pass the
>global data (maintained per-process) to the "real" routine in the shared
>library.  This was all pretty evident from a careful reading of the thread.
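
    For concreteness, the kind of global data being described would look roughly
    like the sketch below.  The names (__stream_head, close_all_streams) are
    invented for illustration; no particular stdio implementation is implied.

        /* A process-wide list of open streams and a routine that closes
         * them all -- the behaviour attributed to fclose(NULL) above.
         * Illustrative sketch only.
         */
        #include <stdio.h>

        struct stream_node {
            FILE *fp;
            struct stream_node *next;
        };

        static struct stream_node *__stream_head;    /* per-process global */

        int close_all_streams(void)
        {
            struct stream_node *p;
            int ret = 0;

            for (p = __stream_head; p != NULL; p = p->next)
                if (p->fp != NULL && fclose(p->fp) == EOF)
                    ret = EOF;
            __stream_head = NULL;
            return ret;
        }

    Without some such global (a list head or a table of FILEs), a routine that
    takes no FILE * argument has nothing to iterate over.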

    What I was getting at is this: the issue is not the presence of "global"
    data vs. the lack of it, nor is it "pure" vs. "impure" routines.

    The issue here is static vs. dynamic linking.  In other words, the issue is
    whether external address references are resolved the old-fashioned way,
    by statically linking everything into one big happy executable, or whether
    they are resolved at run time via a dynamic-linking (call it "shared
    library") mechanism, or via a combination of both.
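
    On a system that provides a dlopen(3)-style interface, run-time resolution
    of an external reference looks roughly like this ("libm.so" and "sqrt" are
    arbitrary stand-ins for any shared object and any symbol):

        /* Resolving a reference at run time through a dlopen(3)-style
         * interface.  Illustrative only.
         */
        #include <stdio.h>
        #include <dlfcn.h>

        int main(void)
        {
            void *handle = dlopen("libm.so", RTLD_LAZY);
            double (*sq)(double);

            if (handle == NULL)
                return 1;
            sq = (double (*)(double)) dlsym(handle, "sqrt");
            if (sq != NULL)
                printf("sqrt(2.0) = %f\n", sq(2.0));
            dlclose(handle);
            return 0;
        }

    The ordinary shared-library case is the same mechanism applied implicitly
    by the dynamic linker at program start-up, with no change to the source.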

    It should not matter whether a routine is in a shared library or not, as long
    as every external reference the routine employs is resolved to the same value
    in both cases.  In fact, there is no reason for a routine to know whether it
    is part of a shared library.  While a particular shared-library mechanism is
    bound to be influenced by the underlying hardware, I can't think of a modern
    computer architecture that would dictate individual "wrappers" for "impure"
    shared library routines.
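
    The per-routine "wrapper" arrangement referred to above would look something
    like the sketch below (every name in it -- __stdio_state, __real_fputs,
    wrapped_fputs -- is made up for the example).  The extra stub call is the
    overhead in question.

        /* A statically linked stub that hands the per-process state to
         * the "real" routine in the shared library.  Sketch only.
         */
        #include <stdio.h>

        struct stdio_state {
            FILE *streams[FOPEN_MAX];      /* stand-in for the global data */
        };

        static struct stdio_state __stdio_state;   /* one per process */

        /* the routine that would actually live in the shared library */
        static int __real_fputs(struct stdio_state *st, const char *s, FILE *fp)
        {
            (void) st;                     /* state would be consulted here */
            return fputs(s, fp);
        }

        /* the stub every caller is statically linked against; in the real
         * scheme it would simply be named fputs
         */
        int wrapped_fputs(const char *s, FILE *fp)
        {
            return __real_fputs(&__stdio_state, s, fp);
        }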
    
    As to overhead, every time we move from a compiler (linker) to an
    interpreter (a shared-library mechanism), we trade speed for memory.
    Given the current state of technology, where CPUs are already well ahead
    of memory and getting more so as compilers (RISC) replace interpreters
    (CISC), shared libraries can play an important role in balancing a system's
    workload and thereby increasing its throughput.


