"bad ulimit" on all non-root "at" & "batch" jobs under 386/ix

James Van Artsdalen james at bigtex.cactus.org
Tue Feb 6 13:54:09 AEST 1990


In <1990Feb3.003112.22012 at virtech.uucp>, cpcahil at virtech.UUCP
	(Conor P. Cahill) wrote:

> In article <28606 at bigtex.cactus.org> james at bigtex.cactus.org (me) wrote:

| If the AT&T programmer had thought correctly, they would have realized
| that the correct thing to do would be to (1) count blocks actually
| allocated so that a program would really be allowed to write its full
| ulimit.

> To implement this change they would have to do one of the following:
> 
> 	1. change the file system inode so that it now has a slot to 
> 	   keep the "real" size of the file (# of used datablocks).
> 	   and make the appropriate kernel stuff to keep track of this
> 	   stuff and update the disk inode.

This sounds like much too much effort.  When I said I didn't care how
big the files were, just how much was written, I meant it: file size
has nothing to do with limiting a user's disk consumption.

Keep a vector, indexed on uid, of how many blocks each uid has
allocated, incrementing the count each time a new block is allocated
and decrementing whenever blocks are freed, as on unlink.  I'll go
out on a limb, not having looked at the source, but I'll bet there is
one place in the generic write(2) and unlink(2) code that can do this
(arguably directory files should be handled too).  There is no need
to muck with the inode structure.  You do need a way to initialize
this structure, via login(1) I suppose.  Gee, this sounds more and
more like quota.
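
To make the bookkeeping concrete, here is a minimal userspace sketch
of that vector.  MAXUID, the table names, and the hook functions are
hypothetical stand-ins for the single spots in the kernel's write(2)
and unlink(2) paths where data blocks are allocated and freed.

/*
 * Sketch of per-uid block accounting.  MAXUID, the two tables, and
 * the hook functions are hypothetical; in the kernel these calls
 * would live where write(2) allocates and unlink(2) frees blocks.
 */
#include <sys/types.h>
#include <stdio.h>

#define MAXUID 1024                 /* hypothetical table size */

static long blocks_used[MAXUID];    /* blocks held, per uid */
static long blocks_limit[MAXUID];   /* per-uid limit, set by login(1) */

/* Call wherever a data block is allocated to a user's file. */
int block_alloc_hook(uid_t uid)
{
    if (blocks_used[uid] >= blocks_limit[uid])
        return -1;                  /* over limit: refuse allocation */
    blocks_used[uid]++;
    return 0;
}

/* Call wherever a data block is freed, e.g. from unlink(2). */
void block_free_hook(uid_t uid)
{
    if (blocks_used[uid] > 0)
        blocks_used[uid]--;
}

/* Initialization, as login(1) might do it. */
void set_block_limit(uid_t uid, long limit)
{
    blocks_limit[uid] = limit;
    blocks_used[uid] = 0;
}

int main(void)
{
    set_block_limit(100, 2);                /* uid 100: 2 blocks max */
    printf("%d\n", block_alloc_hook(100));  /* 0  (1st block ok)     */
    printf("%d\n", block_alloc_hook(100));  /* 0  (2nd block ok)     */
    printf("%d\n", block_alloc_hook(100));  /* -1 (over limit)       */
    block_free_hook(100);
    printf("%d\n", block_alloc_hook(100));  /* 0  (ok after a free)  */
    return 0;
}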
-- 
James R. Van Artsdalen          james at bigtex.cactus.org   "Live Free or Die"
Dell Computer Co    9505 Arboretum Blvd Austin TX 78759         512-338-8789


