fast, portable, flexible replacement for malloc

Piercarlo Grandi pcg at aber-cs.UUCP
Fri Mar 17 04:19:13 AEST 1989


Appended you will find a shell archive with a very sophisticated and
efficient store allocation library, a clone of malloc.

I designed it to replace the bankrupt malloc(3) and malloc(3x) in System 5.2
for the 286. It is especially suited to segmented machines in general,
and has a layered internal structure that ought to help with portability.

Performance is excellent: while larger than malloc(3x), it is typically
faster than most other alternatives, in particular on segmented machines
with a swapping kernel, where it tries to minimize interactions with the
kernel and swapping.

It has been written quite carefully as to style; I hope you like it,
even if it is unconventional. It is very disciplined and ordered. I have
put comments wherever something unobvious is going on; there are a lot
of comments, as there are a lot of subtle considerations.

It is reliable. I have not found bugs in it for a while.

Finally, a bit of history. An earlier version of this library was
posted to BIX; this version has a completely restructured and simplified
interface to the kernel, which removes a subtle bug that had nagged me in
the version posted to BIX.

Even more finally :-], a disclaimer: in no way has the University College of
Wales, my employer, supported or helped the development of this program.
It has been developed entirely on my own time with my own resources, is
an adjunct to my own personal research, and has nothing to do with the
research of the University College of Wales.

I thank the University College of Wales for allowing me to
post this program through the use of their computer resources.

----------------- cut here ----------------
#! /bin/sh
# This is a shell archive, meaning:
# 1. Remove everything above the #! /bin/sh line.
# 2. Save the resulting text in a file.
# 3. Execute the file with /bin/sh (not csh) to create the files:
#	malloc.3
#	malloc.c
#	malloc.h
# This archive created: Thu Mar 16 15:30:04 1989
export PATH; PATH=/bin:$PATH
if test -f 'malloc.3'
then
	echo shar: will not over-write existing file "'malloc.3'"
else
cat << \SHAR_EOF > 'malloc.3'
.TH MALLOC 3
.SH NAME
malloc \- memory allocation library (-lmalloc)
.SH SYNOPSIS
.nf
.B "#include <malloc.h>"

.BI "char *malloc(" bytes ");"
.BI "unsigned " bytes ";"

.BI "char *realloc(" block "," bytes ");"
.BI "char *" block ";"
.BI "unsigned " bytes ";"

.BI "void free(" block ");"
.BI "char *" block ";"

.BI "char *calloc(" count "," bytes ");"
.BI "unsigned " count ";"
.BI "unsigned " bytes ";"

.BI "void cfree(" block ");"
.BI "char *" block ";"

.BI "int mallopt(" option "," value1 "," value2 ");"
.BI "int " option ";"
.BI "unsigned " value1 "," value2 ";"
.fi
.SH DESCRIPTION
This is a library that allocates and frees memory blocks. It was designed
to be compatible with the old malloc library, and its newer version from
Berkeley.
.PP
Its main advantages are that it is very efficient, in space and in time (it
uses a carefully coded next fit boundary tag method with coalescing at free
time), and that it works especially well with segmented architecture machines,
on which other allocation libraries are not very good.
.PP
It also has the rare feature of releasing to the operating system some of
the allocated memory when possible, and this means that programs using this
library can expand and contract their use of memory according to effective
usage, especially if they tend to allocate memory in a stacklike fashion.
This is a big win on swapping machines.
.IP "\fBmalloc()\fP"
will return a pointer to a memory block of at least the given number of
bytes.  The block actually allocated may be somewhat larger, depending
on tuning parameters. For each block allocated the overhead is two
pointers (near pointers on a segmented machine). If memory could not be
allocated, the null pointer is returned.
.IP "\fBrealloc()\fP"
will change the size of the given block to the new number of bytes. It will
return the new address of that block, as the block may be moved in memory to
find it a sufficiently large slot. In any case the contents of the block are
preserved, up to the smaller of the new and old sizes. If reallocation was
not possible, the null pointer is returned.
.IP "\fBfree()\fP"
returns
.B immediately
to the pool of available storage the given block. Since coalescing of blocks
is performed at freeing time, this procedure will immediately clobber the
contents of the block, and is thus incompatible with some fairly obnoxious
practices (like reallocating a just freed block) from the past.
.IP "\fBcalloc()\fP"
will return a block capable of holding an array with the given count
of elements each of the given size. The block is initialized to binary
zeros. If the block could not be allocated, the null pointer is returned.
.IP "\fBcfree()\fP"
is exactly the same as
.BR free() ,
and is provided only because older versions of the library did have it
to pair with
.BR calloc() .
.IP "\fBmallopt()\fP"
allows specifying some tuning parameters of the library. Its syntax is
virtually compatible with that of the Berkeley allocator, but the semantics
have been changed slightly. The first parameter may be either
.RS
.IP "\fBM_MXFAST\fP"
The first value after the option is the core allocator's
minimum expansion, and the second value is its minimum contraction. In other
words, when core expansion must be requested from the operating system, at
least the first value's worth of bytes will be requested, and when memory
is to be released to the operating system, at least the second value's
worth of bytes will be released. If either value is the integer
.B -1
the corresponding watermark is not changed; as a special case, if the second
value is zero, releasing of core to the operating system is disabled.
.IP "\fBM_GRAIN\fP"
The first value after the option will be taken as the minimum block size
to allocate; all blocks of smaller size will be rounded up to (at least)
this size.
.IP "\fINOTE\fP"
Other values for the option are recognized but then rejected. These are
nearly compatible with the Berkeley library semantics, but not entirely. Be
careful. In particular, option
.B M_KEEP
will always be rejected, even though it is defined in the header file.
.RE

This procedure will return zero if the option and values were valid, and
nonzero otherwise.
.IP "\fBmallinfo()\fP"
In this library this procedure does absolutely nothing, and is provided
only for syntactic compatibility with other versions.
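.SH EXAMPLE
What follows is a plausible usage sketch, not a prescription; the
watermark and grain values passed to
.B mallopt()
are purely illustrative. Note that all tuning must precede the first
allocation.
.nf

	#include <malloc.h>

	main()
	{
	    char    *p;

	    if (mallopt(M_MXFAST,8192,16384) != 0)
		exit(1);
	    if (mallopt(M_GRAIN,16,-1) != 0)
		exit(1);

	    p = malloc(100);
	    if (p == (char *) 0)
		exit(1);
	    p = realloc(p,200);
	    if (p == (char *) 0)
		exit(1);
	    free(p);
	    exit(0);
	}
.fi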
.SH "SEE ALSO"
.IR brk (2),
.IR malloc (3X) .
.SH BUGS
Some of the code that deals with optimizing some cases of
.B free()
or
.B realloc()
is probably too convoluted. Compile time options exist to ignore it,
probably at small cost.
SHAR_EOF
fi # end of overwriting check
if test -f 'malloc.c'
then
	echo shar: will not over-write existing file "'malloc.c'"
else
cat << \SHAR_EOF > 'malloc.c'
/*
    $Header: /aware0/aware/piercarl/Src./Lib./Block./Block.c,v 3.3 89/03/15 17:36:54 piercarl Exp $
*/

static char Notice[] =
    "Copyright (C) 1987,1989 Piercarlo Grandi. All rights reserved.";

/*
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the  Free Software Foundation; either version 1, or (at your option)
    any later version.

    This  program is distributed in the hope that it will be useful, but
    WITHOUT   ANY   WARRANTY;  without  even  the  implied  warranty  of
    MERCHANTABILITY  or  FITNESS  FOR  A PARTICULAR PURPOSE. See the GNU
    General Public License for more details.

    You  should  have  received a copy of the GNU General Public License
    along  with  this  program;  if  not,  write  to  the  Free Software
    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/

/*
    An introduction to this program.

    This is a library of core allocation/deallocation functions, similar
    in  scope  to  the  malloc(3) functions of UNIX. This is (hopefully)
    totally  machine  independent, is very easily configurable, has many
    optimizations, and care has been taken that the most common case has
    a short path.

    Among   unusual   features   are   deallocation   of  unused  store,
    minimization  of copying when reallocation is requested, unification
    of  the  allocate and reallocate functions, easy hiding of os dependent
    code  in a couple of procedures, adaptability to 2D address machines
    with nearly optimal exploitation of segments.

    Extensive  debugging  and  tracing  have been provided. Three things
    remain to be done for a truly professional library:

	    1)  some functions are expanded inline (e.g. DEQUEUE) -- the
	    installer ought to be able to option between this and normal
	    procedures,  for  machines where space is tight or procedure
	    calling is fast.

	    2) statistics gathering (counts of requests by type, of free
	    list  searches,  of collapses, sum of bytes created/deleted,
	    etc...).

	    3) breaking this long (> 1000 lines) file into smaller ones,
	    with   one  or  two  procedures  at  most  for  each  (prime
	    candidates are a Zone.c file and an Area.c one).

    This  library has been tested, but not truly extensively -- you may
    find a few remaining bugs.
*/

/*
    To conserve columns in declarations, else things like "register struct
    StructName" are too long.
*/
#define reg		register

#define skip		/* null statement */

/*
    Following Dijkstra -- elseless if (post is pre negated).

	example: were (i < 0) i = -i;

    NB:  in  the  second  case  (condition) had better not have any side
    effects... after all the (condition) is SUPPOSED to be tested twice!

    Under   ATT   S5.2/286  the  second  formulation  gives  a  spurious
    "statement  not  reached"  diagnostic at compile time if the body of
    the were is a return.
*/

#if (defined(NDEBUG))
#   define were(condition)	if (!(condition)) skip; else
#else
#   define were(condition)	for (; (condition); (condition) && abort())
#endif
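
/*
    An  illustrative  expansion:  this very use occurs further down in
    ZoneCreate(),

	were (area == PtrNIL)
	    return BlockNIL;

    and  becomes, under NDEBUG, an if whose else arm is the return; in
    a  debugging  build the for() form retests the condition after the
    body,  and  calls abort() if the body fell through (i.e. it was not
    a return) while the condition still held.
*/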

/*
    If  you  define  a  macro  as a {} block, you will not be able to put a
    semicolon  after  its invocation, if that is the then part of an if.
    Use instead these two and you will be home safe.
*/
#define begindef	do {
#define enddef		} while(0)

/*
    These  allow  you to avoid duplicating a declaration of an entity in
    your library that you want to export. You prepare a .h that contains
    the declarations, using public. Under UNIX the given null definition
    works  both in the library and in its client, because the UNIX ld(1)
    puts such variables in common.

    With  another os or linker you shall provide for public to expand to
    extern in the client and to nothing in the library.
*/

#define private		static
#define public		/* extern */

/*
    Here  we assume we are on a two's complement machine. Suppose that n
    in  Bit1M2TO  is  the  number  of bits in a word; then Bit2TO(n) will
    overflow to ZERO, and 0-1 will be n ones, as required.
*/

#define Bit2TO(n)	(1<<(n))		/* 2^n (n+1 bits wide)	*/
#define Bit1M2TO(n)	(Bit2TO(n)-1)		/* (2^n)-1 (n bits wide)*/
#define BitMODULUS(i,n)	((i) & Bit1M2TO(n))	/* i%(2^n)		*/
#define BitMULTIPLE(i,n)(1 + (Bit1M2TO(n)|((i)-1)))
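
/*
    A  worked  example of the macros above, taking n == 2, i.e. a four
    byte granule:

	Bit2TO(2)	    == 4
	Bit1M2TO(2)	    == 3	(binary ...0011)
	BitMODULUS(13,2)    == 1	(13 % 4)
	BitMULTIPLE(13,2)   == 16	(13 rounded up to a multiple of 4)
	BitMULTIPLE(12,2)   == 12	(a multiple already, unchanged)
*/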

/*
    This  set  of  definitions allows any program to be fully addressing
    strategy  independent:  addressing  is expected to be bidimensional;
    each address has two fields (they need not have any special place in
    the  address,  not  even  contiguous), one identifies a segment, the
    other an offset within it. Arithmetic can only be done on the offset
    portion, and the id portion takes no part in it.

    On   monodimensional   address   machines   we  use  the  very  same
    definitions,  but  the  id  portion  is  in  effect  missing from an
    address,  so  we  fake it as always being 0, and the offset actually
    takes  up all bits in the address (unless of course it is a 370 or a
    MC68010 or suchlike that use only 24 of their 32 address bits).

    Mnemonic  rule for X,Y : X is the segment indeX number, and Y is the
    bYte number.
*/

#if (defined(iAPX286))
    typedef char		*Ptr;		/* Generic pointer	*/
#   define PtrNIL		((Ptr) 0)	/* Distinguished pointer*/

#   if (defined(LARGE_M))
	typedef long unsigned	PtrW;		/* Pointer as binary word*/
#	define PtrBITS		32		/* Bits in pointer	*/

	typedef short unsigned 	PtrX;		/* Row or segment number*/
	typedef short unsigned  PtrY;		/* Column or byte number*/
#	define PtrXBITS		16		/* May also be 13 ?	*/
#	define PtrYBITS		16

#	define PtrTO(x,y)	((Ptr) ((((PtrW) (x)) << PtrYBITS) | (y)))
#	define PtrXOF(ptr)	((PtrX) (((PtrW) (ptr)) >> PtrYBITS))
#	define PtrYOF(ptr)	((PtrY) (PtrW) (ptr))
#	if (defined(undef)) /* lvalue X and Y -- highly questionable */
#	    define PtrXIN(ptr)	(((PtrX *) &(ptr))[1])
#	    define PtrYIN(ptr)	(((PtrY *) &(ptr))[0])
#	endif
#   endif

#   if (defined(SMALL_M))
	typedef short unsigned	PtrW;		/* Pointer as binary word*/
#	define PtrBITS		16		/* Bits in pointer	*/

	typedef short unsigned 	PtrX;		/* Row or segment number*/
	typedef short unsigned  PtrY;		/* Column or byte number*/
#	define PtrXBITS		0
#	define PtrYBITS		16

#	define PtrTO(x,y)	((Ptr) (y))
#	define PtrXOF(ptr)	((PtrX) 0)
#	define PtrYOF(ptr)	((PtrY) (ptr))
#	if (defined(undef)) /* lvalue X and Y -- highly questionable */
	    PtrX		PtrXDummy;	/* Fakes assignments	*/
#	    define PtrXIN(ptr)	PtrXDummy
#	    define PtrYIN(ptr)	(((PtrY *) &(ptr))[0])
#	endif
#   endif

#endif

#if (PtrXBITS == 0)
#   define Ptr1D	1
#else
#   define Ptr1D	0
#endif

/*
    Given  a  pointer  to  a  field  of  the  specified  structure  type,
    returns the address of the whole structure instance. Very useful for
    having  records  linked  in more than one list or tree, or where the
    link cannot be the first field in a struct.

    NB: This definition is "perfectly" portable.
*/

#define offsetof(struct,field)						\
	((PtrY) ((Ptr)  &((struct *) 8)->field - (Ptr) ((struct *) 8)))

#define structof(ptr,struct,field)					\
	((struct *) ((PtrW) ((Ptr) (ptr)) - offsetof(struct,field)))
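
/*
    For  example,  given  the  BlockHead structure defined below, the
    header  enclosing  a  body pointer that was handed to a client is
    recovered with

	head = structof(bodyptr,struct BlockHead,BhBODY);

    which  is  precisely  what  free() and realloc() do at the bottom
    of this file.
*/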

/* Parameters and includes for this library */

#if (defined(iAPX286))
#   define CoreCOPY(from,to,bytes)	memcpy((to),(from),(bytes))
#endif

#if (defined(iAPX286))
#   define CoreFILL(where,value,bytes)	memset((where),(value),(bytes))
#endif

#if (!defined(BlockDEBUG))
#   define BlockDEBUG		0
#endif

#if (!defined(BlockPRINT))
#   define BlockPRINT		0
#endif

#if (BlockDEBUG)
#   define static
#   undef BlockPRINT
#   define BlockPRINT		1
#endif

#include "assert.h"

#if (BlockPRINT)
#   include "stdio.h"
#endif

/*
    GRANULE  is  the  granularity of allocation; all blocks returned
    must be multiples of this size (and their addresses too).

    EXTEND  is  the  minimum  extension  to  the  break we request; what
    remains  after  having  satisfied  the current request is left free.

    RELEASE, if defined, is the minimum amount of free memory to release
    to the operating system by bringing down the break.

    TRYLOWER  and  TRYHIGHER  are  flags that control the work of Create
    when  it  must  resize a block, and the new size is greater than the
    current  size.  With  TRYLOWER  Create will collapse it to the lower
    block  if  free and useful, and with TRYHIGHER will do the same with
    the  higher block. If both are defined, both lower and higher blocks
    will be collapsed into the current block if this creates a new block
    of sufficient size.

    RELEASE,  TRYLOWER,  and  TRYHIGHER have been provided so that it is
    possible  to  exclude from compilation seldom used sections of code,
    to make it smaller.
*/

/*
    This is a default value for a runtime quantity
*/

#if  (!defined(AreaEXTEND))
#   define AreaEXTEND		2048		/* Multiple of 2^GRANULE*/
#endif

/*
    This is a default value for a runtime quantity, except that if 0 the
    relevant code is not compiled.
*/

#if  (!defined(AreaRELEASE))
#   define AreaRELEASE		4096		/* Multiple of 2^GRANULE*/
#endif

/*
    These define compile time values
*/

#if (!defined(BlockGRANULE))
#   define BlockGRANULE		2		/* Base 2 logarithm	*/
#endif

#if  (!defined(BlockNEW))
#   define BlockNEW		defined(iAPX286)/* Boolean		*/
#endif

#if  (!defined(BlockTRYLOWER))
#   define BlockTRYLOWER	1		/* Boolean		*/
#endif

#if  (!defined(BlockTRYHIGHER))
#   define BlockTRYHIGHER	1		/* Boolean		*/
#endif

/*
    Each  block  is  linked in one or two lists. All blocks are linked
    in  an  ascending memory address list; free blocks are also linked in
    one free list. Both lists are doubly linked.

    The  ascending  memory, fundamental, list links are contained in the
    first  two  fields of a block, as the address of the previous block,
    and  the size of the current; macros exist to compute the address of
    the next block. The size is guaranteed to be a multiple of some power
    of  two, so that the lower bits of the size field ought to be always
    zero.  We  use the least significant bit to distinguish between busy
    and free blocks.

    The  body  of  a block contains a minimum of sizeof (union BlockBody)
    bytes, or the two free list links if the block is free.
*/

#if  (BlockGRANULE  < 1)
#   include "ERROR: block addresses must be even, for flagging"
#endif

#if (PtrYBITS  < BlockGRANULE)
#   include "ERROR: the seg size MUST be a multiple of the granule size"
#endif

/*
    Notice  that  the free list links are full pointers, because the
    free list is the same for all segments.
*/

struct BlockFree
{ 
    struct BlockHead		    *BfBefore; 
    struct BlockHead		    *BfAfter; 
};

#define BlockNIL		((struct BlockHead *) PtrNIL)

/*
    Notice  that the "pointers" to contiguous blocks are just the in
    segment  byte  offset,  as  blocks  in  different segments cannot be
    collapsed  together,  i.e.,  they  are  not contiguous by definition.
*/

struct BlockHead
{  
    PtrY		    BhLower;  
    PtrY		    BhHigher;
#   define			BlockBUSY   ((PtrY) 1)
#   define			BlockHIGH   (~(PtrY) 1)

    union BlockBody
    {
	long unsigned	    BbData; 
	struct BlockFree    BbLink; 
    }
			BhBody;
#   define		    BhBODY	    BhBody.BbData
#   define		    BhBEFORE	    BhBody.BbLink.BfBefore
#   define		    BhAFTER	    BhBody.BbLink.BfAfter
};

#define BlockSMALL	    (sizeof (struct BlockHead))
#define BlockEXTRA	    (sizeof (struct BlockHead) - sizeof (union BlockBody))

#define BlockISFREE(block)  (((block)->BhHigher & BlockBUSY) != BlockBUSY)

/*
  We have SIZE and FSIZE, HIGHER and FHIGHER, because when we KNOW that
  the block is free we can omit masking BhHigher.
*/

#define BlockSIZE(block)    (((block)->BhHigher&BlockHIGH) - PtrYOF(block))
#define BlockFSIZE(block)   ((block)->BhHigher - PtrYOF(block))

#define BlockLOWER(block)						\
    ((struct BlockHead *) PtrTO(PtrXOF(block),(block)->BhLower))
#define BlockHIGHER(block) \
    ((struct BlockHead *) PtrTO(PtrXOF(block),((block)->BhHigher&BlockHIGH)))
#define BlockFHIGHER(block) \
    ((struct BlockHead *) PtrTO(PtrXOF(block),(block)->BhHigher))
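
/*
    A  sketch  of  two contiguous blocks in one segment, to show what
    the macros above compute (all offsets are illustrative):

	0x0100:	BhLower  = 0x00c0		the previous header
		BhHigher = 0x0140|BlockBUSY	busy; BlockSIZE is 0x40
		... client data ...
	0x0140:	BhLower  = 0x0100
		BhHigher = 0x01a0		free; BlockFSIZE is 0x60
		BfBefore/BfAfter		free list links

    BlockHIGHER  of  the  busy block masks BUSY off and yields 0x0140;
    sizes include the header, and the free block needs no masking.
*/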

/*
    Collapse two contiguous blocks on the ascending address list into one,
    the lower one (of course...). Both blocks are assumed to be free...
*/

#define BlockCOLLAPSE(lower,higher)					\
begindef								\
    assert(BlockISFREE(lower));						\
    assert(BlockISFREE(higher));					\
    BlockFHIGHER(higher)->BhLower = PtrYOF(lower);			\
    (lower)->BhHigher = (higher)->BhHigher;				\
enddef

/*
    This  splits  a  block, denoted by lower, into a lower piece and a higher
    piece with the given sizes.
*/

#define BlockSPLIT(lower,lowersize,higher,highersize)			\
begindef								\
    assert(BlockISFREE(lower));						\
    assert(BlockFSIZE(lower) == ((lowersize)+(highersize)));		\
    (lower)->BhHigher = PtrYOF(lower) + (lowersize);			\
    (higher) = BlockFHIGHER(lower);					\
    (higher)->BhLower = PtrYOF(lower);					\
    (higher)->BhHigher = PtrYOF(higher) + (highersize);			\
    BlockFHIGHER(higher)->BhLower = PtrYOF(higher);			\
enddef

/*
    We define an iterator on a free list queue.
*/

#define forqueue(head,queue,condition)					\
    for									\
    (									\
	(head) = (queue)->BhAFTER;					\
	(head) != (queue) && (condition);				\
	(head) = (head)->BhAFTER					\
    )

/*
    Append a block into a free list, or remove it.

    We  reset  the BlockHand to the most recently queued block, to make it
    more  probable  that  it  will  be  the  next  to  be  considered  for
    allocation,  to  conserve  locality  of  reference.  Probably a purely
    sequential   movement   of   the   BlockHand   is  better  to  contain
    fragmentation...
*/

#define BlockENQUEUE(list,block)					\
begindef								\
    assert(BlockISFREE(block));						\
    assert((list)->BhAFTER != BlockNIL && (list)->BhBEFORE != BlockNIL);\
    BlockHand		= (block);					\
    (block)->BhBEFORE	= (list);					\
    (block)->BhAFTER	= (list)->BhAFTER;				\
    (block)->BhAFTER->BhBEFORE = (block);				\
    (list)->BhAFTER	= (block);					\
enddef

#define BlockDEQUEUE(block)						\
begindef								\
    assert(BlockISFREE(block));						\
    assert((block)->BhAFTER != BlockNIL && (block)->BhBEFORE != BlockNIL);\
    if (BlockHand == (block)) BlockHand = (block)->BhAFTER;		\
    (block)->BhAFTER->BhBEFORE = (block)->BhBEFORE;			\
    (block)->BhBEFORE->BhAFTER = (block)->BhAFTER;			\
enddef
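
/*
    The  free  list is circular and doubly linked, with the BlockList
    header  (declared  further  down)  as  a sentinel that is its own
    before  and  after  when  the  list  is empty; hence the links are
    never  nil  and  the  macros need no end tests. For example, after
    enqueueing a single block b onto the empty list

	BlockList.BhAFTER == b	  and	b->BhAFTER == &BlockList

    and dequeueing b restores the empty state.
*/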

extern Ptr		sbrk();
extern int		brk();

/*
    The  simulated  break  pointer.  To  avoid problems with the segment
    size,  and  overflowing  a  PtrY,  we  will  never allocate the last
    GRANULE of a segment.
*/

private Ptr		AreaBreak = PtrNIL;

/*
    Allocator parameters
*/

#if (BlockNEW)
    private PtrY	AreaExtend = AreaEXTEND;
#   if(AreaRELEASE)
	private PtrY	AreaRelease = AreaRELEASE;
#   endif
#else
#   define AreaExtend	AreaEXTEND
#   define AreaRelease	AreaRELEASE
#endif

/*
    Here we see what's left at the end of this segment. It is

	2^YBITS - 2^GRANULE - (offset+rounding)

    or equivalently

	(2^YBITS - 1) - (2^GRANULE - 1) - (offset+rounding)

    of  course. We write it like that to make evident that we do not
    want  to  raise  2  to  YBITS  directly  because  this may be an
    overflow;  also on the ATT S5.2/286 a segment cannot be its full
    size,  but just one byte shorter. For both reasons we will never
    allocate the last GRANULE of a segment.

    Notice  how  we  split  the  operation in two parts to avoid the
    compiler rearranging the operands and incurring an overflow.
*/

private PtrY		AreaTail(offset)
    reg PtrY		    offset;
{
    reg PtrY		    tail;

    tail = Bit1M2TO(PtrYBITS) - Bit1M2TO(BlockGRANULE);
    offset = BitMULTIPLE(offset,BlockGRANULE);

    return (offset < tail) ? tail-offset : (PtrY) 0;
}
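
/*
    A  worked  example,  taking PtrYBITS == 16 and BlockGRANULE == 2:
    tail  is  0xffff - 3 == 0xfffc; an offset of 0x1002 is rounded up
    to  0x1004;  and the value returned is 0xfffc - 0x1004 == 0xeff8
    bytes still allocatable in this segment.
*/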

/*
    This  procedure  embodies  our  policy for padding a request. If the
    space  left  after allocation of the requested core is less than the
    minimum chunk size, we will allocate all the available core, else we
    will  allocate  the  minimum  chunk  size  or,  if  larger, the actual
    amount requested.
*/

private PtrY		AreaPad(requested,chunk,available)
    reg PtrY		    requested,chunk,available;
{
    assert(available >= requested);

    return
	((available-requested) < chunk) ? available
	: (requested < chunk) ? chunk
	: requested;
}
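
/*
    For  example,  with  chunk  ==  2048: a request of 100 bytes when
    3000  are  available  is padded to the 2048 byte chunk; a request
    of  3000  when  3100 are available takes all 3100, as the 100 bytes
    left  over  would  be  less  than a chunk; and a request of 2500
    when 10000 are available is granted exactly 2500.
*/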

/*
    AreaRequest()  will  actually  ask the OS to extend the break by the
    given  number  of  bytes;  it will try to fit the required number of
    bytes in the current segment if that's less than the bytes remaining
    free at its end, else will allocate a new segment.

    It will return a pointer aligned to a GRANULE boundary.
*/

private Ptr		AreaRequest(thebreak,bytesp,chunk)
    reg Ptr		    thebreak;
    PtrY		    *bytesp;
    PtrY		    chunk;
{
    reg PtrY		    tail;
    reg PtrY		    bytes;

    /*
	This  piece of code increments the break in the current segment,
	by  due  rounding  and by an appropriate number of bytes. This may
	not  work,  either  because  not  enough  space  is  left  in  the
	current segment, or because the actual allocation fails.

	On  a  1D  machine there is only one segment, but on a 2D machine we
	can  fall  through  to  the  code  that  tries to allocate a new
	segment.

	Note  that  since  we  supply to brk() a pointer, we can request
	allocations  in  excess  of  the  largest  signed int, which are
	impossible using sbrk().

	Also  note  that  on  a  1D  machine the check for request being
	smaller  than  the  size  of  the tail is not as superfluous as it
	seems,  especially  on  machines  with  a small segment or address
	space.
    */


#   if (Ptr1D)
	assert(thebreak == sbrk(0));
#   else
	if (thebreak == PtrNIL)
	    goto allocateNewSegment;
#   endif

attemptExtendCurrent:

    if (*bytesp <= (tail = AreaTail(PtrYOF(thebreak))))
    {
	bytes = AreaPad(*bytesp,chunk,tail);

	/*
	    The  break  may  be  anywhere,  so  we round it to a GRANULE
	    multiple. Note that this cannot result in an overflow, as we
	    never  allow  the  break to get into the last GRANULE of the
	    segment. Also, AreaTail has checked things...
	*/

	thebreak = PtrTO(PtrXOF(thebreak),
	    BitMULTIPLE(PtrYOF(thebreak),BlockGRANULE));

	if (brk(thebreak+bytes) == 0)
	{
	    assert(BitMODULUS(PtrYOF(thebreak),BlockGRANULE) == 0);

	    *bytesp = bytes;
	    return thebreak;
	}
    }

    /*
	On  a  2D  machine we can try allocating core from a new segment
	with  sbrk()  if  allocating from the current segment with brk()
	was not possible.

	As  a result of using sbrk() we will also get an aligned pointer
	in return, and a new guaranteed accurate value of the break.
    */

#   if (!Ptr1D)

allocateNewSegment:

    if (*bytesp <= (tail = AreaTail((PtrY) 0)))
    {
	bytes = AreaPad(*bytesp,chunk,tail);

	/*
	    This is tricky. Unfortunately sbrk() has a signed parameter,
	    so  if  the number of bytes requested would seem negative to
	    it,  we  first  call  sbrk(0) to allocate a new segment, and
	    then  we  call  brk()  to extend that segment to the desired
	    size,  as  brk()  takes a pointer as an argument, and we can
	    compute it ourselves with unsigned arithmetic.
	*/

	if
	(
	    ((int) bytes >= 0)
		? (thebreak = sbrk(bytes)) != (Ptr) -1
		: (thebreak = sbrk(0)) != (Ptr) -1 && brk(thebreak+bytes) == 0
	)
	{
	    assert(BitMODULUS(PtrYOF(thebreak),BlockGRANULE) == 0);

	    *bytesp = bytes;
	    return thebreak;
	}
    }
#   endif

    /*
	If we fail, we do not modify *bytesp, as we may be called again...
    */

    return PtrNIL;
}

/*
    AreaCreate will try to allocate an area of size *bytesp; it will put
    in  *bytesp  the effective number of bytes allocated (possibly larger,
    never  smaller  than  the requested), and will return a pointer to the
    base of the allocated area. This pointer is guaranteed to be aligned
    to a GRANULE.
*/

private Ptr		AreaCreate(bytesp)
    reg PtrY		    *bytesp;
{
    reg Ptr		    newarea;

#   if (BlockDEBUG)
	fprintf(stderr,"\nAreaCreate: AreaBreak 0x%08lx\n",(long) AreaBreak);
#   endif

    /*
	On  a  1D  machine  sbrk(0)  will return to us the CURRENT break
	value,  so  that  we can ignore the simulated value of AreaBreak
	and  our package will work even if the user uses sbrk() or brk()
	directly.

	Unfortunately  on  2D  machines  we  must  rely on the simulated
	break,  and  this  means  that  if the user does use brk() on the
	current segment, the simulated break will be wrong.

	Note  that  if the user does an sbrk() on 1D machines or a brk()
	on  another  segment,  then  the  simulated  break  will  not  be
	invalidated.
    */

#   if (Ptr1D)
	AreaBreak = sbrk(0);
#   endif

    /*
	Our first attempt is for a request to allocate the given number of
	bytes with padding, if necessary, to AreaEXTEND.
    */

    newarea = AreaRequest(AreaBreak,bytesp,AreaExtend);

    /*
	If the allocation has failed, this may have been because of the
	padding requested. Specify no padding and resubmit.
    */

    were (newarea /* still */ == PtrNIL)
    {
	newarea = AreaRequest(AreaBreak,bytesp,(PtrY) 0);

    giveUp:
	were (newarea /* stuck as */ == PtrNIL)
	{
	    *bytesp = (PtrY) 0;
	    return PtrNIL;
	}
    }

    assert(BitMODULUS((PtrW) newarea,BlockGRANULE) == 0);

    /*
	Note  that  this  may  well leave the break unaligned. We do not
	care, it will be realigned on the next request.

	Note  also  that  newarea  may  be  !=  AreaBreak,  if we had to
	allocate a new segment.
    */

adjustTheBreak:
    AreaBreak = newarea + *bytesp;

    return newarea;
}

#if (AreaRELEASE > 0)
private void		AreaDelete(from)
    reg Ptr		    from;
{
#   if (BlockDEBUG)
	fprintf(stderr,"AreaDelete: from 0x%08lx, AreaBreak 0x%08lx\n",
	    (long) from,(long) AreaBreak);
#   endif

    if (AreaRelease == 0)
	return;

    assert(PtrXOF(AreaBreak) == PtrXOF(from));

    if (brk(from) == 0)
	AreaBreak = from;
#   if (BlockDEBUG)
    else
    {
	perror("Releasing memory with brk()");
	abort();
    }
#   endif
}
#endif

private struct BlockHead BlockList;
private struct BlockHead *BlockHand;

#if (BlockNEW)
    private PtrY	BlockSmall = BlockSMALL;
#else
#   define BlockSmall	BlockSMALL
#endif

private short unsigned	BlockStarted = 0;

#if (BlockPRINT)

private void		BlockHeadPrint(head)
    reg struct BlockHead    *head;
{
    fprintf(stderr,"  HEAD 0x%08lx: lower 0x%04x, higher 0x%04x, %s, size %u\n",
	(long) head,head->BhLower,head->BhHigher&BlockHIGH,
	BlockISFREE(head) ? "FREE" : "BUSY",
	(head->BhHigher > PtrYOF(head)) ? BlockSIZE(head) : 0);
}

private struct BlockHead *BlockFirst;

private void		BlockAllPrint()
{
    reg struct BlockHead    *head;

    for (head = BlockFirst; head->BhHigher > head->BhLower; head = BlockHIGHER(head))
	BlockHeadPrint(head);
}

private void		BlockFreePrint()
{
    reg struct BlockHead    *head;

    fprintf(stderr,"FREE 0x%08lx before 0x%08lx, after 0x%08lx, hand 0x%08lx\n",
	(long) &BlockList,(long) BlockList.BhBEFORE,
	(long) BlockList.BhAFTER,(long) BlockHand);

    forqueue(head,&BlockList,1)
	BlockHeadPrint(head);

    fputs("END FREE\n",stderr);
}
#endif

/*
    Here  we  allocate  zones.  A  zone  is  a region of contiguous core
    divided  into blocks, where the blocks are doubly linked into a list
    of  contiguous ascending address blocks. In order to close this list
    we  put at the end of each zone a fictitious block, size zero (so it
    is  its own higher), permanently busy, whose lower is the block that
    covers the whole zone.

    This last block is surreptitiously addressed as the "lower" block of
    the first block in the zone, closing the list.

    When  we  allocate  a new area of core to turn into a zone, we check
    whether  the  new  area  comes  just  at  the  end of the previously
    allocated one; if so, we merge the zones.
*/
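
/*
    To  summarize  the  linking  that ZoneCreate() below establishes
    among the three distinguished headers (first, new, newlast):

	first->BhLower	  == PtrYOF(newlast)		closes the ring upwards
	newlast->BhHigher == PtrYOF(first)|BlockBUSY	permanently busy
	newlast->BhLower  == PtrYOF(new)		the block just under it
	new->BhHigher	  == PtrYOF(newlast)		new is free
*/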

private struct BlockHead *ZoneLast = BlockNIL;

private struct BlockHead *ZoneCreate(size)
    PtrY		    size;
{
    reg struct BlockHead    *new;
    reg struct BlockHead    *first;
    reg struct BlockHead    *newlast;

    {
	reg Ptr			area;
	reg struct BlockHead    *oldlast;

    obtainNewArea:

	size += BlockEXTRA;
	area = AreaCreate(&size);

	were (area == PtrNIL)
	    return BlockNIL;

    recomputePointerToLast:

	newlast = (struct BlockHead *) (area+size-BlockEXTRA);

	were (ZoneLast == BlockNIL)
	    ZoneLast = newlast;

	oldlast = ZoneLast;

#	if (BlockDEBUG)
	{
	    fprintf(stderr,"\nZoneCreate: area 0x%08lx, newlast 0x%08lx, oldlast:\n",
		(long) area,(long) newlast);
	    BlockHeadPrint(oldlast);
	}
#	endif

	/*
	    Now we have got the storage, we must organize it. A zone has
	    two distinguished headers, the first and the last. The first
	    is  the  header  of  a  normal  block, except that its BhLower
	    actually  points  higher,  to  the  last. The last is a header
	    without  a  body, and its BhHigher points lower to the first.
	    In  between  there is a sequence of headers with their blocks,
	    chained as usual.

	    A zone is correctly set up when the above situation is true,
	    so we only have to establish it. The newly allocated storage
	    has to be set up as a new block just under the last.

	    We  only  have to determine three pointers then: that to the
	    first  block, to the last block (which is fixed), and to the
	    new block.

	    In  order  to  collapse zones together, we check whether the
	    new  zone  sits  just  above the last of the previously allocated
	    one. If this is true, the first is that of the old zone, the
	    new  is either the previous zone last (to recover its space)
	    or  the  block before that if free; if this is not true, the
	    first  and  new  coincide  at  the  beginning  of  the  newly
	    allocated zone.
	*/

    computeFirstAndNew:

	if (((Ptr) oldlast + BlockEXTRA) != area)
	    first = new = (struct BlockHead *) area;
	else /* Append the new zone to the old */
	{
	    /*
		Notice  that  first  ==  new == oldlast may be true, and
		later  on  we  test  whether  new  is  free.  Resist the
		temptation  to take off BUSY from oldlast so that we can
		use FHIGHER.
	    */

	    first = BlockHIGHER(oldlast);
	    new = BlockLOWER(oldlast);

	    if (!BlockISFREE(new))
		new = oldlast;
	    else
		BlockDEQUEUE(new);
	}
    }

    /*
	To make our zone look properly shaped, we must link the
	first, new and newlast headers so that newlast looks as if it
	were under first, and over new.
    */

linkLastUnderFirst:

    first->BhLower	= PtrYOF(newlast);
    newlast->BhHigher	= PtrYOF(first)|BlockBUSY;

linkLastAboveNew:

    new->BhHigher	= PtrYOF(newlast);
    newlast->BhLower	= PtrYOF(new);

#   if (BlockDEBUG)
    {
	fputs("ZoneCreate: new:\n",stderr);
	BlockHeadPrint(new);
	fputs("\n",stderr);
    }
#   endif

    ZoneLast = newlast;

    assert(BlockHIGHER(ZoneLast) == first && BlockLOWER(first) == ZoneLast);
    assert(BlockLOWER(ZoneLast) == new
	&& BlockISFREE(new) && BlockFHIGHER(new) == ZoneLast);

    return new;
}

#if (AreaRELEASE > 0)
private void		ZoneDelete(newlast)
    reg struct BlockHead	  *newlast;
{
    reg struct BlockHead	  *oldlast;

    assert(BlockFHIGHER(newlast) == ZoneLast && BlockFSIZE(newlast) >= AreaRELEASE);

    /*
	We want to release everything above newlast, but if newlast is at the
	beginning of a zone, we want to release that zone in its entirety.
    */

    if (newlast == BlockHIGHER(ZoneLast))
    {
#	if (Ptr1D)
	/*
	    Commented  out,  as  we  did  not  necessarily start allocating
	    storage first.
	*/
	{
	    extern int			end;
	    /* assert(newlast == (Ptr) BitMULTIPLE((PtrW) &end,BlockGRANULE)); */
	}
#	else
	    /*
		It must be the beginning of a segment
	    */
	    assert(PtrYOF(newlast) == 0);
#	endif

	AreaDelete((Ptr) newlast);

	ZoneLast = BlockNIL;
	return;
    }

newlastBecomesZoneLast:
    oldlast = ZoneLast;
    ZoneLast = newlast;

#   if (BlockDEBUG)
    {
	fprintf(stderr,"ZoneDelete: first, block, oldlast\n");
	BlockHeadPrint(BlockHIGHER(oldlast));
	BlockHeadPrint(newlast);
	BlockHeadPrint(oldlast);
    }
#   endif

linkFirstAndNewLast:
    BlockHIGHER(oldlast)->BhLower = oldlast->BhLower;
    newlast->BhHigher = oldlast->BhHigher;

#   if (BlockDEBUG)
    {
	fprintf(stderr,"  first, newlast\n");
	BlockHeadPrint(BlockHIGHER(oldlast));
	BlockHeadPrint(newlast);
    }
#   endif

    AreaDelete((Ptr) &(newlast->BhBODY));
}
#endif


private void		BlockStartup()
{
    /*
	Tricky ! We see that Higher is not higher, Lower is not lower ...
    */

    BlockList.BhHigher = BlockList.BhLower = PtrYOF(&BlockList);
    BlockList.BhAFTER = BlockList.BhBEFORE = &BlockList;

    BlockHand = &BlockList;

    BlockStarted = 1;

#   if (BlockDEBUG)
    {
	fputs("\nBlockStartup:\n",stderr);
	BlockFreePrint();
	fputs("\n",stderr);
    }
#   endif
}

private void		BlockDelete(block)
    reg struct BlockHead	  *block;
{
    assert(!BlockISFREE(block));

#   if (BlockDEBUG)
    {
	fputs("\n\nBlockDelete: block, free list\n",stderr);
	BlockHeadPrint(block);
	BlockFreePrint();
    }
#   endif

    block->BhHigher &= BlockHIGH;

    {
	reg struct BlockHead   *nearest;

    collapseHigherBlock:

	nearest = BlockFHIGHER(block);
#	if (BlockDEBUG)
	{
	    fprintf(stderr,"  higher 0x%08lx\n",(long) nearest);
	    BlockHeadPrint(nearest);
	}
#	endif

	if (nearest != block && BlockISFREE(nearest))
	{
	    BlockDEQUEUE(nearest);
	    BlockCOLLAPSE(block,nearest);
	}

    collapseLowerBlock:

	nearest = BlockLOWER(block);
#	if (BlockDEBUG)
	{
	    fprintf(stderr,"  lower 0x%08lx\n",(long) nearest);
	    BlockHeadPrint(nearest);
	}
#	endif

	if (nearest != block && BlockISFREE(nearest))
	{
	    BlockDEQUEUE(nearest);
	    BlockCOLLAPSE(nearest,block);
	    block = nearest;
	}
    }

    /*
 	Before  putting it back onto the free list, we check if this is
	the  last  block  in  memory and whether it is "large"; if so,
	the  zone  is  contracted,  freeing  up storage. The following
	test  deals  with  "Zone"  variables  about which "Block" ought
	to  know  nothing;  the  test ought thus to be in ZoneDelete(),
	but we can save a procedure call in the most common case...

	Note  that  we  do not allow deleting a block that begins a zone
	(and  must  therefore  cover  it  all,  as a block considered for
	deletion  must  also  be  the last). We split it into a part that
	remains and one that is actually deleted if possible.
    */

#   if (AreaRELEASE == 0)
	BlockENQUEUE(&BlockList,block);
#   else
    if (AreaRelease == 0)
	BlockENQUEUE(&BlockList,block);
    else
    {
	reg struct BlockHead	    *rest;
	reg PtrY		    blocksize;

	/*
	    The  block  to be deleted is split into two parts, block, which is
	    put  on  the free list, and rest, which is deleted. Either of them
	    may be NIL.

	    * rest must be at least AreaRelease bytes long.
	    * block must be at least BlockSmall bytes long.
	    * rest must not begin at the beginning of a zone.

	    if the whole block is smaller than AreaRelease
	    or is not the last one in memory:
		set block to the whole of it.
		set rest to NIL.
	    if the whole block is at least AreaRelease bytes long
	    and is the last one in memory:
		if it is not the first block in the zone:
		    set block to NIL.
		    set rest to the whole of it.
		if it is the first block in the zone:
		    if it is larger than AreaRelease+BlockSmall:
			split it into two pieces, the second to be AreaRelease bytes long.
			set block to the first piece.
			set rest to the second piece.
		    if it is smaller than AreaRelease+BlockSmall:
			set block to the whole of it.
			set rest to NIL.
	*/

	if
	(
	    (blocksize = BlockFSIZE(block)) < AreaRelease
	    || BlockFHIGHER(block) != ZoneLast
	)
	    rest = BlockNIL;
	else
	{
#ifdef undef /* Of very dubious value... */
	    if (block->BhLower > PtrYOF(block))
	    {
		if (blocksize < (AreaRelease+BlockSmall))
		    rest = BlockNIL;
		else
		{
		    blocksize -= AreaRelease;
		    BlockSPLIT(block,blocksize,rest,AreaRelease);
		}
	    }
	    else
#endif
	    {
		rest = block;
		block = BlockNIL;
	    }
	}

#	if (BlockDEBUG)
	{
	    if (block != BlockNIL)
	    {
		fputs("BlockDelete: block to enqueue\n",stderr);
		BlockHeadPrint(block);
	    }
	    if (rest != BlockNIL)
	    {
		fputs("BlockDelete: rest to delete\n",stderr);
		BlockHeadPrint(rest);
	    }
	}
#	endif

	if (block != BlockNIL)
	    BlockENQUEUE(&BlockList,block);

	if (rest != BlockNIL)
	    ZoneDelete(rest);
    }
#   endif

#   if (BlockDEBUG)
    {
	fputs("BlockDelete: at end\n",stderr);
	BlockFreePrint();
	fputs("\n",stderr);
    }
#   endif
}

private struct BlockHead *BlockCreate(old,bytes)
    reg struct BlockHead    *old;
    reg PtrY		    bytes;
{
    reg struct BlockHead    *new;
    reg PtrY		    newsize;

initializeGlobals:
    were (!BlockStarted)
	BlockStartup();

    assert(bytes >= 1);

includeHeadInSize:
    newsize = BlockEXTRA + bytes;

#   if (BlockDEBUG)
	fprintf(stderr,"\n\nBlockCreate: bytes %u, size %u",bytes,newsize);
#   endif

roundAndPadSize:

    were (BitMODULUS(newsize,BlockGRANULE) != 0)
	newsize = BitMULTIPLE(newsize,BlockGRANULE);

    were (newsize < BlockSmall)
	newsize = BlockSmall;

#   if (BlockDEBUG)
	fprintf(stderr,", rounded size %u\n",newsize);
#   endif

    /*
	Now  for  our logic. The old block may or may not exist. In any case
	in  order  to  resize  it we must find a new block of the new desired
	size  or  larger, and then pare it down to the requested size if larger.

	Finding  a  new block, if an old one exists, has two special cases: If
	the  old  block is larger than the new size, we do nothing, as it is
	the  block  we  are  looking  for.  If  the old is smaller, we check
	whether the block higher than it is free, and whether their combined
	sizes  are  sufficient.  If so, the higher is detached from the free
	list  and then they are just collapsed. If not, we look at the lower
	block and do the same.

	At  this  point  if  we  have not found a suitable new block yet, we
	allocate  a  new  one  searching  in the free list or creating a new
	zone.

	Finally,  if  the  old block existed, and the new is not at the same
	address,  we  copy  the  body  of the old block to the new, which is
	marked busy, and its body address is returned.
    */

presumeFailure:
    new = BlockNIL;

tryReuseOld:
    if (old != BlockNIL)
    {
	reg PtrY		  oldsize;
#	if (BlockTRYLOWER)
	    reg struct BlockHead  *lower = BlockLOWER(old);
#	endif
#	if (BlockTRYHIGHER)
	    reg struct BlockHead  *higher = BlockHIGHER(old);
#	endif

    oldConsideredFree:
	old->BhHigher &= BlockHIGH;
	oldsize = BlockFSIZE(old);

#	if (BlockDEBUG)
	{
	    fprintf(stderr,"old 0x%08lx, lower,higher\n",(long) old);
#	    if (BlockTRYLOWER)
		BlockHeadPrint(lower);
#	    endif
#	    if (BlockTRYHIGHER)
		BlockHeadPrint(higher);
#	    endif
	}
#	endif

	/*
	    If  this  has any success, old will transform itself into new, and
	    old  will  no  longer  have  meaning.  If  one  of  the  two,
	    lower  or  higher,  brings the size up to the requested one, we
	    collapse  only  higher,  to avoid copying down the contents of the
	    block; if both are needed, we collapse both with old.
	*/

#	if (BlockTRYHIGHER)
	    if (BlockISFREE(higher)
		&& (oldsize < newsize)
		    && ((oldsize+BlockFSIZE(higher)) >= newsize
#		if (BlockTRYLOWER)
			|| (BlockISFREE(lower)
			    && (BlockFSIZE(lower)+oldsize+BlockFSIZE(higher)) >= newsize)
		    )
#		endif
	    )
	    {
	    appendHigherToOld:
		BlockDEQUEUE(higher);
		BlockCOLLAPSE(old,higher);
		oldsize = BlockFSIZE(old);
	    }
#	endif
	    
#	if (BlockTRYLOWER)
	    if (BlockISFREE(lower)
		&& (oldsize < newsize)
		&& ((BlockFSIZE(lower)+oldsize) >= newsize)
	    )
	    {
	    prependLowerDoCopy:
		BlockDEQUEUE(lower);
		BlockCOLLAPSE(lower,old);
		CoreCOPY((Ptr) &(old->BhBODY),(Ptr) &(lower->BhBODY),oldsize);
		old = lower; oldsize = BlockFSIZE(old);
	    }
#	endif

	/*
	    Now  old  has been enlarged as much as possible by collapsing into
	    it  the  lower  and  higher blocks if free. If its size is greater
	    than  the requested size we just recycle it as the new block, else
	    we  set  it  to  busy  again, and will have to allocate a new free
	    block.
	*/

	if (oldsize < newsize)
	    old->BhHigher |= BlockBUSY;
	else
	{
	    new = old;
	    old = BlockNIL;
	}
    }

    assert(old == BlockNIL || !BlockISFREE(old));
    assert((new != BlockNIL) <= /* IMPLIES */ (old == BlockNIL));

mustAllocateNew:
    if (new == BlockNIL)
    {
#	if (BlockDEBUG)
	    BlockFreePrint();
	    fprintf(stderr,"  searching:\n");
#	endif

    searchNextFit:
	forqueue(new,BlockHand,BlockFSIZE(new) < newsize)
#	if (BlockDEBUG)
	    BlockHeadPrint(new);
#	else
	    skip;
#	endif
#	if (BlockDEBUG)
	    BlockFreePrint();
	    fprintf(stderr,"  end of search!\n");
#	endif

	if (new != &BlockList && BlockFSIZE(new) >= newsize)
	    BlockDEQUEUE(new);
	else
	{
	tryCreateStorage:
	    new = ZoneCreate(newsize);
	    were (new == BlockNIL)
		return BlockNIL;
	}

    mayCopyOldAndDeleteIt:

	if (old != BlockNIL)
	{
	    CoreCOPY((Ptr) &(old->BhBODY),(Ptr) &(new->BhBODY),BlockSIZE(old));
	    /*
		Tricky  Dick ! BlockDelete will check for neighbouring blocks to
		be  free  for  collapsing,  and  nobody forbids new to be one of
		those. We temporarily mark it busy, then free it again.
	    */
	    new->BhHigher |= BlockBUSY;
	    BlockDelete(old);
	    new->BhHigher &= BlockHIGH;
	}
    }

    /*
	At  this point, either old recycled or newly allocated, new is ready
	to  be returned, and old has lost any meaning. Note however that new
	may be larger than requested; if the excess part is not too small we
	cut it away and refree it.
    */

maybeTooLarge:

    if (new != BlockNIL)
    {
	reg PtrY		    actualsize;
	reg PtrY		    restsize;

	if ((actualsize = BlockFSIZE(new)) != newsize
		&& (restsize = actualsize-newsize) > BlockSmall)
	{
	    reg struct BlockHead    *rest;

	splitNew:
	    BlockSPLIT(new,newsize,rest,restsize);
	    rest->BhHigher |= BlockBUSY; 
#	    if (BlockDEBUG)
		fprintf(stderr,"rest 0x%08lx size %u\n",(long) rest,BlockFSIZE(rest));
		BlockHeadPrint(rest);
#	    endif

	    /*
		We must mark new as busy before deleting rest otherwise Delete
		will collapse rest again with new... Why Delete(rest) instead of
		ENQUEUE(rest) ? Because the block after rest may be free, and they
		ought to be collapsed.
	    */
	    new->BhHigher |= BlockBUSY;
	    BlockDelete(rest);
	    /* new->BhHigher &= BlockHIGH; */
	}
    }

    new->BhHigher |= BlockBUSY;

#   if (BlockDEBUG)
    {
	fputs("BlockCreate: at end\n",stderr);
	BlockHeadPrint(new);
	fputs("\n",stderr);
    }
#   endif

    return new;
}
#if (!BlockNEW)

    extern char		    *malloc();
    extern char		    *realloc();
    extern void		    free();

    extern char		    *calloc();
    extern void		    cfree();

#else

#   include <malloc.h>

    struct mallinfo	    mallinfo(maximum)
	int			maximum;
    {
	static struct mallinfo	info;

	return info;
    }

#   include <errno.h>
    extern int		    errno;

    /*
	This  is backwardly compatible with the mallopt(3x) procedure in
	earlier  versions  of  this library. The first option that still
	applies,  M_MXFAST  is extended with an extra parameter, so that
	code  written  for this mallopt() will work with the old one, as
	the old one will ignore the extra parameter.

	The  first  value  is  by at least how many bytes to extend the
	break  when  there  is  insufficient  space;  the second is how
	many  free  bytes  there  must  at  least  be  under  the break
	before the break is reduced.

	Note  that these two parameters may be changed at any time, even
	after  the  start of allocation, but for backwards compatibility
	we fail calls to mallopt() once allocation has started.

	The  second  option  we  keep,  M_GRAIN, has its effect slightly
	changed;  instead  of being the rounding granule for allocations
	under  M_MXFAST,  it  becomes  the  minimum block size, which is
	compatible, and probably more sensible.
    */

    int			    mallopt(option,value1,value2)
	int			option;
	int			value1,value2;
    {
	errno = 0;

	if (BlockStarted)
	{
	    errno = EBUSY;
	    return -1;
	}

	switch (option)
	{
	case M_MXFAST:

	    if (value1 != -1)
	    {
		AreaExtend = (PtrY) value1;

		were (AreaExtend < Bit2TO(2+BlockGRANULE))
		{
		    errno = EINVAL;
		    AreaExtend = Bit2TO(2+BlockGRANULE);
		}

		were (AreaExtend > AreaTail((PtrY) 0))
		{
		    errno = EINVAL;
		    AreaExtend = AreaTail((PtrY) 0);
		}
	    }

	    if (value2 != -1)
	    {
		AreaRelease = (PtrY) value2;

		were (AreaRelease < Bit2TO(4+BlockGRANULE))
		{
		    errno = EINVAL;
		    AreaRelease = Bit2TO(4+BlockGRANULE);
		}

		were (AreaRelease > AreaTail((PtrY) 0))
		{
		    errno = EINVAL;
		    AreaRelease = AreaTail((PtrY) 0);
		}
	    }

	    break;

	case M_GRAIN:

	    BlockSmall = BlockEXTRA + (PtrY) value1;

	    were (BlockSmall < BlockSMALL)
	    {
		errno = EINVAL;
		BlockSmall = BlockSMALL;
	    }

	    were (BlockSmall > AreaExtend)
	    {
		errno = EINVAL;
		BlockSmall = AreaExtend;
	    }

	    break;

	case M_NLBLKS:
	case M_KEEP:
	default:

	    errno = EINVAL;
	    break;
	}

	return (errno) ? -1 : 0;
}
#endif

public char		*malloc(bytes)
    reg unsigned		  bytes;
{
    reg struct BlockHead	  *block;

    block = BlockCreate(BlockNIL,(PtrY) bytes);
    return (block == BlockNIL) ? (char *) PtrNIL : (char *) &block->BhBODY;
}

/*
    Allocating  a  row  of  n  objects  each  bytes long is different from
    allocating  a  single  block  of  n*bytes size only when each object's
    bytes  does  not include padding or alignment, so that there is unused
    space  between  each  element  of  the  row. Example: if sizeof of the
    struct s {short int i; char c;} is 3, but the machine requires aligned
    data  references,  an  array of 10 struct s will be 40 bytes long, not
    30.  Most compilers will tell you that sizeof (struct s) is 4, though,
    if  the  machine requires aligned data references... Use the following
    (portable) program to see what happens on your machine:

	struct s1 { short int i; char c; }	a1[10];
	struct s2 { long float f; char c; }	a2[10];

	main()
	{
	    printf("sizeof: s1 x 1 = %lu, s1 x 10 = %lu\n",
		(long unsigned) sizeof (struct s1),(long unsigned) sizeof a1);

	    printf("sizeof: s2 x 1 = %lu, s2 x 10 = %lu\n",
		(long unsigned) sizeof (struct s2),(long unsigned) sizeof a2);
	}
*/

public char		*calloc(n,bytes)
    reg unsigned		  n;
    reg unsigned		  bytes;
{
    reg char		  *area;

    {
	reg struct BlockHead    *block;

	block = BlockCreate(BlockNIL,(PtrY) (n*bytes));
	were (block == BlockNIL)
	    return (char *) PtrNIL;
	area = (char *) &block->BhBODY;
    }

    /*
	Calloc(3) must zero the area it returns.
    */
    CoreFILL(area,0,n*bytes);

    return area;
}

public char		*realloc(old,newbytes)
    reg char		  *old;
    reg unsigned		  newbytes;
{
    reg struct BlockHead	  *block;

    block = BlockCreate(
	structof((union BlockBody *) (old),struct BlockHead,BhBODY),
	(PtrY) newbytes
    );

    return (block == BlockNIL) ? (char *) PtrNIL : (char *) &block->BhBODY;
}

public void		free(old)
    reg char		  *old;
{
    assert(old != (char *) PtrNIL);
    BlockDelete(
	structof((union BlockBody *) (old),struct BlockHead,BhBODY)
    );
}

/*
    Why  ever  one would want two different free functions is beyond me. I
    cannot  imagine  a  sensible  implementation  of  this module in which
    calloc() would require a different free() from malloc().
*/

public void		cfree(old)
    reg char		  *old;
{
    free(old);
}
SHAR_EOF
fi # end of overwriting check
if test -f 'malloc.h'
then
	echo shar: will not over-write existing file "'malloc.h'"
else
cat << \SHAR_EOF > 'malloc.h'
extern char		*malloc(/* unsigned */);
extern char		*realloc(/* char *,unsigned */);
extern void		free(/* char * */);

extern char		*calloc(/* unsigned,unsigned */);
extern void		cfree(/* char * */);

extern int		mallopt();
#   define M_MXFAST	    1		/* Core allocation watermarks	*/
#   define M_NLBLKS	    2		/* Blocks in a chunk		*/
#   define M_GRAIN	    3		/* Small blocks rounded to this	*/
#   define M_KEEP	    4		/* free() must not deallocate	*/

struct mallinfo
{
    int			    arena;
    int			    ordblks;
    int			    smblks;
    int			    hblks;
    int			    hblkhd;
    int			    usmblks;
    int			    fsmblks;
    int			    uordblks;
    int			    fordblks;
    int			    keepcost;
};

extern struct mallinfo	mallinfo();
SHAR_EOF
fi # end of overwriting check
#	End of shell archive
exit 0
-- 
Piercarlo "Peter" Grandi            |  ARPA: pcg%cs.aber.ac.uk at nss.cs.ucl.ac.uk
Dept of CS, UCW Aberystwyth         |  UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK  |  INET: pcg at cs.aber.ac.uk


