Does TC's farrealloc have a bug?
Graham Wheeler
gram at uctcs.uucp
Wed Jun 19 18:39:45 AEST 1991
I am reposting this article, as I never saw the original appear when reading
with nn and have had no responses; I assume it got lost.
I have an application in which I allocate a number of variable-sized
records, of average size about 6 bytes. Whenever a record's size changes, I
use farrealloc. I have over 300KB available at the time I start allocating
these nodes, and I keep track of the number of allocated bytes. My problem is
that I get a memory allocation failure at about 120KB of allocated memory.
This must mean one of two things:
i) there is a bug in TC (actually TC++ v1.0), such that when farrealloc
   fails to resize a block in place and allocates a new one, it doesn't
   free the old one; or
ii) the allocated blocks are held on a linked list with an overhead of
   (I would guess) 12 bytes per block (two far pointers and one long size).
   That would mean my nodes consume three times the memory I actually
   need for record storage.
Personally, I think it is better to keep only the free blocks on a linked
list, as that way you get the maximum use out of your memory. I don't know
how TC does it.
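If theory (ii) is the explanation, the usual remedy for lots of tiny records
is to suballocate them from big chunks, paying the per-block overhead once per
chunk rather than once per record. A minimal sketch in portable C (the names
and the 4KB chunk size are mine; under TC the chunk would come from farmalloc
instead of malloc):

```c
#include <stdlib.h>

/* Minimal arena for many tiny records: grab large chunks from the
 * real allocator and hand out pieces, so the allocator's per-block
 * header is paid once per 4KB chunk, not once per 6-byte node. */
#define CHUNK_SIZE 4096

struct arena {
    char  *chunk;   /* current chunk */
    size_t used;    /* bytes handed out from it */
};

void *arena_alloc(struct arena *a, size_t n)
{
    n = (n + 1) & ~(size_t)1;           /* 2-byte alignment, 8086-style */
    if (n > CHUNK_SIZE)
        return NULL;                    /* too big for this sketch */
    if (a->chunk == NULL || a->used + n > CHUNK_SIZE) {
        a->chunk = malloc(CHUNK_SIZE);  /* one header per chunk */
        if (a->chunk == NULL)
            return NULL;
        a->used = 0;
    }
    a->used += n;
    return a->chunk + a->used - n;
}
```

The trade-off: resizing a record means allocating a new piece and copying, and
individual pieces can't be freed back (a real version would keep the chunks on
a list so they can all be released together), so this fits best when records
are freed en masse.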
Does anyone know which of these two theories is correct? Or is there a
different explanation?
Graham Wheeler <gram at cs.uct.ac.za> | "That which is weak conquers the strong,
Data Network Architectures Lab | that which is soft conquers the hard.
Dept. of Computer Science | All men know this; none practise it"
University of Cape Town | Lao Tzu - Tao Te Ching Ch.78
More information about the Comp.lang.c mailing list