Does TC's farrealloc have a bug?

alex colburn colburn at tessa
Fri Jun 21 00:06:13 AEST 1991


In article <1991Jun19.083945.8921 at ucthpx.uct.ac.za> gram at uctcs.uucp (Graham Wheeler) writes:
>I am reposting this article as I never saw the original appear when using nn
>[...]
>records, of average size about 6 bytes. Whenever the size changes, I use
>farrealloc. I have over 300kb available at the time I start allocating these
>nodes. I also keep track of the number of allocated bytes. My problem is that
>I get a memory allocation failure at about 120kb of allocated memory. This
>[...]
	I don't think your problem lies in Borland's internal memory
management routines; it would be kind of silly for them to write
something that uses twice as much memory as it is managing.  Is this
error message something you get right before it's time for a reboot?
If so, I bet you're referencing invalid addresses.
   	How are you keeping track of your reallocated memory?  If you
have some sort of linked list data structure, then when you farrealloc
you just might be scrambling pointers: farrealloc is free to move the
block, and any pointers you saved into the old block become garbage.
For debugging purposes, try calling farheapcheck() after each memory
function so you can determine the exact point where things go buggy.
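Something like this is what I mean (off the top of my head and
untested, so treat it as a sketch; the node struct and the
check_heap() helper are just made up for illustration, and if I
remember right farheapcheck() returns a negative value such as
_HEAPCORRUPT when the far heap is trashed):

#include <stdio.h>
#include <stdlib.h>             /* exit */
#include <alloc.h>              /* farmalloc, farrealloc, farheapcheck */

/* Made-up 6-byte record, standing in for whatever you're allocating */
struct node {
        struct node far *next;
        char data[2];
};

/* Hypothetical helper: bail out at the first sign of corruption */
static void check_heap(char *where)
{
        if (farheapcheck() < 0) {
                printf("far heap corrupt after %s\n", where);
                exit(1);
        }
}

int main(void)
{
        struct node far *head;
        struct node far *saved;

        head = (struct node far *) farmalloc(sizeof(struct node));
        check_heap("farmalloc");
        saved = head;           /* a second copy of the pointer */

        /* farrealloc may MOVE the block.  'head' gets the new
           address, but 'saved' still points at the old one;
           writing through it scrambles the heap.               */
        head = (struct node far *) farrealloc(head,
                        2L * sizeof(struct node));
        check_heap("farrealloc");

        /* saved->next = head;     <-- WRONG, 'saved' may be stale */
        head->next = NULL;         /* use the returned pointer     */
        return 0;
}

If the corruption message fires right after one particular call,
you've found your culprit: always assign the farrealloc return value
back, then re-derive any other pointers into the block.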
   	If you have memory blocks greater than 64K, the pointer must
be explicitly declared as huge (this one threw me for the last week
and a half); otherwise address calculations are done on the offset
only, the offset wraps around at 64K, and you get a neat little
effect that usually locks up the system.  For example (again
untested, and the 100000L is just an arbitrary size over 64K):
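#include <stdio.h>
#include <alloc.h>              /* farmalloc, farfree */

int main(void)
{
        long i;
        /* A plain far pointer here would be WRONG: p[i] only bumps
           the 16-bit offset, which silently wraps back to offset 0
           somewhere past 64K and starts stomping memory:

                char far *p = (char far *) farmalloc(100000L);

           Declared huge, the pointer is normalized on every
           arithmetic operation, so indexing past 64K works.    */
        char huge *p = (char huge *) farmalloc(100000L);

        if (p == NULL) {
                printf("farmalloc failed\n");
                return 1;
        }
        for (i = 0; i < 100000L; i++)
                p[i] = 0;       /* crosses the 64K boundary safely */

        farfree((void far *) p);
        return 0;
}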
Good Luck!

Alex.


