close() errors (was: Trojan Horses)

Kristoffer Eriksson ske at pkmab.se
Tue Oct 23 05:51:47 AEST 1990


In article <35111 at cup.portal.com> ts at cup.portal.com (Tim W Smith) writes:
>Furthermore, when the close() fails, you now have a program that knows
>that some amount of previously written data is not valid. ...
>Or does this mean that a program should keep a copy
>in memory of all data that is hard to reproduce until it closes the file?

Yes, I think it should, provided that the program can actually do something
to recover from the problem once it is detected. There's no point in saving
the data if it won't be possible to recover from the trouble and write it
out later. There is also no point in saving it if you can simply rerun the
program with the original input data, which, in many cases, you still have
around. (I certainly would not throw away my input data before the output
data was safely stored, and often not even then. You never know what can
happen.) And in the case of an editor, for example, saving the data for a
later retry is no problem, since you have to keep the edited text in memory
anyway.
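
For illustration, here is a minimal sketch of that idea in C (the function
name save_buffer and its arguments are made up for this example): the
caller's buffer is left intact unless both write() and close() succeed, so
a failure can still be handled by writing the data somewhere else later.

    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Write len bytes from buf to path. Returns 0 on success, -1 on
     * failure. On failure the caller still has buf and can retry
     * elsewhere; the buffer should only be freed after this returns 0.
     */
    int save_buffer(const char *path, const char *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
        if (fd < 0)
            return -1;
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                (void) close(fd);
                return -1;           /* buf is untouched; retry later */
            }
            buf += n;
            len -= (size_t) n;
        }
        if (close(fd) < 0)           /* some errors only show up here */
            return -1;
        return 0;
    }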

You only have to keep the data around until the next successful fsync()
(if you do any; if your data is very sensitive, you may have good reason
to do some). And if your system happens to offer a synchronous write mode
for your Very Important File, you don't have to save anything at all,
since each write() then reports errors immediately.
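
As a rough sketch of that pattern in C (saved_copy here is a hypothetical
pointer to the retained data), tie the lifetime of the in-memory copy to a
successful fsync():

    /* Only discard the in-memory copy once fsync() reports that the
     * kernel buffers for fd have actually reached the disk. (With a
     * synchronous write mode such as O_SYNC, on systems that have it,
     * each write() already reports errors directly and no copy is
     * needed.) */
    if (fsync(fd) == 0) {
        free(saved_copy);       /* data is safely out; copy not needed */
        saved_copy = NULL;
    }
    /* else: keep saved_copy around and retry later */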

>In summary, this behaviour of a file system is not acceptable.

It apparently was deemed acceptable for Unix. And I think it is quite hard
to make a completely failure-free file system, especially if you want
performance too.
-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske at pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!sunic.sunet.se!kullmar!pkmab!ske


