%g format in printf

Bradford R. Daniels daniels at grue.dec.com
Sun Sep 10 04:59:31 AEST 1989


Distribution: world
Organization: Digital Equipment Corporation

In article <1441 at hiatus.dec.com>, daniels at grue.dec.com (Bradford R.
Daniels) writes:
> In article <MCGRATH.89Sep5175335 at saffron.Berkeley.EDU>,
> mcgrath at saffron.Berkeley.EDU (Roland McGrath) writes:
> > Yes.  The ANSI standard does specify that the default precision is 6.
> 
> Huh?  Where?  I am working from document X3J11/88-159, which says
> under %g:
> 
>   "The double argument is converted in the style f or e (or in
>    style E in the case of a G conversion specifier), with the
>    precision specifying the number of digits.  If the precision
>    is zero, it is taken as 1.  The style used depends on the
>    value converted..."
> 
> It then goes on to describe when each format is used, and that
> trailing zeroes, etc. should be removed.  What do you see that I
> don't?

I really would like a definitive answer (or at least some kind of
consensus) on this issue.  I appreciate all of the input on what
significant digits should mean in the context of %g, but now that
I'm pretty sure we handle that correctly, the default precision
issue is more important...
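For what it's worth, here is a minimal sketch of the behavior in
question, written on the assumption that an omitted precision for %g
is taken as 6 significant digits (the same default as %e and %f).
The expected output in the comments reflects that assumption, not a
settled reading of the standard:

    #include <stdio.h>

    int main(void)
    {
        double x = 3.14159265358979;

        /* The question: with no explicit precision, should %g behave
         * as if ".6" had been given (i.e. 6 significant digits)?     */
        printf("%g\n", x);      /* 3.14159  -- if the default is 6    */
        printf("%.6g\n", x);    /* 3.14159  -- explicit precision 6   */
        printf("%.0g\n", x);    /* 3        -- precision 0 taken as 1 */

        return 0;
    }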

Thanks again,

- Brad

-----------------------------------------------------------------
Brad Daniels			|  Digital Equipment Corp. almost
DEC Software Devo		|  definitely wouldn't approve of
"VAX C RTL Whipping Boy"	|  anything I say here...
