
Re: [ferret_users] missing values and scale factors interfere?



Hi Andrea,
You're right! Thank you for the report. The CF standard, http://www.cgd.ucar.edu/cms/eaton/cf-metadata/CF-1.0.html, which we use for NetCDF files, has this paragraph:

The missing values of a variable with scale_factor and/or add_offset
attributes (see section 8.1
<http://www.cgd.ucar.edu/cms/eaton/cf-metadata/CF-1.0.html#pack>)
are interpreted relative to the variable's external values, i.e.,
the values stored in the netCDF file. Applications that process
variables that have attributes to indicate both a transformation
(via a scale and/or offset) and missing values should first check
that a data value is valid, and then apply the transformation. Note
that values that are identified as missing should not be transformed.
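In other words, the check against missing_value has to be done on the packed values as they come out of the file, and only the values that pass that check get rescaled. Just as a rough sketch of that order of operations (in Python/numpy, with a made-up sample array and the attribute values from your file -- not Ferret code):

    import numpy as np

    # Packed (external) values as stored in the netCDF file (made-up sample)
    raw = np.array([1234, -9999, 250], dtype=np.int16)
    scale_factor = 0.01      # attribute from the file
    add_offset = 0.0         # 0 if the attribute is absent
    missing_value = -9999    # compared against the packed values

    # 1. Check for missing data on the packed values first
    valid = raw != missing_value

    # 2. Transform only the valid values; missing stays missing
    unpacked = np.full(raw.shape, np.nan)
    unpacked[valid] = raw[valid] * scale_factor + add_offset
    # unpacked -> [12.34, nan, 2.5]; the -9999 flag is never turned into -99.99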

Ferret does save the correct value as the missing-value flag (say `temanom,return=bad` does return the -9999 missing value), but when it applies the scale and offset it does not check for the missing-value flag before rescaling. This bug will be fixed in the next release. You're correct that the workaround for the time being is to

set var/bad=-99.99 my_var

but of course that is not a good solution if a variable might contain valid values of -99.99!
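For comparison, here is roughly what that workaround amounts to once the flag has already been rescaled (again just a numpy sketch with made-up numbers): the flag can only be recognized by its value, so a genuine -99.99 in the data would be thrown away along with it.

    import numpy as np

    # After the buggy unpacking, the -9999 flags have become -99.99
    scaled = np.array([12.34, -99.99, -99.99])  # suppose the 2nd is a flag, the 3rd real data

    # set var/bad=-99.99 flags everything equal to -99.99 as missing,
    # so the genuine -99.99 in the last slot is lost as well
    patched = np.where(np.isclose(scaled, -99.99), np.nan, scaled)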

Ansley

Grant Andrea Nicole wrote:

Hi,

I have a NetCDF file of the CRUTem2v global surface temperatures from the Climatic Research Unit (http://www.cru.uea.ac.uk/cru/data/temperature/). In the attributes, 'scale_factor' is set to 0.01 and 'missing_value' is set to -9999, and the raw data in the file does indeed contain many -9999 values. However, when I read it into Ferret, these all become -99.99 after the scale factor is applied, and when I then try to do a shade command the colorbar scale is worthless because so many -99s are plotted. It is easy enough to work around by defining

set var/bad=-99.99 my_var

but I'm wondering whether this is something that could be fixed. The other programs I've used (GrADS and Matlab) are able to interpret this combination correctly. Or should the header be defined differently, though perhaps then other software wouldn't interpret it correctly? I am new to gridded data and not sure whether there is a universal convention about these sorts of things.

Thanks,

Andrea


Andrea Grant
Institut für Atmosphäre und Klima
ETH Zürich
Universitätsstrasse 16
CH-8092 Zürich
Schweiz
Tel: +41 (0)44 632 79 75
andrea.grant@env.ethz.ch


