
[ferret_users] pyferret crash while storing large dataset - alternatives to prepare an ensemble?



Hello ferreters,

I am storing a large dataset, looping over its variables. The main line for that is:
save/quiet/file=($out)/NCFORMAT=4/DEFLATE=4/append ($var);\
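
For context, that line sits inside a loop over the 115 variables in the file; schematically it looks like this (a simplified sketch of my script):

  ! schematic sketch of the loop around the save line
  ! (..varnames holds the file's variable names)
  repeat/range=1:115/name=i (\
    define symbol var = `..varnames[i=($i)]`;\
    save/quiet/file=($out)/NCFORMAT=4/DEFLATE=4/append ($var);\
  )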

After the file has grown to 713 MB, the script crashes with the following error message:

/afs/ipp/.cs/python_modules/amd64_generic/pyferret/anaconda/2/4.1.1/bin/ferret: line 17: 40827 Killed                  python ${python_flags} -c "import sys; import pyferret; (errval, errmsg) = pyferret.init(sys.argv[1:], True)" "$@"

The same happens when using DEFLATE=1 instead, as recommended.

This is quite cryptic to me - can anybody help?



*Background* (maybe there's another, more efficient way?)

I have about 80 large netCDF files with two dimensions, time (T) and ensemble (E). The time ranges partially overlap and together cover about 3 years; the E dimension has a different length in each file. I'd like to combine those datasets into an ensemble in a single file, for instance along F.
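
If all members already shared the same grid, I understand (without having been able to use it here) that Ferret's aggregation commands would do this directly, roughly like so:

  ! hedged sketch: ensemble aggregation of datasets whose grids
  ! already match (mine don't yet); an /F variant for forecast
  ! aggregations exists as well, if I remember correctly
  use file_01.nc
  use file_02.nc
  define data/aggregate/e ens = file_01, file_02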

To combine the data as an ensemble, I have to unify the existing dimensions first, so I
1) read in a dataset
2) define the unified axes for T and E
3) loop over all variables in the files (they are the same in all files)
4) write out all variables on the new T and E axes (sketched below).
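
Schematically, steps 1-3 look like this (axis ranges and names are made up; whether the E-axis regridding behaves exactly like this may depend on the Ferret version):

  ! 1) read in a dataset
  use file_01.nc

  ! 2) unified axes (made-up ranges)
  define axis/t="1-JAN-2014":"31-DEC-2016":1/units=days tuni
  define axis/e=1:500:1 euni

  ! 3) put one variable onto the unified axes: linear regridding
  !    on T, index assignment on E; points a member does not
  !    cover become missing values
  let tmp = varname[gt=tuni, ge=euni@asn]

and tmp then goes through the save/append loop shown above for step 4.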

After 713 MB of the new file have been written, which corresponds to 31 of the 115 variables, the above error occurs. There is nothing special about the variable where it happens, no non-standard name or the like.


Another method I have tried: enlarging the E dimension by producing an empty hyperslab and gluing it onto the original dataset via ncrcat. This is impossibly slow.


Is there another way to efficiently fill the files with undefined values so that they are ready to be combined into an ensemble?
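
What I picture is something like the following (untested sketch; it relies on division by zero producing Ferret's missing value, and on the made-up axes from above):

  ! untested sketch: an all-missing variable on the unified grid;
  ! appending this once per variable would pre-size the file, and
  ! with deflation the missing blocks should compress well
  let pad = (1/0) + 0*t[gt=tuni] + 0*_e[ge=euni]
  save/file=padded.nc/ncformat=4/deflate=1 pad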


Thanks in advance!

Hella



