Well, first I'll briefly try to answer the Ferret questions. To
handle the variable naming, you can rename the variable in the input
dataset to some dummy name, then define the variable to write using
the original name.
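A minimal, untested sketch of that renaming trick for a single variable (the filenames and the variable name "sst" here are placeholders, not from your dataset):

```
! Sketch only: rename the file variable to a dummy name, then
! define a new variable under the original name and write it out.
yes? use input_file.nc
yes? set variable/name=sst_in sst     ! free up the name "sst"
yes? let sst = sst_in                 ! replace with your real definition
yes? save/file=output.nc sst
```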
yes? use input_file.nc

You could use a REPEAT loop to do the SET VARIABLE/NAME for all the variables in the input file, similar to the loop in this message:
http://www.pmel.noaa.gov/maillists/tmap/ferret_users/fu_2007/msg00621.html
(where there should be a definition "let nvars = ..nvars")
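An untested sketch of what such a loop might look like, using the ..nvars and ..varnames pseudo-variables to walk the file's variable list (the "_in" suffix is just an illustrative choice of dummy name):

```
! Sketch only: rename every variable in the input file by
! appending "_in", freeing the original names for new definitions.
yes? use input_file.nc
yes? let nvars = ..nvars
yes? repeat/range=1:`nvars`/name=iv (define symbol vname = `..varnames[i=($iv)]`; set variable/name=($vname)_in ($vname))
```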
For managing the sequence of events to get all the data written, it usually works best to write one or more small scripts that do the actual operations, passing filenames, indices, variable names, or whatever else into the scripts as arguments.
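For instance, a small helper script could take the filename and variable name as arguments ($1, $2, and the script and file names below are all hypothetical, untested placeholders):

```
! save_one.jnl -- sketch of a helper script
!   $1 = input filename,  $2 = variable name
use $1
set variable/name=$2_in $2
let $2 = $2_in                 ! replace with the real definition
save/append/file=output.nc $2
```

which one would then call from the command line or a driver script, e.g. "go save_one.jnl input_file.nc sst".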
But I'm not sure that writing all the data into one giant file is the right way to manage a collection of data like this. What happens when more data needs to be added, or removed? Do you re-write the whole thing? That is not flexible, and managing and using very large files brings its own problems.
One alternative that comes to my mind is ERDDAP, http://coastwatch.pfeg.noaa.gov/erddap. This is a product of one of our sister NOAA labs, and it lets you present a set of data files as a single dataset. It sounds as if your data is a collection of time series, so you might look at the "tabledap" protocol. I have not installed ERDDAP myself, but my understanding is that it is not difficult to install or to set up with data.

ERDDAP lets one access data from a browser page, look at tables, download data subsets, and it has an interface that draws graphics. Specially constructed URLs can also be used to access an ERDDAP dataset, specifying a subset of variables and constraints, so that one can choose some of the variables over, say, a range of latitude and longitude, or at a specified set of station names, and then save the result in a file. Please have a look and see what you think.
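For reference, a tabledap request is just a URL; this is a sketch of the general pattern, where the server, dataset ID, variable names, and constraints are all placeholders rather than a real dataset:

```
http://your.server/erddap/tabledap/yourDatasetID.csv?station,time,temperature&time>=2013-01-01T00:00:00Z&station=%22A1%22
```

Changing the file extension (.csv, .nc, .json, ...) selects the format of the returned subset.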
On 12/6/2013 12:00 PM, Akshay Hegde wrote: