
Re: [las_users] Error on large/high volume data sets - "remote server was unable to deliver data"



Hi Kacie,
No, LAS will never load anywhere near a GB of data for a plot. LAS asks the THREDDS server where the data is located for a decimated subset of the data, covering the requested time span and region in X, Y, and/or Z. On LAS plots there'll be an annotation "subsampled a in X, b in Y", indicating the decimation that was done. For example, instead of loading the 32768 by 16384 grid for an XY plot, it'll ask for only something like 500x300 points, by requesting every 64th point in X and every 56th point in Y. If you zoomed in to a smaller XY region, you'd get more resolution. For a Hovmöller plot a similar thing is done, asking for every 10th timestep and every 40th point in X, for instance.
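
In rough Python, the stride arithmetic works something like this (a sketch for illustration only, not LAS's actual code; the grid and target sizes are just the example numbers above):

    import math

    def choose_stride(n_points, n_target):
        # Subsample so roughly n_target points remain along this axis.
        return max(1, math.ceil(n_points / n_target))

    nx, ny = 32768, 16384         # the full MUR XY grid
    sx = choose_stride(nx, 512)   # -> 64: every 64th point in X
    sy = choose_stride(ny, 300)   # -> 55: about every 56th point in Y
    # The strided request is ~512 x ~298 points instead of 32768 x 16384.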

When it says "remote server was unable to deliver data", it means exactly that: the data request sent to the THREDDS server returned an error. It's not the size of the request but some other interaction of the request with the data on that server.

Possibly the way the data is stored, or the aggregation that is used, is making it difficult for the server to return data. Imagine that all the data for this dataset is stored in separate files, one file per timestep, and the dataset is an aggregation of all of those files. To get a time series or a Hovmöller plot, the data server software opens each file, reads the piece of it that it needs, and then goes on to the next file. If the individual files are compressed, then decompression is also being done. The THREDDS server software is meant to be able to do this, but one can imagine the complexities.
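
In rough Python, that access pattern looks something like this (purely illustrative -- THREDDS itself is Java, and the file pattern and dimension layout here are hypothetical):

    import glob
    import numpy as np
    from netCDF4 import Dataset

    def timeseries_at(j, i, pattern="sst_*.nc", var="analysed_sst"):
        # Read one (y, x) point out of every per-timestep file.
        values = []
        for path in sorted(glob.glob(pattern)):
            with Dataset(path) as nc:   # open (and maybe decompress) ...
                values.append(nc.variables[var][0, j, i])  # ... for one value
        return np.array(values)

    # Thousands of timesteps means thousands of file opens and reads
    # just to draw one line.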

We'd be happy to look at the errors you're seeing; click on the "Advanced users may see more technical information" link and let us know what it says.  That link returns a long debug listing, but the error messages appear at the top of it, so the first couple dozen lines have the information.

ansley

On 4/25/2013 12:07 PM, Shelton, Kacie E (398C) wrote:

Hi.

To confirm that I understand:

High-volume data (either high enough resolution or a long enough time series) will produce an error, and the only thing to do is to post a caveat asking users not to select the entire time series at once.  Is this correct?

We can produce a Hovmöller plot for ~1.5 yrs of data for MUR, and about 8 yrs for AVHRR_OI.

Is there a known limit on how much data can be handled in one request?  I.e., no more than X GB per request, or no more than Y granules in one request?

Thank you,

Kacie

From: Ansley Manke [mailto:ansley.b.manke@xxxxxxxx]
Sent: Friday, April 19, 2013 5:49 PM
To: Shelton, Kacie E (398C)
Cc: las users [las_users@xxxxxxxx]
Subject: Re: [las_users] Error on large/high volume data sets - "remote server was unable to deliver data"

Hi Kacie,
As a test, try making a short time selection and a smaller longitude range, to see if you can get a result.

In LAS we handle large grids like this by asking for a decimated version when the request is really large. A several-hundred by several-hundred pixel image can't show that fine a grid anyway, so we make a smaller request, striding through the data so it's a more reasonable size to be shipping around.

I have put the MUR dataset into a test server running LAS v8, and I can get an XT plot for a year's worth of it. See the attached screenshot. However, for the entire time range, the data returned from the NetCDF library reading the dataset is all zeros.

Likewise, I can make time series plots that are 6 or 8 months long, but requests for more data result in error messages from the NetCDF library reading the data.

Going back to the HTML view of the dataset, I can get coordinates successfully. Ask for lat 8000:1:8000, lon 32000:1:32000, time 0:10:3791 and the coordinates are returned correctly.  Try:
http://thredds.jpl.nasa.gov/thredds/dodsC/GHRSST_JPL-L4UHfnd-GLOB-MUR_TIMEAGG/aggregate__ghrsst_JPL-L4UHfnd-GLOB-MUR.ncml.ascii?lat[8000:1:8000],lon[32000:1:32000],time[0:10:3791],analysed_sst[0:10:3791]

But try to get analysed_sst at those same index ranges, and we get an error.  That's this constrained URL:
http://thredds.jpl.nasa.gov/thredds/dodsC/GHRSST_JPL-L4UHfnd-GLOB-MUR_TIMEAGG/aggregate__ghrsst_JPL-L4UHfnd-GLOB-MUR.ncml.ascii?analysed_sst[0:10:3791][8000:1:8000][32000:1:32000]
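
The same pair of requests can be reproduced outside a browser with the netCDF4-python library (a sketch, assuming a netCDF build with OPeNDAP support enabled; the index ranges match the constrained URLs above, remembering that Python slice stops are exclusive):

    from netCDF4 import Dataset

    url = ("http://thredds.jpl.nasa.gov/thredds/dodsC/"
           "GHRSST_JPL-L4UHfnd-GLOB-MUR_TIMEAGG/"
           "aggregate__ghrsst_JPL-L4UHfnd-GLOB-MUR.ncml")
    ds = Dataset(url)

    # The coordinate reads succeed...
    lat = ds.variables["lat"][8000]
    lon = ds.variables["lon"][32000]
    t = ds.variables["time"][0:3792:10]

    # ...but the equivalent read of the data variable errors out,
    # just like the failing constrained URL.
    sst = ds.variables["analysed_sst"][0:3792:10, 8000, 32000]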

The software behind the URL access to the dataset is not the same netCDF library that Ferret, running within LAS, uses to read this OPeNDAP data, but it seems to me this is an indication that both pieces of software are having trouble accessing the dataset along the time dimension.

Ansley

On 4/19/2013 1:55 PM, Shelton, Kacie E (398C) wrote:

Hi.

We are getting a near-instant error when requesting Hovmöller plots of the MUR and AVHRR_OI data sets.  (See included images.)

"A remote server was unable to deliver the data LAS needs to make your product."

The links are good; MUR links to here: http://thredds.jpl.nasa.gov/thredds/dodsC/GHRSST_JPL-L4UHfnd-GLOB-MUR_TIMEAGG/aggregate__ghrsst_JPL-L4UHfnd-GLOB-MUR.ncml.html

Likewise the link for AVHRR_OI is good. 

We suspect this error has something to do with the volume of requested data; MUR is high resolution (~1km global), and AVHRR_OI has 30 yrs of global data. 

We do not get an error on loading any one particular granule, nor on a time series plot -- the Hovmöller plot is where we see this error.

Any ideas on what is going on, or how we can fix this?

Thank you,
Kacie


