Full paper (pdf): Standardizing and Benchmarking of Model-Derived DNI-Products – Phase 1
Authors: Richard Meyer, Chris Gueymard, Pierre Ineichen
Summary:
Modeled direct normal irradiance (DNI) can be derived either from satellite data or from numerical weather models. Such modeled datasets are available at continental scale and provide continuous long-term time series, but their quality is known to fall short over some areas when compared against high-quality ground measurements.
The uncertainty in DNI may locally be so high that CSP projects cannot be financed. The CSP industry would clearly benefit from a comprehensive, large-scale benchmarking of the existing modeled DNI datasets. This would help CSP developers select the most appropriate dataset for a given region, and would also provide due-diligence or financial analysts with the desired information on the expected accuracy of the data. This contribution investigates how the benchmarking study should be conducted and evaluates the difficulties that may be encountered. A large set of ground stations with anticipated good- to high-quality measurements has been identified, most of which report public-domain data. Various criteria that such measured datasets must fulfill to be usable in the benchmarking study are discussed. Automated quality-control methods are also proposed for quality assessment purposes and are described in some detail. The most important statistics to be used for the benchmarking process are discussed, with an emphasis on those that appear to be of particular importance to the CSP industry. A full-scale benchmarking study should now follow, assuming proper funding is secured.
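To make the benchmarking statistics concrete, the following minimal Python sketch shows metrics of the kind commonly used when comparing modeled DNI against ground measurements. The specific choice of MBE, RMSE, and the Kolmogorov–Smirnov test Integral (KSI) is an assumption standing in for the statistics the paper discusses, and the function name benchmark_stats is hypothetical.

    import numpy as np

    def benchmark_stats(dni_model, dni_meas):
        """Compare modeled against measured DNI (both 1-D arrays in W/m^2)."""
        diff = dni_model - dni_meas
        mean_meas = dni_meas.mean()
        mbe = diff.mean()                   # Mean Bias Error
        rmse = np.sqrt(np.mean(diff ** 2))  # Root Mean Square Error
        # KSI: integrated absolute distance between the two empirical CDFs.
        # It is sensitive to the shape of the DNI distribution; in the
        # literature it is often further normalized by a critical area,
        # which is omitted in this sketch.
        grid = np.linspace(0.0, max(dni_model.max(), dni_meas.max()), 200)
        cdf_model = np.searchsorted(np.sort(dni_model), grid) / dni_model.size
        cdf_meas = np.searchsorted(np.sort(dni_meas), grid) / dni_meas.size
        ksi = np.trapz(np.abs(cdf_model - cdf_meas), grid)
        return {"MBE [%]": 100 * mbe / mean_meas,
                "RMSE [%]": 100 * rmse / mean_meas,
                "KSI [W/m^2]": ksi}

Distribution-oriented metrics such as KSI are arguably of particular interest to CSP, because plant yield depends nonlinearly on the frequency distribution of DNI, not only on its long-term mean.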
For such a study, it is proposed to use as many high-quality measurement stations as possible as references. The search for such stations should preferably focus on latitudes below 45°. This covers the regions where most CSP capacity is going to be deployed, but which were not a focus of the few satellite solar radiation validation studies published so far. Ideally, the measurement data used for validation should not be in the public domain, to guarantee that these datasets have not been available to model developers to train or adapt their models. Only this ideal scenario would fulfill the conditions for truly independent results; otherwise the validation process would suffer from “data incest”. The rapid development of solar projects over many regions has sparked a proliferation of private weather/radiometric stations, and thus the existence of many “secret” datasets that could prove extremely useful for the proposed task. The great difficulty, however, is to find ways to motivate the data owners to make their exclusive data available for such a scientific study.
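As a purely hypothetical illustration of these selection criteria, the sketch below filters a candidate list by latitude and data availability; the Station fields, the reading of “latitudes below 45°” as absolute latitude in both hemispheres, and the minimum record length are all assumptions, not requirements stated in the paper.

    from dataclasses import dataclass

    @dataclass
    class Station:
        name: str
        latitude_deg: float    # positive north, negative south
        years_of_record: float
        is_public: bool        # True if the data are in the public domain

    def select_reference_stations(stations, min_years=1.0):
        """Prefer stations in the CSP belt (|lat| < 45 deg) with private data."""
        candidates = [s for s in stations
                      if abs(s.latitude_deg) < 45.0
                      and s.years_of_record >= min_years]
        # Non-public datasets first: they cannot have been used to tune the
        # models, so they avoid the "data incest" problem described above.
        return sorted(candidates, key=lambda s: s.is_public)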
Another challenge is the anticipated arduous effort of thoroughly quality-checking the measured data series to be used as references. In the authors’ experience, this process (conducted a posteriori, without sufficient knowledge of the measurement conditions, etc.) is overwhelmingly the most critical issue. This report discusses several procedures to screen data automatically, but experience shows that manual and visual screening by experienced human analysts is absolutely necessary before any measured dataset can be considered a valid reference for such a validation. In particular, special attention must be given to the possible degradation of instruments through lack of calibration, maintenance, or regular cleaning. Soiling, for instance, can rapidly render data measured with the best instruments of little value if it is not promptly detected and corrected by the radiometric station supervisor. All the issues just mentioned induce systematic biases in the reference data, and hence may lead to wrong conclusions about the quality and accuracy of model-derived data.
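The screening procedures themselves are detailed in the full report; the sketch below merely illustrates the flavor of such automatic tests, loosely following the widely used BSRN recommendations (a physically-possible-limits test and a three-component closure test). The exact thresholds are illustrative assumptions, and, as stressed above, no automatic test replaces manual and visual screening by an experienced analyst.

    import numpy as np

    SOLAR_CONSTANT = 1361.0  # W/m^2, approximate extraterrestrial normal irradiance

    def qc_flags(ghi, dhi, dni, cos_sza, sun_earth_factor=1.0):
        """Flag suspicious irradiance records (all inputs 1-D arrays, W/m^2)."""
        e0 = SOLAR_CONSTANT * sun_earth_factor  # extraterrestrial DNI
        # 1) Physically possible limits: DNI can never exceed extraterrestrial DNI.
        bad_limits = (dni < -4.0) | (dni > e0)
        # 2) Closure test: GHI should match DHI + DNI*cos(SZA) within ~8 %
        #    (tolerance assumed here) when the sun is above ~5 deg elevation
        #    and the component sum is large enough to be meaningful.
        daytime = cos_sza > np.cos(np.radians(85.0))
        component_sum = dhi + dni * cos_sza
        with np.errstate(divide="ignore", invalid="ignore"):
            closure_err = np.abs(ghi / component_sum - 1.0)
        bad_closure = daytime & (component_sum > 50.0) & (closure_err > 0.08)
        return bad_limits, bad_closure

Instrument degradation and soiling, by contrast, tend to appear as a slow drift of the closure residual or of the clear-sky index over days to weeks, which is precisely the kind of pattern that fixed automatic thresholds miss and an experienced analyst spots visually.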