LPJmL4 is a process-based model that simulates climate and land-use change impacts on the terrestrial biosphere, the water and carbon cycles, and agricultural production. The LPJmL4 model combines plant physiological relations, generalized empirically established functions, and plant trait parameters. The model incorporates dynamic land use at the global scale and can also simulate the production of woody and herbaceous short-rotation bio-energy plantations. Grid cells may contain one or several types of natural or agricultural vegetation. A comprehensive description of the model is given by Schaphoff et al. (2018, http://doi.org/10.5194/gmd-2017-145). We here present an extended version of the LPJmL4 model code described and used in the publications in GMD: LPJmL4 - a dynamic global vegetation model with managed land: Part I – Model description and Part II – Model evaluation (Schaphoff et al. 2018, http://doi.org/10.5194/gmd-2017-145 and http://doi.org/10.5194/gmd-2017-146). Additional features of this version, including agricultural trees as a new cultivation type in LPJmL4, are described and used in Jans et al. (2020, HESS). The model code of LPJmL4 is written in C and can be run in parallel mode using MPI. Makefiles are provided for different platforms. Further information on how to run LPJmL4 is given in the INSTALL file. In addition to the publication, HTML documentation and man pages are provided.

The model data presented here represent standard LPJmL4 model results for the land surface described in Schaphoff et al. (2018, Part I). In addition, these results include agricultural trees (olives, non-citrus orchards, and cotton) implemented as a new cultivation type in LPJmL4. Standard results are evaluated in Schaphoff et al. (2018, Part II); results for cotton as a newly implemented agricultural tree are evaluated in Jans et al. (2020, HESSD). The data collection includes key output variables obtained with the model setup described by Jans et al. (2020, HESS). Overall, the data sets result from 40 different simulations, combining 5 different GCMs (GFDL, HadGEM, IPSL, MIROC, NorESM) with 4 different RCPs (2p6, 4p5, 6p0, 8p5), each without and with CO2 fertilization. The data cover the entire globe with a spatial resolution of 0.5° and a temporal coverage from 1901-2011 on an annual basis for crop yields, absorbed photosynthetically active radiation, and the water fluxes (irrigation, transpiration, evaporation, interception, blue and green evapotranspiration). Crop yields and water fluxes are given for each crop functional type (CFT). Monthly data are provided for one carbon flux (net primary production) and for the water fluxes transpiration, evaporation, interception, and runoff. The data are provided in one binary file for each variable and simulation. An overview of all variables and information on how the data are stored within the binary files is given in the file inventory.
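Since each variable and simulation comes as one flat binary file, a minimal reading sketch may be helpful. The following Python snippet is only an illustration under assumed conventions (32-bit floats, no file header, one record of ncell values per year, 67420 land cells on the 0.5° grid); the authoritative layout is documented in the file inventory mentioned above.

```python
# Hedged sketch for reading one annual LPJmL4 binary output file.
# Assumptions (verify against the inventory file): 32-bit floats,
# no header, one record of `ncell` values per year, years 1901-2011.
import numpy as np

ncell = 67420             # assumed number of 0.5 deg land grid cells
nyears = 2011 - 1901 + 1  # 111 annual time steps

values = np.fromfile("example_variable.bin", dtype=np.float32)
values = values.reshape(nyears, ncell)   # one row per year
print(values.shape, values[0, :5])
```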
Depth profiles of stable water isotopes in the soil provide important information on flow and transport processes in the subsurface. We sampled depth profiles of stable water isotopes (2H and 18O) in the pore waters on two occasions at 46 sites in the Attert catchment, Luxembourg; the sites are located partly in mixed deciduous forest and partly on grassland. These sites correspond to the sensor cluster sites of the DFG research unit CAOS. Sampling took place once between February 2012 and October 2013 and once in June 2014. Sampling procedure: At each study site, we took 1-3 soil cores of 8 cm diameter in close proximity, within a radius of 5 m from the soil moisture sensor profiles, using a percussion drill (Atlas Copco Cobra, Stockholm, Sweden). We drilled as deep as possible, divided the extracted soil cores into subsamples of 5 to 10 cm length, and sealed the material in airtight bags (Weber Packaging, Güglingen, Germany). The soil sample depths were corrected for compaction during the drilling process and are provided as the mean depth of the 5 or 10 cm soil core subsamples. For isotope analyses of the pore water, we used the direct equilibration method (Wassenaar et al., 2008). Analyses were carried out at the Chair of Hydrology, University of Freiburg. We provide detailed information about the laboratory analyses in Sprenger et al. (2015) and Sprenger et al. (2016) and in the data description associated with the data.
The data set contains hydrological, meteorological, and gravity time series collected at the Argentine-German Geodetic Observatory (AGGO) in La Plata, Argentina. The hydrological series include soil moisture, temperature, electric conductivity, soil parameters, and groundwater variation. The meteorological time series comprise air temperature, humidity, pressure, wind speed, solar short- and long-wave radiation, and precipitation. The observed hydrometeorological parameters are complemented by modelled values of evapotranspiration and of water content variation in the zone between the deepest soil moisture sensor and the groundwater level. Gravity products include large-scale hydrological, oceanic, and atmospheric effects. These gravity effects are furthermore extended by local hydrological effects and gravity residuals suitable for comparison and evaluation of the model performance. Directly observed values are provided as Level 1 products, along with pre-processed series corrected for known issues (Level 2). Level 3 products are model outputs derived from Level 2 data. The maximal temporal coverage of the data set ranges from May 2016 to November 2018, with some exceptions for sensors and models set up in May 2017. The data set is organized in a database structure suitable for implementation in a relational database management system. All definitions and data tables are provided in separate text files, allowing for traditional use without a database installation. Software related to the data acquisition, processing, and modelling can be found in a separate publication describing the scripts applied to the data set presented here. The software publication is available at https://doi.org/10.5880/GFZ.5.4.2018.002 (Mikolaj, 2018).
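Because the data set is shipped as plain text tables mirroring a relational schema, it can also be loaded without a dedicated database server. The sketch below is only an illustration: the file name, separator, and column names are placeholders, and the actual table definitions are given in the accompanying text files.

```python
# Hedged sketch: load one AGGO data table into a lightweight SQLite database.
# File name, separator, and column names are placeholders, not the real schema.
import sqlite3
import pandas as pd

df = pd.read_csv("soil_moisture_level2.txt", sep="\t", parse_dates=["time"])

with sqlite3.connect("aggo.db") as con:
    df.to_sql("soil_moisture_level2", con, if_exists="replace", index=False)
    preview = pd.read_sql("SELECT * FROM soil_moisture_level2 LIMIT 5", con)

print(preview)
```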
The data set contains the major results of the article “Improving information extraction from model data using sensitivity-weighted performance criteria” by Guse et al. (2020). The article analyses how a sensitivity-weighted performance criterion improves parameter identifiability and model performance. More details are given in the article. The files of this data set are described as follows.

Parameter sampling: FAST parameter sampling.xlsx: To estimate the sensitivity, the Fourier Amplitude Sensitivity Test (FAST) was used (R routine FAST, Reusser, 2013). Each column shows the values of one model parameter of the SWAT model (Arnold et al., 1998). All parameters are explained in detail in Neitsch et al. (2011). The FAST parameter sampling defines the number of model runs; for twelve model parameters, as in this case, 579 model runs are required. The same parameter sets were used for all catchments.

Daily sensitivity time series: Sensitivity_2000_2005.xlsx: Daily time series of parameter sensitivity for the period 2000-2005 for three catchments in Germany (Treene, Saale, Kinzig). Each column shows the sensitivity of one parameter of the SWAT model. The methodological approach of the temporal dynamics of parameter sensitivity (TEDPAS) was developed by Reusser et al. (2011) and first applied to the SWAT model in Guse et al. (2014). As sensitivity index, the first-order partial variance is used, i.e. the ratio of the partial variance of one parameter to the total variance. The sensitivity therefore always lies between 0 and 1, and the sum within one row, i.e. the sensitivity of all model parameters on one day, cannot exceed 1.

Parameter sampling: LH parameter sampling.xlsx: To calculate parameter identifiability, Latin hypercube sampling was used to generate 2000 parameter sets (R package FME, Soetaert and Petzoldt, 2010). Each column shows the values of one model parameter of the SWAT model (Arnold et al., 1998). All parameters are explained in detail in Neitsch et al. (2011). The same parameter sets were used for all catchments.

Performance criteria with and without sensitivity weights: RSR_RSRw_cal.xlsx:
• Calculation of the RSR once and of RSR_w separately for each model parameter.
• RSR: typical RSR (RMSE divided by the standard deviation of the observations).
• RSR_w: RSR with weights according to the daily sensitivity time series. The calculation was carried out for all three catchments.
• The column RSR shows the results of the RSR for the different model runs.
• The columns RSR[_parameter name] show the calculation of RSR_w for the specific model parameter.
• RSR_w weights each day based on the daily parameter sensitivity (as shown in Sensitivity_2000_2005.xlsx); days with a higher parameter sensitivity are thus weighted more strongly (an illustrative sketch of this weighting is given below, after the file descriptions).
In the methodological approach, the best 25% of the model runs (the best 500 model runs) were selected and the model parameters were constrained to the most appropriate parameter values (see the methodological description in the article).

Performance criteria for the three catchments: GOFrun_[catchment name]_RSR.xlsx: These three tables are organised identically and are available for the three catchments in Germany (Treene, Saale, Kinzig). Using the different parameter ranges for the catchments as defined in the previous steps, 2000 model simulations were carried out, again based on Latin hypercube sampling (R package FME, Soetaert and Petzoldt, 2010). The three tables show the results of the 2000 model simulations for ten different performance criteria for the two methodological approaches (RSR and swRSR) and two periods (calibration: 2000-2005 and validation: 2006-2010).

Performance criteria for the three catchments: GOFrun_[catchment name]_MAE.xlsx: The three tables show the results of the 2000 model simulations for ten different performance criteria for the two methodological approaches (MAE and swMAE) and two periods (calibration: 2000-2005 and validation: 2006-2010).
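To make the idea of sensitivity-weighted performance criteria concrete, the sketch below contrasts a plain RSR with a sensitivity-weighted variant for one parameter. It is a minimal illustration based on the description above, not the published implementation by Guse et al. (2020): the normalisation of the daily weights and the weighted standard deviation are assumptions.

```python
# Hedged sketch of a sensitivity-weighted RSR for one model parameter.
# obs, sim: daily observed and simulated discharge; sens: daily first-order
# partial variance of the parameter (values between 0 and 1).
import numpy as np

def rsr(obs, sim):
    """Classic RSR: RMSE divided by the standard deviation of the observations."""
    return np.sqrt(np.mean((obs - sim) ** 2)) / np.std(obs)

def rsr_weighted(obs, sim, sens):
    """RSR with daily weights from the parameter sensitivity (assumed scheme)."""
    w = sens / sens.sum()                              # normalise daily weights
    rmse_w = np.sqrt(np.sum(w * (obs - sim) ** 2))     # weighted RMSE
    mean_w = np.average(obs, weights=w)
    std_w = np.sqrt(np.sum(w * (obs - mean_w) ** 2))   # weighted std of observations
    return rmse_w / std_w
```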
This software publication describes the data acquisition, processing, and modelling of hydrological, meteorological, and gravity time series prepared for the Argentine-German Geodetic Observatory (AGGO) in La Plata, Argentina. The corresponding output data set is available at http://doi.org/10.5880/GFZ.5.4.2018.001 (Mikolaj et al., 2018). Processed hydrological series include soil moisture, temperature, electric conductivity, and groundwater variation. The processed meteorological time series comprise air temperature, humidity, pressure, wind speed, solar short- and long-wave radiation, and precipitation. Modelling scripts cover evapotranspiration, combined precipitation, and water content variation in the zone between the deepest soil moisture sensor and the groundwater level. In addition, large-scale hydrological, oceanic, and atmospheric effects are modelled along with the local hydrological effects. To allow for a comparison of the model outputs with observations, a processing script for gravity residuals is provided as well.
This data publication contains:
• the source codes for the 1-D finite-difference glaciofluvial model (directory "model_code"),
• the model results presented in Banerjee and Scherler (2025) (directory "model_results"),
• and codes to produce the plots in Banerjee and Scherler (2025).
To compile and run the source codes and generate the output files presented in Banerjee and Scherler (2025), use the commands given in "run_commands.txt". The output files from these runs are provided in the directory "model_results". To reproduce Figures 2 and 3 in the main text of Banerjee and Scherler (2025), and Figures S1, S2, and S3 in the supplementary material, use the commands given in "plot_commands.txt". This requires the AWK and GNUPLOT command-line tools. Figure S2 is based on a Matlab script.
The netCDF data stored here represent crop production simulations from the LPJmL biosphere model underlying the different steps of the U-turn portrayed in the main paper by Gerten et al. The LPJmL data cover the entire globe with a spatial resolution of 0.5° for the baseline period as well as for different scenarios reflecting, on the one hand, the studied ways to restrict crop production in order to maintain planetary boundaries and, on the other hand, the various opportunities to increase food supply within the boundaries (see paper, specifically Figs. 1 & 2, Table 2). The stored variable is crop production (fresh matter) multiplied by the fractional coverage of the different crop functional types, per 0.5° grid cell. The data are provided in one netCDF file for each scenario. An overview of the scenarios assigned to the folder names is given in the file inventory. The data support the study: Gerten, D., Heck, V., Jägermeyr, J., Bodirsky, B.L., Fetzer, I., Jalava, M., Kummu, M., Lucht, W., Rockström, J., Schaphoff, S., Schellnhuber, H.J.: Feeding ten billion people is possible within four terrestrial planetary boundaries. Nature Sustainability (2020).
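As the data come as one netCDF file per scenario, any standard netCDF-aware tool is sufficient for a first look. The snippet below is a minimal sketch: the file name, variable name, and dimension names are placeholders, and the actual naming of scenarios and variables is listed in the file inventory.

```python
# Hedged sketch: inspect one scenario file and aggregate crop production.
# "scenario_example.nc", "crop_production", and the lat/lon dimension names
# are placeholder assumptions, not the actual names in the data set.
import xarray as xr

ds = xr.open_dataset("scenario_example.nc")
print(ds)                                   # dimensions, coordinates, variables

production = ds["crop_production"]          # assumed variable name
global_total = production.sum(dim=("lat", "lon"))
print(global_total.values)
```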
This publication contains the supplementary data set to Mikolaj et al., "Resolving geophysical signals by terrestrial gravimetry: a time domain assessment of the correction-induced uncertainty" (2019, JGR-Solid Earth). The aim of the article is to estimate the uncertainty of terrestrial gravity corrections applied to resolve small-scale gravity effects. The uncertainty of the gravity corrections is assessed using various models of the tidal effect, large-scale hydrology, non-tidal ocean loading, and the atmosphere. Taken into account are widely recognized models that have global spatial coverage and sufficient temporal resolution and coverage, and that are publicly available for research purposes. The uncertainty is expressed in terms of the root-mean-square error and the mean-absolute error of the deviations between all available models. The data set comprises models for 11 sites worldwide. The processing scripts are provided along with an explanatory file with all instructions for reproducing the results and for applying the uncertainty analysis to an arbitrary location. Please consult the readme file for further details on the data.
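For orientation, the two spread measures named above can be written down in a few lines. This is only a generic sketch of a root-mean-square and a mean-absolute error of the deviations between two candidate correction time series, not the published processing scripts.

```python
# Hedged sketch: spread between two candidate correction models at one site.
import numpy as np

def rms_error(model_a, model_b):
    """Root-mean-square error of the deviations between two model time series."""
    return np.sqrt(np.mean((model_a - model_b) ** 2))

def mean_abs_error(model_a, model_b):
    """Mean-absolute error of the deviations between two model time series."""
    return np.mean(np.abs(model_a - model_b))
```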
In Irrgang et al. (2020), we trained a convolutional neural network to perform a so-called downscaling task. This downscaling aims to recover the fine-structured water storage distribution on the South American continent from coarse-resolution space-borne gravimetry observations. Here, we share the data sets that were used for training the neural network, namely (1) monthly pairs of gridded terrestrial water storage anomalies (TWSA) of the South American continent and (2) surface water storage anomalies (SWSA) in the Amazonas region for the time period 2003-2019. TWSAs were used as target (output) values of the neural network and were derived from the Land Surface Discharge Model (LSDM, Dill, 2008). The corresponding input values were calculated by spatially smoothing the TWSA fields with a 600 km Gaussian filter. After training the neural network over the period 2003 to 2018, its performance was tested and compared to LSDM for the subsequent year 2019.
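The coarse-resolution network inputs were obtained by spatially smoothing the TWSA fields with a 600 km Gaussian filter. The snippet below sketches such a smoothing step; the grid spacing, the conversion of the 600 km radius to a filter sigma, and the array shape are assumptions for illustration and may differ from the actual preprocessing in Irrgang et al. (2020).

```python
# Hedged sketch: Gaussian smoothing of a gridded TWSA field.
# The mapping of the 600 km filter radius to a sigma in grid cells is assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

cell_km = 55.0                     # assumed ~0.5 deg cell size at the equator
sigma_cells = 600.0 / cell_km      # assumed conversion of radius to sigma

twsa = np.zeros((180, 140))        # placeholder TWSA grid (lat x lon)
twsa_coarse = gaussian_filter(twsa, sigma=sigma_cells)
```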
The dataset (Mielke et al., 2023) consists of daily ASCII files, each containing the spherical harmonic coefficients (SHCs) for atmosphere, hydrology, and ocean bottom pressure. The files that include the AH+O coefficients are provided in the AOD format of the GFZ with the naming convention TYPE_YYYY-MM-DD_X_01.asc and contain header information (30 lines) and four columns with degree (n), order (m), and the Stokes coefficients cnm and snm. The coefficients in each file are split into different subsets, each corresponding to a subdaily time step (i.e., a daily file with 3-hour temporal resolution is split into 8 subsets). The entire dataset is organized following the folder structure /TYPE/NEST/coeff_aodFormat_XXX/. We provide regionally refined (nested), coarse-grained (nested, but with a lower-resolution version of the regional model), or global model solutions of SHCs for each data type. Some datasets are available in different spectral resolutions, with degree/order up to 179, 180, or 360. In this release all AH+O coefficients have a temporal resolution of 3 hours, except the non-regionally-refined atmospheric solution, which is given 6-hourly. Currently, the whole data set is provided for June 2007, and some components for the whole year 2007. Additional months and years will be added with newer versions of the dataset or can be provided by the authors on request. For the atmospheric and hydrological background models, regional models with high spatial and temporal resolution are nested into global models. For this, global and regional models must be resampled and interpolated onto the same regular grid with equivalent time epochs. For the nesting, the global model is interpolated onto the same grid resolution as the regional model. Grid points of the global model are then replaced with the data of the regional model over the CORDEX-EU region. A Gaussian filter is applied in a transition zone with a width of 7.5° to reduce edge effects (Gibbs effect) between the two combined models.
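Given the stated file layout (30 header lines followed by four columns), a daily coefficient file can be read with a short script. The sketch below is an assumption-laden illustration: it skips the header, keeps only four-column numeric rows, and does not separate the subdaily subsets into individual epochs.

```python
# Hedged sketch: read degree, order, and Stokes coefficients from one AOD-format file.
# Subdaily subsets are not separated; any non-numeric separator lines are skipped.
import numpy as np

with open("TYPE_2007-06-01_X_01.asc") as f:
    lines = f.readlines()[30:]          # skip the 30 header lines

rows = []
for line in lines:
    parts = line.split()
    if len(parts) == 4 and parts[0].isdigit():   # keep only n, m, cnm, snm rows
        rows.append(parts)

n   = np.array([int(r[0]) for r in rows])
m   = np.array([int(r[1]) for r in rows])
cnm = np.array([float(r[2]) for r in rows])
snm = np.array([float(r[3]) for r in rows])
print(n.size, "coefficient rows read")
```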
| Organisation | Count |
|---|---|
| Science | 19 |
| Type | Count |
|---|---|
| unknown | 19 |
| License | Count |
|---|---|
| Open | 19 |
| Language | Count |
|---|---|
| English | 19 |
| Resource type | Count |
|---|---|
| None | 19 |
| Topic | Count |
|---|---|
| Soil | 10 |
| Living organisms and habitats | 12 |
| Air | 11 |
| Humans and the environment | 18 |
| Water | 15 |
| Other | 19 |