This dataset comprises gridded precipitation fields, simulated hourly discharge values, and simulated inundation areas and depths in the Ahr catchment, Germany, for the reference scenario of the July 2021 flood and 25 spatial counterfactuals. The precipitation dataset contains the observed gridded E-OBS precipitation field and 25 counterfactuals shifted by one cell. The reference scenario and the spatial counterfactuals are then used as atmospheric forcing for the mesoscale hydrological model mHM, set up and calibrated for the Ahr catchment. The model simulates hourly discharge series at seven gauge locations (Müsch, Kirmutscheid, Niederadenau, Denn, Kreuzberg, Altenahr, Bad Bodendorf), from which the event peak flows and flood event volumes can be derived. These discharge data are used as boundary conditions for the RIM2D hydrodynamic inundation model, which simulates inundation areas and maximum inundation depths along the Ahr valley between Müsch and Sinzig for the reference scenario and the spatial counterfactuals.
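A one-cell spatial shift of a gridded precipitation field can be sketched as below. This is an illustrative implementation that fills cells entering the domain with NaN; how the dataset actually treats the domain boundary when shifting is not specified here and is an assumption:

```python
import numpy as np

def shift_field(field, di, dj, fill=np.nan):
    """Shift a 2D precipitation field by (di, dj) grid cells.

    Cells moved in from outside the domain are set to `fill`.
    Illustrative only: the dataset's boundary treatment may differ.
    """
    ni, nj = field.shape
    shifted = np.full((ni, nj), fill, dtype=float)
    src_i = slice(max(0, -di), min(ni, ni - di))
    src_j = slice(max(0, -dj), min(nj, nj - dj))
    dst_i = slice(max(0, di), min(ni, ni + di))
    dst_j = slice(max(0, dj), min(nj, nj + dj))
    shifted[dst_i, dst_j] = field[src_i, src_j]
    return shifted

precip = np.arange(9.0).reshape(3, 3)
counterfactual = shift_field(precip, 1, 0)  # one cell towards higher row index
```

Shifting the precipitation rather than the catchment keeps the hydrological model setup unchanged while sampling "what if the storm had hit elsewhere" scenarios.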
We construct a precomputed lookup table to predict flood loss to private households based on predictor variables from a Bayesian Network model (BN-FLEMO∆). BN-FLEMO∆ is a probabilistic model that provides multinomial probability distributions of relative building loss (i.e. absolute building loss/building value) in discrete classes. More information on the development of BN-FLEMO∆ can be found in Rafiezadeh Shahi et al. (2025). The zip folder contains the precomputed lookup table, where all possible combinations of predictor and response values are stored. The lookup table contains an ID for each unique combination of possible predictor and response (i.e., relative loss) values. The file name is coded as “2023-002_Rafiezadeh Shahi-et-al_lookup.csv”.
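The lookup idea can be illustrated with a toy table: each row stores one combination of predictor values together with the probability of one relative-loss class, and a query returns the multinomial distribution for that combination. All column names and values below are hypothetical; the real schema is defined in the CSV file itself:

```python
import pandas as pd

# Toy stand-in for the precomputed lookup table; the real column
# names and predictor set in the CSV may differ.
lookup = pd.DataFrame({
    "combination_id": [1, 1, 1, 2, 2, 2],
    "water_depth_class": ["low", "low", "low", "high", "high", "high"],
    "rloss_class": [0, 1, 2, 0, 1, 2],             # discrete relative-loss class
    "probability": [0.7, 0.2, 0.1, 0.2, 0.3, 0.5],
})

def loss_distribution(table, **predictors):
    """Return the multinomial distribution over relative-loss classes
    for one combination of predictor values."""
    mask = pd.Series(True, index=table.index)
    for column, value in predictors.items():
        mask &= table[column] == value
    return table.loc[mask, ["rloss_class", "probability"]]

dist = loss_distribution(lookup, water_depth_class="high")
```

Precomputing every combination trades storage for speed: no Bayesian Network inference is needed at query time, only a table lookup.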
This dataset comprises event peak flows representing extreme floods at 516 stations in Germany. The data generation process involves several key steps. Initially, observed rainfall events associated with 10 historical flood disasters from 1950 to 2021 undergo spatial shifts. These shifts combine three distances (20, 50, and 100 km) and eight directions (North, Northeast, East, Southeast, South, Southwest, West, Northwest), resulting in 24 counterfactual precipitation events. Including the factual (no shift) event, a total of 25 distinct shifting scenarios are considered. Subsequently, these shifted fields are used as atmospheric forcing for a mesoscale hydrological model (mHM) set up and calibrated for the whole of Germany. The model produces daily stream flows across its domain, from which the event peak flows are derived. This dataset is expected to provide a valuable resource for analysing and modelling the dynamics of extreme flood events in Germany.
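The 25 shifting scenarios described above can be enumerated in a few lines. Normalising the diagonal directions so that every shift has the nominal distance is an assumption about the dataset's geometry:

```python
import math

DISTANCES_KM = [20, 50, 100]
# Eight compass directions as (east, north) components.
DIRECTIONS = {
    "N": (0, 1), "NE": (1, 1), "E": (1, 0), "SE": (1, -1),
    "S": (0, -1), "SW": (-1, -1), "W": (-1, 0), "NW": (-1, 1),
}

def shift_scenarios():
    """Enumerate the factual event plus 3 distances x 8 directions
    = 24 counterfactual shifts, i.e. 25 scenarios in total."""
    scenarios = [("factual", 0.0, 0.0)]
    for dist in DISTANCES_KM:
        for name, (east, north) in DIRECTIONS.items():
            norm = math.hypot(east, north)  # normalise diagonals (assumption)
            scenarios.append((f"{name}_{dist}km",
                              dist * east / norm,
                              dist * north / norm))
    return scenarios

scenarios = shift_scenarios()
```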
The Climate Change Workflow is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences in close collaboration with Helmholtz-Zentrum Hereon, Climate Service Center Germany. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). The goal of the Climate Change Workflow is to support the analysis of climate-driven changes in flood-generating climate variables, such as precipitation or soil moisture, using regional climate model simulations from the Earth System Grid Federation (ESGF) data archive. It supports answering the geoscientific question: How does precipitation change over the course of the 21st century under different climate scenarios, compared to a 30-year reference period over a certain region? Extracting locally relevant data over a region of interest (ROI) requires climate expert knowledge and data-processing training to correctly process large ensembles of climate model simulations; the Climate Change Workflow tackles this problem. It supports scientists in defining regions of interest, customizing their ensembles from the climate model simulations available on the ESGF, and defining variables of interest and relevant time ranges. The Climate Change Workflow provides: (1) a weighted mask of the ROI; (2) weighted climate data of the ROI; (3) the time series evolution of the climate over the ROI for each ensemble member; (4) ensemble statistics of the projected change; and lastly, (5) an interactive visualization of the region's precipitation change projected by the ensemble of selected climate model simulations for different Representative Concentration Pathways (RCPs).
The visualization includes the temporal evolution of precipitation change over the course of the 21st century and statistical characteristics of the ensembles (e.g. median and various percentiles) for two selected 30-year time periods, the middle and the end of the 21st century. The added value of the Climate Change Workflow is threefold. First, it reduces the number of different software programs necessary to extract locally relevant data. Second, the intuitive generation of, and access to, the weighted mask allows for the further development of locally relevant climate indices. Third, by allowing access to the locally relevant data at different stages of the data processing chain, scientists can work with a vastly reduced data volume, allowing a greater number of climate model ensembles to be studied, which translates into greater scientific robustness. Thus, the Climate Change Workflow provides much easier access to an ensemble of high-resolution simulations of precipitation over a given ROI, presenting the region's projected precipitation change using standardized approaches and supporting the development of additional locally relevant climate indices.
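The core of steps (1) and (2), reducing a gridded field to a locally relevant value with a weighted ROI mask, amounts to an area-weighted mean. A minimal sketch, assuming the mask holds fractional weights between 0 and 1 per grid cell (the workflow's actual weighting scheme may be more elaborate):

```python
import numpy as np

def roi_weighted_mean(field, mask):
    """Area-weighted mean of a 2D climate field over a region of interest.

    `mask` holds fractional weights (0..1) per grid cell, e.g. the
    fraction of the cell lying inside the ROI. Simplified stand-in,
    not the workflow's implementation.
    """
    field = np.asarray(field, dtype=float)
    mask = np.asarray(mask, dtype=float)
    return float(np.nansum(field * mask) / np.sum(mask))

# Two cells fully inside the ROI, one half inside, one outside:
mean_precip = roi_weighted_mean([[2.0, 4.0], [6.0, 8.0]],
                                [[1.0, 1.0], [0.5, 0.0]])
```

Applying such a reduction per time step and per ensemble member yields exactly the kind of ROI time series (step 3) from which the ensemble statistics (step 4) are computed.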
As the negative impacts of hydrological extremes increase in large parts of the world, a better understanding of the drivers of change in risk and impacts is essential for effective flood and drought risk management and climate adaptation. However, there is a lack of comprehensive, empirical data about the processes, interactions and feedbacks in complex human-water systems leading to flood and drought impacts. To fill this gap, we present an IAHS Panta Rhei benchmark dataset containing socio-hydrological data on paired events, i.e. two floods or two droughts that occurred in the same area (Kreibich et al. 2017, 2019). The 45 paired events contained in the dataset occurred in 42 different study areas (for three study areas we have data on two paired events), which cover different socioeconomic and hydroclimatic contexts across all continents. The dataset is unique in covering both floods and droughts, in the number of cases assessed, and in the amount of qualitative and quantitative socio-hydrological data contained. References to the data sources are provided in 2023-001_Kreibich-et-al_Key_data_table.xlsx where possible. Based on templates, we collected detailed, review-style reports describing the event characteristics and processes in the case study areas, as well as various semi-quantitative data, categorised into management, hazard, exposure, vulnerability and impacts. Sources of the data were classified as follows: scientific study (peer-reviewed paper or PhD thesis); report (by governments, administrations, NGOs, research organisations, projects); own analysis by authors, based on a database (e.g. official statistics, monitoring data such as weather or discharge data); newspaper article; and expert judgement. The campaign to collect the information and data on paired events started at the EGU General Assembly in April 2019 in Vienna and was continued with talks promoting the paired event data collection at various conferences.
Communication with the Panta Rhei community and other flood and drought experts identified through snowballing techniques was important. Thus, data on paired events were provided by professionals with excellent local knowledge of the events and risk management practices.
The GFZ Potsdam HART (Hazard and Risk Team), in cooperation with the DFG research training group 2043 NatRiskChange at Potsdam University, enabled the acquisition of Airborne Laser Scanning (ALS) and high-resolution optical data, acquired between 22 September 2021 and 24 October 2021 by the Milan Geoservice company, Spremberg, Germany. The data acquisition took place in the Eifel regions of North Rhine-Westphalia (NRW) and Rhineland-Palatinate (RLP), which were hit by the 14 July 2021 precipitation event that led to widespread severe inundations and flash floods, claimed around 185 lives, and caused massive damage to settlements, river geometry and other geomorphic features. The high-resolution ALS and optical data acquisitions aimed at documenting and quantifying the extent of flood-related changes and destruction, and at their reappraisal before diffusion erases the traces. Thus, the generated data are valuable for forensic event analysis and future attempts at flood forecasting and warning for scientific and practical purposes.
Data used for simulating the July 2021 flood event along the river Ahr, Germany, together with the simulation results. The data cover the river reach from Altenahr to Sinzig (inflow to the Rhine). The data set contains:
Model input data:
- DEM with 10 m resolution (ASCII raster)
- roughness raster (ASCII raster)
- building raster (ASCII raster)
- boundary time series (CSV spreadsheet)
Model output data:
- maximum inundation depths of the flood forecast and the estimated real flood peak (ASCII raster)
- maximum effective flow velocities of the flood forecast and the estimated real flood peak (ASCII raster)
- maximum product of water depth and flow velocity of the flood forecast and the estimated real flood peak (ASCII raster)
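The raster layers in this data set use the Esri ASCII grid format, which can be read with a few lines of code. A minimal reader, assuming the standard six-line header and applying no NODATA masking:

```python
import numpy as np

def read_ascii_raster(path):
    """Read an Esri ASCII raster (the format of the DEM, roughness,
    building, depth and velocity grids) into a header dict and a
    2D NumPy array. Assumes the standard six-line header."""
    header = {}
    with open(path) as fh:
        for _ in range(6):
            key, value = fh.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(fh, ndmin=2)
    return header, data
```

The header carries the georeferencing (`xllcorner`, `yllcorner`, `cellsize`) needed to place the grid, and `nodata_value` marks cells outside the model domain.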
The River Plume Workflow is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences in close collaboration with Helmholtz-Zentrum Hereon. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). The focus of the River Plume Workflow is the impact of riverine flood events on the marine environment. At the end of a flood event chain, an unusual amount of nutrients and pollutants is washed into the North Sea, which can have consequences such as increased algae blooms. The workflow aims to enable users to detect a river plume in the North Sea and to determine its spatio-temporal extent. River plume candidates can be identified either manually in the visual interface or automatically through an anomaly detection algorithm based on Gaussian regression. In both cases, a combination of observational data, namely FerryBox transects and satellite data, and model data is used. Once a river plume candidate is found, a statistical analysis supplies additional detail on the anomaly and helps to compare the suspected river plume to the surrounding data. Simulated trajectories of particles starting on the FerryBox transect at the time of the original observation, modelled backwards and forwards in time, help to verify the origin of the river plume and allow users to follow the anomaly across the North Sea. An interactive map enables users to load additional observational data into the workflow, such as ocean colour satellite maps, and provides them with an overview of the flood impacts and the river plume's development on its way through the North Sea. In addition, the workflow offers the functionality to assemble satellite-based chlorophyll observations along model trajectories as a time series.
These time series allow scientists to understand processes inside the river plume and to determine the timescales on which these developments happen. For example, chlorophyll degradation rates in the Elbe river plume are currently being investigated using these time series. The workflow's added value lies in the ease with which users can combine observational FerryBox data with relevant model data and other datasets of their choice. Furthermore, the workflow allows users to visually explore the combined data and contains methods to find and highlight anomalies. The workflow's functionalities also enable users to map the spatio-temporal extent of the river plume and investigate the changes in productivity that occur in the plume. All in all, the River Plume Workflow simplifies the investigation and monitoring of flood events and their impacts in marine environments.
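The automatic anomaly detection can be sketched with a simple stand-in: fit a smooth along-track signal with Gaussian-kernel regression (Nadaraya-Watson), then flag points whose residual is unusually large. The workflow's actual Gaussian-regression model, bandwidth and threshold are not reproduced here; all values below are illustrative:

```python
import numpy as np

def flag_plume_candidates(positions, values, bandwidth=5.0, threshold=3.0):
    """Flag transect points that deviate strongly from a Gaussian-kernel
    regression fit of the along-track signal.

    Simplified stand-in for the workflow's Gaussian-regression anomaly
    detection; bandwidth and threshold are illustrative assumptions.
    """
    x = np.asarray(positions, dtype=float)
    y = np.asarray(values, dtype=float)
    # Gaussian weights between every pair of transect positions
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    smoothed = (w @ y) / w.sum(axis=1)
    residuals = y - smoothed
    z = (residuals - residuals.mean()) / residuals.std()
    return np.abs(z) > threshold

# 20 salinity readings along a FerryBox transect with one plume-like dip
salinity = np.full(20, 30.0)
salinity[10] = 25.0
flags = flag_plume_candidates(np.arange(20.0), salinity)
```

A freshwater plume typically shows up as exactly such a localized salinity dip against the smoother open-sea background.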
The Digital Earth Flood Event Explorer supports geoscientists and experts in analysing flood events along the process cascade of event generation, evolution and impact across atmospheric, terrestrial, and marine disciplines. It applies the concept of scientific workflows and the component-based Data Analytics Software Framework (DASF, Eggert and Dransch, 2021) to an exemplary showcase. It aims at answering the following geoscientific questions:
- How does precipitation change over the course of the 21st century under different climate scenarios over a certain region?
- What are the main hydro-meteorological controls of a specific flood event?
- What are useful indicators to assess socio-economic flood impacts?
- How do flood events impact the marine environment?
- What are the best monitoring sites for upcoming flood events?
The Flood Event Explorer provides a scientific workflow for each geoscientific question, offering enhanced analysis methods from statistics, machine learning, and visual data exploration that are implemented in different languages and software environments and that access data from a variety of distributed databases. The collaborating scientists come from different Helmholtz research centres and belong to different scientific fields such as hydrology, climate, marine, and environmental science, and computer and data science. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/).
The Socio-Economic Flood Impacts Workflow is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). The Socio-Economic Flood Impacts Workflow aims to support the identification of relevant controls and useful indicators for the assessment of flood impacts. It supports answering the question: What are useful indicators to assess socio-economic flood impacts? Floods impact individuals and communities and may have significant social, economic and environmental consequences. These impacts result from the interplay of hazard (the meteo-hydrological processes leading to high water levels and inundation of usually dry land), exposure (the elements affected by flooding, such as people, the built environment or infrastructure), and vulnerability (the susceptibility of exposed elements to be harmed by flooding). In view of the complex interactions of hazard and impact processes, a broad range of data from disparate sources needs to be compiled and analysed across the boundaries of the climate and atmosphere, catchment and river network, and socio-economic domains. The workflow approaches this problem and supports scientists in integrating observations, model outputs and other datasets for further analysis in the region of interest. The workflow provides functionalities to select the region of interest, access hazard-, exposure- and vulnerability-related data from different sources, identify flood periods as relevant time ranges, and calculate defined indices. The integrated input data set is further filtered for the relevant flood event periods in the region of interest to obtain a new, comprehensive flood data set.
This spatio-temporal dataset is analysed using data-science methods such as clustering, classification or correlation algorithms to explore and identify useful indicators for flood impacts. For instance, the importance of different factors, or the interrelationships among multiple variables in shaping flood impacts, can be explored. The added value of the Socio-Economic Flood Impacts Workflow is twofold. First, it integrates scattered data from disparate sources and makes them accessible for further analysis. As such, the effort to compile, harmonize and combine a broad range of spatio-temporal data is clearly reduced. Also, the integration of new datasets from additional sources becomes much more straightforward. Second, it enables a flexible analysis of multivariate data and, by reusing algorithms from other workflows, it fosters more efficient scientific work that can focus on data analysis instead of tedious data wrangling.
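As a minimal illustration of this indicator screening, a rank correlation between candidate indicators and an impact variable can be computed over the integrated event table. All column names and values below are hypothetical, not the workflow's schema:

```python
import pandas as pd

# Hypothetical integrated flood data set: one row per flood event period,
# with hazard, exposure and vulnerability indicators next to the impact.
events = pd.DataFrame({
    "peak_discharge": [120.0, 310.0, 95.0, 480.0, 210.0],   # hazard
    "exposed_buildings": [40, 150, 25, 300, 90],            # exposure
    "precaution_index": [0.8, 0.3, 0.9, 0.1, 0.5],          # vulnerability
    "reported_loss": [0.5, 4.2, 0.2, 9.8, 1.7],             # impact
})

# Spearman rank correlation of each candidate indicator with the impact
corr = events.corr(method="spearman")["reported_loss"].drop("reported_loss")
```

In this toy table the hazard and exposure indicators rank-correlate perfectly with the loss while the precaution index is perfectly anti-correlated; real data would of course be far noisier, which is where the workflow's clustering and classification methods come in.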
| Organisation | Count |
|---|---|
| Science | 19 |

| Type | Count |
|---|---|
| unknown | 19 |

| License | Count |
|---|---|
| Open | 19 |

| Language | Count |
|---|---|
| English | 19 |

| Resource type | Count |
|---|---|
| None | 19 |

| Topic | Count |
|---|---|
| Soil | 19 |
| Organisms and habitats | 19 |
| Air | 19 |
| Humans and environment | 19 |
| Water | 18 |
| Other | 19 |