
Data used for simulating the flood event in July 2021 along the river Ahr, Germany, and results of the simulation.

Data used for simulating the flood event in July 2021 along the river Ahr, Germany, and results of the simulation. The data cover the reach of the river from Altenahr to Sinzig (inflow to the Rhine). The data set contains:

Model input data:
- DEM with 10 m resolution (ASCII raster)
- roughness raster (ASCII raster)
- building raster (ASCII raster)
- boundary time series (CSV spreadsheet)

Model output data:
- maximum inundation depths of flood forecast and estimated real flood peak (ASCII raster)
- maximum effective flow velocities of flood forecast and estimated real flood peak (ASCII raster)
- maximum product of water depth and flow velocity of flood forecast and estimated real flood peak (ASCII raster)
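Assuming the ASCII raster layers follow the common Esri ASCII grid layout (a six-line header followed by rows of values) — the description does not state the exact variant — a minimal reader might look like this; the file name in the usage comment is hypothetical:

```python
import numpy as np

def read_ascii_grid(path):
    """Read an Esri-style ASCII grid: six header lines (ncols, nrows,
    xllcorner, yllcorner, cellsize, NODATA_value), then rows of values."""
    header = {}
    with open(path) as f:
        for _ in range(6):
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(f)
    # Replace NODATA cells with NaN so that statistics ignore them.
    data = np.where(data == header["nodata_value"], np.nan, data)
    return header, data

# Hypothetical file name -- actual names depend on the data set contents:
# header, dem = read_ascii_grid("dem_10m.asc")
# np.nanmax(dem) would then give the highest elevation in the modelled reach.
```

Cell coordinates can be reconstructed from `xllcorner`, `yllcorner` and `cellsize` in the returned header.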

The River Plume Workflow of the Flood Event Explorer: Detection and impact assessment of a river plume

The River Plume Workflow is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences in close collaboration with Helmholtz-Zentrum Hereon. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). The focus of the River Plume Workflow is the impact of riverine flood events on the marine environment. At the end of a flood event chain, an unusual amount of nutrients and pollutants is washed into the North Sea, which can have consequences such as increased algae blooms. The workflow aims to enable users to detect a river plume in the North Sea and to determine its spatio-temporal extent. River plume candidates can be identified either manually in the visual interface or automatically through an anomaly detection algorithm based on Gaussian regression. In both cases, a combination of observational data, namely FerryBox transects and satellite data, and model data is used. Once a river plume candidate is found, a statistical analysis supplies additional detail on the anomaly and helps to compare the suspected river plume to the surrounding data. Simulated trajectories of particles that start on the FerryBox transect at the time of the original observation and are modelled backwards and forwards in time help to verify the origin of the river plume and allow users to follow the anomaly across the North Sea. An interactive map enables users to load additional observational data into the workflow, such as ocean colour satellite maps, and provides them with an overview of the flood impacts and the river plume's development on its way through the North Sea. In addition, the workflow offers the functionality to assemble satellite-based chlorophyll observations along model trajectories as a time series.
These time series allow scientists to understand processes inside the river plume and to determine the timescales on which these developments happen. For example, chlorophyll degradation rates in the Elbe river plume are currently being investigated using these time series. The workflow's added value lies in the ease with which users can combine observational FerryBox data with relevant model data and other datasets of their choice. Furthermore, the workflow allows users to visually explore the combined data and contains methods to find and highlight anomalies. The workflow's functionalities also enable users to map the spatio-temporal extent of the river plume and to investigate the changes in productivity that occur in the plume. All in all, the River Plume Workflow simplifies the investigation and monitoring of flood events and their impacts in marine environments.

Event peak flow dataset for spatial counterfactual events, Germany

This dataset comprises event peak flows, representing extreme floods at 516 stations in Germany. The data generation process involves several key steps. Initially, observed rainfall events associated with 10 historical flood disasters from 1950 to 2021 undergo spatial shifts. These shifts combine three distances (20, 50, and 100 km) with eight directions (North, Northeast, East, Southeast, South, Southwest, West, Northwest), resulting in 24 counterfactual precipitation events. Including the factual (no shift) event, a total of 25 distinct shifting events are considered. Subsequently, these shifted fields are used as atmospheric forcing for a mesoscale hydrological model (mHM) set up and calibrated for all of Germany. The model produces daily streamflows across its domain, from which the event peak flows are derived. This dataset is expected to provide a valuable resource for analysing and modelling the dynamics of extreme flood events in Germany.
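The 25 shifting scenarios follow directly from the combinatorics above (1 factual + 3 distances × 8 directions). As an illustration only — the dataset's own shifting code is not included here — the scenario labels and shift vectors could be enumerated as:

```python
import math

# Compass directions as (east, north) unit-vector components.
DIRECTIONS = {
    "N": (0, 1), "NE": (1, 1), "E": (1, 0), "SE": (1, -1),
    "S": (0, -1), "SW": (-1, -1), "W": (-1, 0), "NW": (-1, 1),
}

def shift_scenarios(distances_km=(20.0, 50.0, 100.0)):
    """Enumerate the factual event plus all 3 x 8 = 24 counterfactual
    shifts as (label, east_shift_km, north_shift_km) tuples."""
    scenarios = [("factual", 0.0, 0.0)]
    for dist in distances_km:
        for name, (east, north) in DIRECTIONS.items():
            norm = math.hypot(east, north)   # normalise the diagonals
            scenarios.append((f"{name}_{int(dist)}km",
                              dist * east / norm, dist * north / norm))
    return scenarios

scenarios = shift_scenarios()   # 25 scenarios in total
```

Each shift vector would then be applied to the gridded precipitation field before forcing the hydrological model.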

The Digital Earth Flood Event Explorer: A showcase for data analysis and exploration with scientific workflows

The Digital Earth Flood Event Explorer supports geoscientists and experts in analysing flood events along the process cascade of event generation, evolution and impact across atmospheric, terrestrial, and marine disciplines. It applies the concept of scientific workflows and the component-based Data Analytics Software Framework (DASF, Eggert and Dransch, 2021) to an exemplary showcase. It aims at answering the following geoscientific questions:
- How does precipitation change over the course of the 21st century under different climate scenarios over a certain region?
- What are the main hydro-meteorological controls of a specific flood event?
- What are useful indicators to assess socio-economic flood impacts?
- How do flood events impact the marine environment?
- What are the best monitoring sites for upcoming flood events?
The Flood Event Explorer provides a scientific workflow for each geoscientific question, offering enhanced analysis methods from statistics, machine learning, and visual data exploration that are implemented in different languages and software environments and that access data from a variety of distributed databases. The collaborating scientists are from different Helmholtz research centers and belong to different scientific fields such as hydrology, climate, marine, and environmental science, and computer and data science. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/).

The Socio-Economic Flood Impacts Workflow of the Flood Event Explorer: Identification of relevant controls and useful indicators for the assessment of flood impacts

The Socio-Economic Flood Impacts Workflow is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). The Socio-Economic Flood Impacts Workflow aims to support the identification of relevant controls and useful indicators for the assessment of flood impacts. It supports answering the question: What are useful indicators to assess socio-economic flood impacts? Floods affect individuals and communities and may have significant social, economic and environmental consequences. These impacts result from the interplay of hazard (the meteo-hydrological processes leading to high water levels and inundation of usually dry land), exposure (the elements affected by flooding, such as people, the built environment or infrastructure), and vulnerability (the susceptibility of exposed elements to be harmed by flooding). In view of the complex interactions of hazard and impact processes, a broad range of data from disparate sources needs to be compiled and analysed across the boundaries of the climate and atmosphere, catchment and river network, and socio-economic domains. The workflow approaches this problem and supports scientists in integrating observations, model outputs and other datasets for further analysis in the region of interest. The workflow provides functionalities to select the region of interest, to access hazard-, exposure- and vulnerability-related data from different sources, to identify flood periods as relevant time ranges, and to calculate defined indices. The integrated input data set is further filtered for the relevant flood event periods in the region of interest to obtain a new comprehensive flood data set.
This spatio-temporal dataset is analysed using data-science methods such as clustering, classification or correlation algorithms to explore and identify useful indicators for flood impacts. For instance, the importance of different factors, or the interrelationships among multiple variables that shape flood impacts, can be explored. The added value of the Socio-Economic Flood Impacts Workflow is twofold. First, it integrates scattered data from disparate sources and makes them accessible for further analysis. As such, the effort to compile, harmonize and combine a broad range of spatio-temporal data is clearly reduced, and the integration of new datasets from additional sources becomes much more straightforward. Second, it enables a flexible analysis of multivariate data, and by reusing algorithms from other workflows it fosters more efficient scientific work that can focus on data analysis instead of tedious data wrangling.
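One of the simplest correlation-based analyses mentioned above — ranking candidate indicators by how strongly they co-vary with observed impacts — can be sketched as follows. The variable names and the synthetic event table are illustrative assumptions, not part of the workflow's actual data model:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation, computed as the Pearson correlation
    of the ranks (assumes no ties, fine for continuous data)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

def rank_indicators(candidates, impact):
    """Rank candidate indicator series by absolute rank correlation with
    the observed impact (robust to monotone nonlinear relationships)."""
    scores = {name: abs(spearman(series, impact))
              for name, series in candidates.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical per-event table for a region of interest: one row per flood.
rng = np.random.default_rng(1)
impact = rng.gamma(2.0, 1.0, 60)                      # e.g. losses per event
candidates = {
    "peak_discharge": impact ** 0.8 + 0.1 * rng.standard_normal(60),
    "antecedent_rain": rng.standard_normal(60),       # unrelated by design
}
ranking = rank_indicators(candidates, impact)
```

In the workflow the candidate series would come from the integrated hazard, exposure and vulnerability data rather than synthetic draws.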

Spatial counterfactuals of the July 2021 flood in the Ahr valley, Germany

This dataset comprises gridded precipitation fields, simulated hourly discharge values, and simulated inundation areas and depths in the Ahr catchment in Germany for the reference scenario of the July 2021 flood and 25 spatial counterfactuals. The precipitation dataset contains the observed gridded E-OBS precipitation field and 25 counterfactuals shifted by one cell. Subsequently, the reference scenario and the spatial counterfactuals are used as atmospheric forcing for the mesoscale hydrological model mHM, set up and calibrated for the Ahr catchment, Germany. The model simulates hourly discharge series at seven gauge locations (Müsch, Kirmutscheid, Niederadenau, Denn, Kreuzberg, Altenahr, Bad Bodendorf), from which the event peak flows and flood event volumes can be derived. These discharge data are used as boundary conditions for the RIM2D hydrodynamic inundation model, which simulates inundation areas and maximum inundation depths along the Ahr valley between Müsch and Sinzig for the reference scenario and the spatial counterfactuals.

Seismic data from the 2016-02-22 flood event and from an active seismic survey conducted around the Eshtemoa River, Israel

Bedload transport is a key process in fluvial morphodynamics and hydraulic engineering, but it is notoriously difficult to measure. The recent advent of stream-side seismic monitoring provides an alternative to in-stream monitoring techniques, which are often costly, staff-intensive, and cannot be deployed during large floods. Seismic monitoring is a surrogate method requiring several steps to convert seismic data into bedload data. State-of-the-art conversion approaches exploit physical models that predict the seismic signal generated by bedload transport. Here, we carried out an active seismic survey (2017-11) and used seismic data from a flood event (2016-02-22) on the Nahal Eshtemoa to constrain a seismic bedload model. We conducted the active seismic survey to determine the local seismic ground properties, i.e., the Green's function. We also used water depth and bedload grain size distribution to constrain the seismic bedload model, and we were able to compare the bedload flux obtained from the seismic data using the model with high-quality independent bedload measurements from slot samplers on the site. The complementary non-seismic data are published in a separate data publication (Lagarde et al., 2020).

Panta Rhei benchmark dataset: socio-hydrological data of paired events of floods and droughts

As the negative impacts of hydrological extremes increase in large parts of the world, a better understanding of the drivers of change in risk and impacts is essential for effective flood and drought risk management and climate adaptation. However, there is a lack of comprehensive, empirical data about the processes, interactions and feedbacks in complex human-water systems that lead to flood and drought impacts. To fill this gap, we present an IAHS Panta Rhei benchmark dataset containing socio-hydrological data of paired events, i.e. two floods or two droughts that occurred in the same area (Kreibich et al. 2017, 2019). The 45 paired events contained in the dataset occurred in 42 different study areas (in three study areas we have data on two paired events), which cover different socioeconomic and hydroclimatic contexts across all continents. The dataset is unique in covering both floods and droughts, in the number of cases assessed, and in the amount of qualitative and quantitative socio-hydrological data contained. References to the data sources are provided in 2022-002_Kreibich-et-al_Key_data_table.xlsx where possible. Based on templates, we collected detailed, review-style reports describing the event characteristics and processes in the case study areas, as well as various semi-quantitative data, categorised into management, hazard, exposure, vulnerability and impacts. Sources of the data were classified as follows: scientific study (peer-reviewed paper or PhD thesis); report (by governments, administrations, NGOs, research organisations or projects); own analysis by the authors, based on a database (e.g. official statistics, or monitoring data such as weather or discharge data); newspaper article; and expert judgement. The campaign to collect the information and data on paired events started at the EGU General Assembly in April 2019 in Vienna and was continued with talks promoting the paired event data collection at various conferences.
Communication with the Panta Rhei community and other flood and drought experts identified through snowballing techniques was important. Thus, data on paired events were provided by professionals with excellent local knowledge of the events and risk management practices.

Panta Rhei benchmark dataset: socio-hydrological data of paired events of floods and droughts (version 2)

As the negative impacts of hydrological extremes increase in large parts of the world, a better understanding of the drivers of change in risk and impacts is essential for effective flood and drought risk management and climate adaptation. However, there is a lack of comprehensive, empirical data about the processes, interactions and feedbacks in complex human-water systems that lead to flood and drought impacts. To fill this gap, we present an IAHS Panta Rhei benchmark dataset containing socio-hydrological data of paired events, i.e. two floods or two droughts that occurred in the same area (Kreibich et al. 2017, 2019). The 45 paired events contained in the dataset occurred in 42 different study areas (in three study areas we have data on two paired events), which cover different socioeconomic and hydroclimatic contexts across all continents. The dataset is unique in covering both floods and droughts, in the number of cases assessed, and in the amount of qualitative and quantitative socio-hydrological data contained. References to the data sources are provided in 2023-001_Kreibich-et-al_Key_data_table.xlsx where possible. Based on templates, we collected detailed, review-style reports describing the event characteristics and processes in the case study areas, as well as various semi-quantitative data, categorised into management, hazard, exposure, vulnerability and impacts. Sources of the data were classified as follows: scientific study (peer-reviewed paper or PhD thesis); report (by governments, administrations, NGOs, research organisations or projects); own analysis by the authors, based on a database (e.g. official statistics, or monitoring data such as weather or discharge data); newspaper article; and expert judgement. The campaign to collect the information and data on paired events started at the EGU General Assembly in April 2019 in Vienna and was continued with talks promoting the paired event data collection at various conferences.
Communication with the Panta Rhei community and other flood and drought experts identified through snowballing techniques was important. Thus, data on paired events were provided by professionals with excellent local knowledge of the events and risk management practices.

The Smart Monitoring Workflow (Tocap) of the Flood Event Explorer: Determining the most suitable time and location for event-driven, ad-hoc monitoring

The Smart Monitoring Workflow (Tocap) is part of the Flood Event Explorer (FEE, Eggert et al., 2022), developed at the GFZ German Research Centre for Geosciences in close collaboration with the Helmholtz Centre for Environmental Research (UFZ) in Leipzig. It is funded by the Initiative and Networking Fund of the Helmholtz Association through the Digital Earth project (https://www.digitalearth-hgf.de/). A deeper understanding of the Earth system as a whole and of its interacting sub-systems depends not only on accurate mathematical approximations of the physical processes but also on the availability of environmental data across temporal and spatial scales. Even though advanced numerical simulations and satellite-based remote sensing, in conjunction with sophisticated algorithms such as machine learning tools, can provide 4D environmental datasets, local and mesoscale measurements continue to be the backbone of many disciplines such as hydrology. Considering the limitations of human and technical resources, monitoring strategies for these types of measurements should be well designed to increase the information gained. One helpful set of tools for addressing these tasks is data exploration frameworks that provide qualified data from different sources and tailor the available computational and visual methods to explore and analyse multi-parameter datasets. In this context, we developed the Smart Monitoring Workflow to determine the most suitable time and location for event-driven, ad-hoc monitoring in hydrology, using soil moisture measurements as our target variable. The Smart Monitoring Workflow consists of three main steps. The first is the identification of the region of interest, either via user selection or via a recommendation based on spatial environmental parameters provided by the user. Statistical filters and different color schemes can be applied to highlight different regions.
The second step is accessing time-dependent environmental parameters (e.g., rainfall and soil moisture estimates of the recent past, weather predictions from numerical weather models, and swath forecasts from Earth observation satellites) for the region of interest and visualizing the results. Lastly, a detailed assessment of the region of interest is conducted by applying filter and weight functions in combination with multiple linear regressions on selected input parameters. Depending on the measurement objective (e.g., highest/lowest values, highest/lowest change), the most suitable areas for monitoring are subsequently highlighted visually. In combination with the provided background map, an efficient route for monitoring can be planned directly in the exploration environment. The added value of the Smart Monitoring Workflow is manifold. The workflow gives users a set of tools to visualize and process their data on a background map and in combination with data from public environmental datasets. For raster data from public databases, tailor-made routines are provided to access the data within the spatio-temporal limits required by the user. Aiming to facilitate the design of terrestrial monitoring campaigns, the platform- and device-independent approach of the workflow gives users the flexibility to design a campaign at the desktop computer first and to refine it later in the field using mobile devices. In this context, the ability of the workflow to plot time series of forecast data for the region of interest empowers users to react quickly to changing conditions, e.g., thunderstorm showers, by adapting the monitoring strategy if necessary. Finally, the integrated routing algorithm assists in calculating the duration of a planned campaign as well as the optimal driving route between often scattered monitoring locations.
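The filter-and-weight assessment step can be sketched in miniature. The workflow's actual functions are not published in this description, so the following is a minimal illustration under simple assumptions: each parameter grid is min-max scaled, optionally inverted for a "lowest value" objective, and combined with user-chosen weights; the cell with the highest score is the suggested monitoring area. The grids and weights below are toy values:

```python
import numpy as np

def suitability(params, weights, objective="highest"):
    """Combine environmental parameter grids into one weighted suitability
    score in [0, 1]: min-max scale each grid, invert it for a 'lowest'
    objective, and average with the given weights."""
    total = sum(weights.values())
    score = np.zeros_like(next(iter(params.values())), dtype=float)
    for name, grid in params.items():
        g = np.asarray(grid, dtype=float)
        g = (g - np.nanmin(g)) / (np.nanmax(g) - np.nanmin(g))
        if objective == "lowest":
            g = 1.0 - g
        score += (weights[name] / total) * g
    return score

# Hypothetical toy grids for a 4 x 4 tile of the region of interest.
rain = np.arange(16, dtype=float).reshape(4, 4)       # forecast rainfall (mm)
moisture = np.full((4, 4), 0.3)                       # current soil moisture
moisture[0, 0] = 0.1
score = suitability({"rain": rain, "moisture": moisture},
                    {"rain": 2.0, "moisture": 1.0})
best_cell = np.unravel_index(np.argmax(score), score.shape)
```

The resulting score grid is what would be rendered on the background map so that a monitoring route can be planned through the highest-scoring cells.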
