Abstract

With 120 million m³ of water lost globally every year, leakage in drinking water distribution networks (WDNs) remains a major challenge for water utilities and triggers a multitude of cascading effects, including operational disruptions, environmental hazards, property damage, and sanitary issues. Over the last decades, the scientific community has placed a growing focus on leakage detection, leading to the development of numerous computer-based detection solutions. Despite these developments, the practical approaches employed by water utilities in their leak-management routines still rely primarily on in-situ acoustic devices combined with periodic water audits, which fall short of continuous system monitoring and leave considerable potential for further leakage reduction untapped. Consequently, the wider dissemination and implementation of automatic leakage detection technology in the near future will be paramount to containing water losses and fostering robust, climate-resilient water supply systems.

Currently available computer-based technologies for leakage detection can be categorized as either data-driven or model-based, depending primarily on whether they require a hydraulic model. Algorithms based on hydraulic models can accurately detect both the occurrence and the location of leakages, yet they are highly sensitive to model inputs and therefore require careful calibration. Data-driven models, on the other hand, operate on the premise of anomaly detection and merely require anomaly-free (i.e., leak-free) data for their calibration. However, they cannot compete with the localisation accuracy of model-based leakage detection, as they do not incorporate geophysical information about the underlying WDN. Altogether, while both families of methods yield great improvements over in-situ technology, their input requirements still hamper practical implementation. Because model-based and data-driven approaches impose different requirements, their combination may ultimately mitigate these high technical demands and enhance practical applicability, thereby facilitating a more effective, robust, and widespread implementation of leakage detection technology in water distribution networks.
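To make the anomaly-detection premise concrete, the following minimal Python sketch fits a linear relation between two pressure sensors on leak-free data and flags new observations whose residuals exceed a fixed multiple of the training standard deviation. It is a toy illustration under assumed synthetic signals and an assumed 3-sigma threshold, not the LILA algorithm itself:

```python
# Minimal sketch of data-driven leak detection trained on leak-free data only.
# Sensor signals, coefficients, and the 3-sigma threshold are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic leak-free training data: sensor B tracks sensor A linearly.
t = np.linspace(0, 20, 500)
p_a = 50 + 2 * np.sin(t) + rng.normal(0, 0.1, t.size)   # pressure at A [m]
p_b = 0.9 * p_a - 3 + rng.normal(0, 0.1, t.size)        # pressure at B [m]

model = LinearRegression().fit(p_a.reshape(-1, 1), p_b)
residuals = p_b - model.predict(p_a.reshape(-1, 1))
threshold = 3 * residuals.std()                          # alarm limit

# New observations: a leak depresses B relative to A by 0.5 m.
p_a_new = 50 + rng.normal(0, 0.1, 100)
p_b_new = 0.9 * p_a_new - 3 - 0.5 + rng.normal(0, 0.1, 100)
alarms = np.abs(p_b_new - model.predict(p_a_new.reshape(-1, 1))) > threshold
print(f"alarms on {alarms.mean():.0%} of post-leak samples")
```

Because calibration needs only leak-free operating data, no hydraulic model of the network is required; the price, as noted above, is that such a residual test signals that a leak exists somewhere between the sensors without pinpointing its location.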

In this work, we explore the trade-off between model-based and data-driven leakage detection on the basis of two award-winning, state-of-the-art leakage detection algorithms developed in our consortium's previous research: the data-driven LILA and the model-based Dual Model. By integrating both algorithms into a unified application, we aim to lower technical barriers and bolster detection robustness. To validate our approach, we quantitatively evaluate its performance, in terms of false alarms, time-to-detection, and localisation accuracy, against the individual algorithms, while considering different levels of confidence in and availability of the input data, i.e., the hydraulic model, water demand estimates, and pressure data.
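As a hedged illustration of how two of the three evaluation criteria could be quantified, the helper below counts pre-leak false alarms and the delay from leak onset to first alarm in a boolean alarm series (localisation accuracy would additionally require network coordinates, omitted here). The function name and interface are our own assumptions, not the paper's API:

```python
import numpy as np

def detection_metrics(alarms: np.ndarray, leak_start: int):
    """Count pre-leak false alarms and steps from leak onset to first alarm.

    alarms: boolean array with one entry per time step.
    leak_start: index of the (known) leak onset in a benchmark scenario.
    Returns (false_alarms, time_to_detection); detection is None if missed.
    """
    false_alarms = int(alarms[:leak_start].sum())
    hits = np.flatnonzero(alarms[leak_start:])
    time_to_detection = int(hits[0]) if hits.size else None
    return false_alarms, time_to_detection

# Example: no false alarms before the leak at step 5, detected 2 steps later.
print(detection_metrics(np.array([0, 0, 0, 0, 0, 0, 0, 1, 1], bool), 5))  # (0, 2)
```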

Remy, C. (2024): CO2e-Emissionsfaktoren bereitstellen zur Unternehmensbilanz - Scope 3.

DWA WebSeminar "Zero Emission? Beiträge der Abwasserbeseitigung zur Reduzierung der CO2e-Emissionen", 16.-17.04.2024

Remy, C. (2024): Treibhausgasbilanz der Produktion und Regeneration von Aktivkohle.

DWA Expertengespräch "Aktivkohle aus Biomasse für eine nachhaltige Abwasserreinigung", 21.-22.03.2024, Kassel, Germany

Abstract

Short-term fecal pollution events are a major challenge for managing microbial safety at recreational waters. The long turnaround times of current laboratory methods for analyzing fecal indicator bacteria (FIB) delay water quality assessments. Data-driven models have been shown to be valuable approaches for enabling fast water quality assessments. However, a major barrier to the wider use of such models is the prevalent data scarcity at existing bathing waters, which calls into question the representativeness, and thus the usefulness, of the available datasets for model training. The present study explores the ability of five data-driven modelling approaches to predict short-term fecal pollution episodes at recreational bathing locations under data-scarce conditions and with imbalanced datasets. The study explicitly focuses on the potential benefits of an innovative modelling and risk-based assessment approach, based on state/cluster-based Bayesian updating of FIB distributions in relation to different hydrological states. The models are benchmarked against commonly applied supervised learning approaches, namely linear regression and random forests, as well as against a zero-model that closely resembles the current way of classifying bathing water quality in the European Union. For model-based clustering, we apply a non-parametric Bayesian approach based on a Dirichlet Process Mixture Model. The study tests and demonstrates the proposed approaches at three river bathing locations in Germany known to be influenced by short-term pollution events. At each river, two modelling experiments (“longest dry period” and “sequential model training”) are performed to explore how the different modelling approaches react and adapt to scarce and uninformative training data, i.e., datasets that do not include pollution-event information in the form of elevated FIB concentrations. We demonstrate that it is especially the proposed Bayesian approaches that are able to raise correct warnings in such situations (> 90 % true positive rate). The zero-model and the random forest are shown to be unable to predict contamination episodes if pollution episodes are not present in the training data. Our research shows that the investigated Bayesian approaches reduce the risk of missed pollution events, thereby improving bathing water safety management. Additionally, they provide a transparent solution for setting minimum data quality requirements under various conditions. The proposed approaches open the way for developing data-driven models for bathing water quality prediction in the face of the reality that data scarcity is a common problem at existing and prospective bathing waters.
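To sketch the clustering step, the snippet below uses scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior as a truncated stand-in for a Dirichlet Process Mixture Model: hydrological features are grouped into states, and a lognormal FIB distribution fitted per state yields an exceedance probability. The synthetic data, the two-regime structure, and the use of the 900 MPN/100 ml threshold (the EU "sufficient" limit for E. coli at inland waters, applied here only for illustration) are our assumptions; the study's actual model is more elaborate:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# Synthetic hydrological features [log10 discharge, 24 h rainfall], two regimes.
dry = rng.normal([1.0, 0.0], [0.2, 0.1], size=(80, 2))
wet = rng.normal([2.0, 1.5], [0.3, 0.4], size=(20, 2))
X = np.vstack([dry, wet])

# Truncated Dirichlet Process Mixture Model: surplus components get ~zero weight.
dpmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
states = dpmm.predict(X)

# Hypothetical log10 E. coli concentrations [MPN/100 ml], higher in the wet regime.
log_fib = np.concatenate([rng.normal(2.0, 0.4, 80), rng.normal(3.4, 0.5, 20)])

# Per-state lognormal FIB model and probability of exceeding 900 MPN/100 ml.
for s in np.unique(states):
    members = states == s
    if members.sum() < 5:        # skip states with too few samples to fit
        continue
    mu, sd = log_fib[members].mean(), log_fib[members].std()
    p_exceed = 1 - norm.cdf(np.log10(900), mu, sd)
    print(f"state {s}: P(E. coli > 900/100 ml) = {p_exceed:.2f}")
```

The appeal of this construction for data-scarce sites is that a new FIB sample only updates the distribution of the hydrological state it belongs to, so a wet-weather state can trigger warnings even when the training record contains few or no elevated concentrations.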

Abstract

This deliverable summarises progress at month 18 of the AD4GD project on three pilot studies on air quality, water, and biodiversity, and identifies the key next steps for all partners to support their implementation. The pilot studies are designed to demonstrate the feasibility of re-using, developing, extending, and integrating a range of tools, semantics, and standards to facilitate data-driven decision-making on Green Deal priority topics. The progress described includes:

  • engagement with stakeholders;

  • requirements gathering;

  • identification of existing re-usable components, data and services which can support the pilots and, more broadly, the Green Deal Data Space;

  • identification of gaps, and of components required to fill those gaps;

  • progress on development and integration of the identified components.

The purpose of Deliverable 6.1 is to review the context and lessons learnt in the first 6 months of the pilot work package, and to identify and plan priority actions for the next 18 months to ensure robust integration of accessible, re-usable tools and workflows by the end of the project. Where deliverables that document underpinning technologies and services already exist within the project, these are signposted. Evaluation of performance and scaling potential is beyond the scope of the current deliverable and will be addressed in its second iteration (D6.2). The current deliverable focuses primarily on the integration of existing and bespoke tools to support the workflows necessary to consume, use, and produce data and metadata for the three identified pilot case studies.

We describe the human-centred co-design approach employed by FIT to elicit high-level requirements for interfaces and user experience in the Green Deal Data Space, both throughout the project and in a dedicated workshop in September 2023. This work has required close collaboration with sister projects and existing GEO initiatives to ensure efficiency and interoperability.

For each pilot, we describe the initial rationale, the indicators to be computed, and the stakeholders, before delineating the relative contribution (and potential future contribution) of EO, citizen science, socio-economic, and IoT data. Next, we present the value proposition and design for an end-user tool (to be developed by FIT) which will allow GDDS users to easily access the application or workflow, with a high-level view of the underlying data and processing services. Finally, for each pilot study, we describe the technical components identified as necessary to support such interfaces from end to end, including 12 bespoke tools and components being developed by project partners to ease the integration of existing solutions.

Progress on these 12 technical components is explained, including whether each is being re-used, extended, or specifically developed within the tasks and work packages of the project. In each case, URLs are given for supporting demonstrations, instances, or code repositories. We have aligned their development and iteratively integrated them at two face-to-face project hackathons in October 2023 and February 2024. We then revisit each pilot study to assess the progress of integration and development, and identify priorities for the next 3, 6, 9, and 12 months, aiming towards an integration that can be documented and evaluated within the final 6 months of the project.
