Abstract

The trophic index is widely used to monitor the primary production of lakes. In Brandenburg, Germany, lakes are sampled several times within a season, once every three years, between April and October. The trophic index is then calculated from phosphorus concentration, turbidity and chlorophyll-a content. This is usually done only for lakes monitored under the Water Framework Directive (area > 50 ha). The low temporal resolution, combined with natural year-to-year variation, makes trend analysis of trophic state very difficult, and a high proportion of lakes is excluded from this monitoring.
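To make the calculation concrete, the sketch below shows one plausible way an index of this kind can be composed from the three parameters: each measurement is mapped to a log-linear sub-index on the trophic scale and the sub-indices are averaged. The coefficients (`a`, `b`) and the example values are hypothetical placeholders, not the official LAWA parameterisation used in Brandenburg.

```python
import math

def sub_index(value: float, a: float, b: float) -> float:
    """Map a measured parameter to a trophic sub-index (log-linear)."""
    return a + b * math.log10(value)

def trophic_index(total_p_ugl: float, secchi_m: float, chla_ugl: float) -> float:
    """Average of three sub-indices; coefficients are illustrative only."""
    idx_p = sub_index(total_p_ugl, a=0.0, b=1.5)     # total phosphorus [ug/L]
    idx_t = sub_index(1.0 / secchi_m, a=2.0, b=1.2)  # turbidity via Secchi depth [m]
    idx_c = sub_index(chla_ugl, a=1.0, b=1.1)        # chlorophyll-a [ug/L]
    return (idx_p + idx_t + idx_c) / 3.0

# Example: a moderately nutrient-rich lake
ti = trophic_index(total_p_ugl=40.0, secchi_m=2.0, chla_ugl=12.0)
```

Higher phosphorus and chlorophyll-a and lower Secchi depth all push the index upward, which is the behaviour the monitoring scheme relies on.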
Satellite images can be used to obtain information on chlorophyll-a and turbidity. Phosphorus, as a nutrient for algae, also has an indirect effect on water color. There are already many indices based on the Copernicus Sentinel-2 program, such as the Normalized Difference Chlorophyll Index, which can be used for real-time water monitoring. In addition, annual data are essential for lake management to identify long-term trends. The trophic index is a widely used and easily interpreted indicator in this regard.
The AD4GD project explored i) which bands of Sentinel-2 images are best suited for estimating trophic state, ii) how the data can be temporally aggregated within a season, and iii) whether one pixel within a lake is sufficient to reliably describe its trophic state. The latter in particular was necessary to apply the method to small lakes for which regular monitoring is not available.
The developed Normalized Difference Trophic Index (NDTI), aggregated over the months of April to October, best represented the trophic index based on measured values. It was developed and validated using 294 lakes in Brandenburg with trophic data between 2018 and 2022 and is defined for a satellite image as
NDTI_image = (B5 - B2) / (B5 + B2)
Band 5 measures red-edge reflectance at 705 nm, band 2 the reflectance of blue light at 490 nm. In oligotrophic lakes, band 2 reflectance usually dominates and the index is below zero. The trophic index based on in-situ measurements is best calculated from monthly values. Similarly, NDTI_image is first averaged monthly and then seasonally (April to October in Germany).
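The two-step aggregation can be sketched as follows. The dates and reflectance values are invented for illustration, and extraction of the band values from actual Sentinel-2 scenes (cloud masking, pixel selection) is assumed to have happened upstream.

```python
import pandas as pd

def ndti_image(b5, b2):
    """Per-image NDTI from Sentinel-2 reflectances.
    B5: red edge (~705 nm), B2: blue (~490 nm)."""
    return (b5 - b2) / (b5 + b2)

# Hypothetical per-scene reflectances for one lake pixel across a season
scenes = pd.DataFrame({
    "date": pd.to_datetime(["2021-04-03", "2021-04-18", "2021-05-08",
                            "2021-06-12", "2021-07-02", "2021-08-21",
                            "2021-09-10", "2021-10-05"]),
    "b2": [0.030, 0.028, 0.025, 0.020, 0.018, 0.019, 0.022, 0.027],
    "b5": [0.020, 0.022, 0.026, 0.030, 0.034, 0.032, 0.028, 0.024],
})
scenes["ndti"] = ndti_image(scenes["b5"], scenes["b2"])

# Average per month first, then over the season (April-October),
# mirroring the in-situ trophic index procedure.
monthly = scenes.set_index("date")["ndti"].resample("MS").mean().dropna()
ndti_season = monthly.loc["2021-04":"2021-10"].mean()
```

Averaging monthly before seasonally prevents months with many cloud-free scenes from dominating the seasonal value.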
The resulting NDTI_season was found to be highly correlated with the in-situ data for the available years (Pearson correlation coefficient between 0.83 and 0.92). It thus allows a comparison of the trophic state of lakes in the Brandenburg region. The data are available at annual resolution, three times more frequently than the conventional analysis. This allows a much more reliable trend analysis, which can be used to monitor the success of water quality improvement measures or to identify water quality problems more quickly. Small lakes can be included in the monitoring without much effort.
A first sensitivity analysis has shown that the classification of eutrophic water bodies is more reliable than that of oligotrophic water bodies. Further factors influencing the accuracy of the method will be investigated in a subsequent sensitivity analysis.

Abstract

The deliverable D3.2 “Scalability and edge computing optimization” presents updates on the heterogeneous IoT data sources encountered in the three pilots of the AD4GD project. The first pilot concerns water quantity and quality in the lakes located in Berlin, Germany. The second pilot studies biodiversity in the region of Catalonia. Finally, the third pilot is dedicated to air quality. All pilots use IoT data, and different components and building blocks were developed during the AD4GD project. The updates to these components are described in this deliverable. Furthermore, the SIMPL middleware initiative is also discussed in D3.2. Edge computing is an important part of the Internet of Things in the context of the AD4GD project; this topic is presented in a dedicated chapter, where different Key Performance Indicators (KPIs) related to edge computing are specified. Finally, some actions to improve these KPIs are proposed, followed by several recommendations.

Abstract

This document presents the final form of the work done in WP 4 and previously partially presented in D4.1 and D4.2: the Dataspace architecture, the data catalogue and metadata system, and the data trustworthiness framework. As in D4.2, the text follows the components’ architecture defined by D6.1, focussing primarily on new work done since D4.2:

Component 2 – Evaluation of Connector Solutions and Deployments
Component 9 – Data catalogue and Metadata
Component 11 – Data Trustworthiness Framework

In terms of tasks, this deliverable predominantly discusses work in WP4 “Satellite and Green Deal Data Space Integration”, including tasks 4.2 “Green Deal Data Space Implementation”, 4.3 “Green Data Space integration with third-party services” and 4.4 “Ground truthing and data trustworthiness framework”. It also has overlap with and incorporates work from the three pilot projects within WP6 and the machine learning work being done in WP5.

Abstract

This deliverable builds upon D5.1 and outlines progress in applying Artificial Intelligence and High-Performance Computing within the AD4GD project. It details the use of AI models in pilot studies, such as water level prediction in Berlin lakes and connectivity mapping in Catalonia, highlighting the AI models’ ability to process complex environmental data efficiently. The document also presents the development of user-friendly interfaces that make these advanced tools accessible to non-expert stakeholders, promoting informed decision-making. Additionally, it reports on the integration of HPC resources to support AI model training and execution, enhancing performance and scalability. The deliverable concludes with reflections on the benefits, limitations, and future directions of these technologies in AD4GD pilots.

Abstract

This document is the Deliverable D6.2 for the AD4GD project. It presents the final results achieved in the context of Tasks T6.1, T6.2, T6.3, and T6.4. The document is a follow-up version of the Deliverable 6.1 “Pilot Technical Implementation Planning, Implementation and Assessment” that reported on pilot establishment, design of workflow and requirements analysis.

The purpose of Deliverable D6.2 is to review and report on the integration of accessible, re-usable tools and workflows, including re-use and extension of existing tools, semantics and standards as well as bespoke development of 12 new interoperable components and approaches within the project. Where component reports have already been published within other deliverables that document underpinning technologies and services, these will be signposted to avoid redundancy and duplication.

This collection of Green Deal Data Space components is presented in the form of tested FAIR workflows that consume, use and produce data and metadata for the three identified pilot case studies, to facilitate data-driven decision making on Green Deal priority topics.

The progress described includes:

re-use and extension of existing re-usable components, data and services which can support the pilots and, more broadly, the Green Deal Data Space;

identification of remaining gaps, and of components required to fill those gaps;

development and integration of the identified components;

evaluation of workflow and interface performance, and of output quality and consistency.

Our human-centred co-design approach has enabled us to work closely with sister projects and existing GEO initiatives to ensure efficiency and interoperability.
For each pilot, the reader may refer to D6.1 for in-depth descriptions of the initial rationale, indicators and stakeholders, and for an evaluation of the relative contribution of EO, citizen science, socio-economic and IoT data. In D6.2 we show how workflows supporting areas of Green Deal decision-making have been developed, and illustrate how a range of data and services can be transparently and reproducibly integrated within the Green Deal Data Space to generate scientifically defensible outputs which can be easily discovered, re-used and visualised by stakeholders. The corresponding assessment of scalability, performance, and technology convergence can be found in D6.3.

Abstract

A risk-based human health exposure assessment (HHEA) model was developed to evaluate human exposure in four circular economy (CE) routes investigated in six of the seven case studies of the PROMISCES project. The HHEA is a probabilistic tool evaluating the risk posed to human health. It was applied to the following routes: 1) semi-closed drinking water cycle; 2) groundwater remediation; 3) water reuse for agricultural irrigation; and 4) nutrient recovery. Each of these exposure routes results in a product – drinking water or lettuce – that can be consumed by humans. For some routes, the exposure is purely theoretical, while for others, the entire process chain is investigated in the PROMISCES case study.

The HHEA is built on Bayesian principles and includes Bayesian updating, which enables risk assessment under conditions of low data availability and high uncertainty. This is particularly useful for evaluating substances such as PFAS and other industrial persistent, mobile and potentially toxic (iPMT) substances, whose removal in treatment processes is not yet well studied in the literature. The deliverable explains the different treatments, environmental matrices, and substances which were the focus of the initial assessment. It describes the construction of the HHEA model, explaining how different data types – literature data, site-specific data, and modelled data – are used to update the prior probability of the removal factor for substances in a process. It also describes how non-technical processes, such as mixing or evaporation, have been included in the treatment trains evaluated. Finally, individual reference quotients for the substances are established, which are used to assess the relative risk of the final concentrations in the products which could be consumed by humans.
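A minimal sketch of the kind of conjugate normal-normal update that can sit at the core of such a scheme, assuming removal factors are expressed on a log10 scale with known measurement noise. The prior and observation values below are invented for illustration and are not taken from the deliverable.

```python
import numpy as np

def update_removal_prior(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate normal-normal update of a log10 removal factor.

    prior_mean/prior_sd: literature-based prior for the process.
    obs: site-specific (or modelled) removal observations.
    obs_sd: assumed known observation noise."""
    obs = np.asarray(obs, dtype=float)
    prior_prec = 1.0 / prior_sd**2          # precision = 1 / variance
    data_prec = obs.size / obs_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * obs.mean()) / post_prec
    return post_mean, post_prec**-0.5

# Vague literature prior of 1.0 log10 removal, updated with three
# site-specific measurements that suggest poorer removal.
post_mean, post_sd = update_removal_prior(1.0, 0.5, [0.4, 0.6, 0.5], 0.3)
```

The posterior mean is pulled toward the site data and the posterior standard deviation shrinks, which is how low-data, high-uncertainty substances can still be assessed while new evidence accumulates.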

Abstract

Combined sewer overflows after heavy rainfall events regularly cause oxygen deficits in Berlin's surface waters in summer, in some cases leading to fish kills. To avoid such conditions, in addition to rehabilitating the sewer network, it is necessary, and planned, to disconnect 20 to 40 % of the connected areas in the combined sewer catchments. In the MiSa project (Mischwassereinzugsgebietssanierung, combined sewer catchment rehabilitation), commissioned by the environmental administration, possible disconnection strategies were defined in workshops with Berlin district offices. To evaluate these strategies, a model chain coupling a sewer network model and a water quality model was set up, which for the first time enables an immission-based assessment and thus directly links area disconnection to receiving water quality.

Abstract

Poster presented at the IWA Leading Edge Technologies Conference in Essen, Germany in June 2024

Um unsere Webseite für Sie optimal zu gestalten und fortlaufend verbessern zu können, verwenden wir Cookies. Durch die weitere Nutzung der Webseite stimmen Sie der Verwendung von Cookies zu. Weitere Informationen zu Cookies erhalten Sie in unserer Datenschutzerklärung.