Semantic segmentation based on Deep learning for the detection of Cyanobacterial Harmful Algal Blooms (CyanoHABs) using synthetic images
dc.contributor.author | Barrientos-Espillco, Fredy | |
dc.contributor.author | Gascó, Esther | |
dc.contributor.author | López-González, Clara I. | |
dc.contributor.author | Gómez-Silva, María J. | |
dc.contributor.author | Pajares, Gonzalo | |
dc.date.accessioned | 2023-06-22T11:11:39Z | |
dc.date.available | 2023-06-22T11:11:39Z | |
dc.date.issued | 2023-04 | |
dc.description.abstract | Cyanobacterial Harmful Algal Blooms (CyanoHABs) in lakes and reservoirs have increased substantially in recent decades due to different environmental factors. Their early detection is crucial to minimize health effects, particularly in water bodies used for drinking and recreation. The use of Autonomous Surface Vehicles (ASVs) equipped with onboard machine vision systems (cameras) represents a useful alternative at this time. In this regard, we propose an image Semantic Segmentation approach based on Deep Learning with Convolutional Neural Networks (CNNs) for the early detection of CyanoHABs from an ASV perspective. The use of these models is justified by the fact that their convolutional architecture can capture both spectral and textural information, considering the context of a pixel and its neighbors. Training these models requires data, but acquiring real images is difficult: algae appear on water surfaces sporadically and intermittently, sometimes only after long periods (even years), which would demand the permanent installation of an image capture system. This justifies the generation of synthetic data, so that models can be sufficiently trained to detect CyanoHAB patches when they emerge on the water surface. The generation of training data and the use of semantic segmentation models to capture contextual information determine the need for the proposal, as well as its novelty and contribution. 
Three datasets of images containing CyanoHABs patches are generated: (a) the first contains real patches of CyanoHABs as foreground and images of lakes and reservoirs as background, but with a limited number of examples; (b) the second contains synthetic patches of CyanoHABs, generated with the state-of-the-art Style-based Generative Adversarial Network with Adaptive Discriminator Augmentation (StyleGAN2-ADA) and Neural Style Transfer, as foreground and images of lakes and reservoirs as background; and (c) the third is the combination of the previous two. Four semantic segmentation architectures (UNet++, FPN, PSPNet, and DeepLabV3+), each with two encoder backbones (ResNet50 and EfficientNet-b6), are trained on each dataset and evaluated on real test images with different distributions. The results show the feasibility of the approach and that the UNet++ model with EfficientNet-b6, trained on the third dataset, achieves good generalization and performance on the real test images. | |
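The dataset construction described in the abstract, where real or synthetic CyanoHAB patches are composited as foreground over lake and reservoir backgrounds to produce an image and its pixel-wise label, can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' pipeline: the function name, the rectangular paste placement, and the binary single-class mask are all assumptions made for illustration.

```python
import numpy as np

def composite_synthetic_sample(background, patch, mask, top, left):
    """Paste an algal-bloom foreground patch onto a water-body background.

    background: (H, W, 3) uint8 image of a lake or reservoir
    patch:      (h, w, 3) uint8 image containing a CyanoHAB patch
    mask:       (h, w) binary array, 1 where the patch contains algae
    Returns the composited image and its pixel-wise segmentation label.
    """
    image = background.copy()
    label = np.zeros(background.shape[:2], dtype=np.uint8)
    h, w = mask.shape
    fg = mask.astype(bool)
    # The slice is a view into `image`, so writing through it edits the copy.
    region = image[top:top + h, left:left + w]
    # Copy foreground pixels only where the mask is set; keep water elsewhere.
    region[fg] = patch[fg]
    label[top:top + h, left:left + w] = mask
    return image, label
```

Keeping the mask alongside the composited image is what makes the synthetic data usable for supervised semantic segmentation: the label is known exactly by construction, with no manual annotation.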
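Evaluating segmentation models of the kind named above is typically done with overlap metrics such as Intersection-over-Union between the predicted and ground-truth masks. A minimal sketch for binary masks (illustrative; the paper's exact evaluation protocol and metric implementation are not reproduced here):

```python
import numpy as np

def iou_score(pred, target, eps=1e-7):
    """Intersection-over-Union for a pair of binary segmentation masks.

    pred, target: arrays of the same shape; nonzero entries count as positive.
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))
```

A model that generalizes well from synthetic training data to real test images, as reported for UNet++ with EfficientNet-b6, would show a high IoU on the real test set.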
dc.description.department | Depto. de Arquitectura de Computadores y Automática | |
dc.description.faculty | Fac. de Informática | |
dc.description.refereed | TRUE | |
dc.description.sponsorship | Comunidad Autónoma de Madrid | |
dc.description.sponsorship | Spanish Ministry of Science, Innovation and Universities | |
dc.description.sponsorship | Ministry of Education of Peru | |
dc.description.sponsorship | Spanish Ministry of Universities | |
dc.description.status | pub | |
dc.eprint.id | https://eprints.ucm.es/id/eprint/78130 | |
dc.identifier.doi | 10.1016/j.asoc.2023.110315 | |
dc.identifier.issn | 1568-4946 | |
dc.identifier.officialurl | https://doi.org/10.1016/j.asoc.2023.110315 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14352/72195 | |
dc.journal.title | Applied Soft Computing | |
dc.language.iso | eng | |
dc.page.initial | 110315 | |
dc.relation.projectID | Research Project IA-GES-BLOOM-CM (Y2020/TCS-6420) | |
dc.relation.projectID | Research Project AMPBAS (RTI2018-098962-BC21) | |
dc.rights | Attribution 3.0 Spain | |
dc.rights.accessRights | open access | |
dc.rights.uri | https://creativecommons.org/licenses/by/3.0/es/ | |
dc.subject.keyword | Cyanobacterial harmful algal blooms | |
dc.subject.keyword | Semantic segmentation | |
dc.subject.keyword | Generative adversarial network | |
dc.subject.keyword | Neural style transfer | |
dc.subject.keyword | Convolutional neural networks | |
dc.subject.keyword | Deep learning | |
dc.subject.keyword | Autonomous surface vehicles | |
dc.subject.ucm | Inteligencia artificial (Informática) | |
dc.subject.ucm | Software | |
dc.subject.unesco | 1203.04 Inteligencia Artificial | |
dc.subject.unesco | 3304.16 Diseño Lógico | |
dc.title | Semantic segmentation based on Deep learning for the detection of Cyanobacterial Harmful Algal Blooms (CyanoHABs) using synthetic images | |
dc.type | journal article | |
dc.type.hasVersion | VoR | |
dc.volume.number | 141 | |
dspace.entity.type | Publication |
Original bundle: 1-s2.0-S1568494623003332-main.pdf (2.78 MB, Adobe Portable Document Format)