Adaptive Terrain-Cognizant Multispectral Vision and Distributed Intelligence for Low-Altitude Unmanned Systems in Yunnan Plateau and Canyon Ecosystems
DOI: https://doi.org/10.63313/AERpc.9099

Keywords: Unmanned aerial systems, Visual odometry, SLAM, Multispectral imaging, Spectral attention networks, Edge–cloud computing, Ecological monitoring, Terrain adaptation, Yunnan Plateau

Abstract
The rapid expansion of the low-altitude economy across southwestern China calls for autonomous unmanned aerial systems capable of sustained operation within deeply dissected topography and hyperdiverse ecosystems. This study presents a unified framework, designated LAVENS, that integrates terrain-cognizant adaptive visual odometry with hierarchical spectral attention networks to address navigation robustness and ecological inference accuracy simultaneously in the complex terrain of Yunnan. A terrain-roughness adaptive factor embedded in an extended Kalman filtering pipeline modulates the measurement noise covariance in real time, mitigating scale drift during canyon traversal. In parallel, a hierarchical spectral attention network processes ten-band multispectral tensors with scaled dot-product self-attention to resolve vegetation boundaries under heterogeneous canopy illumination. Edge–cloud orchestration based on directed-acyclic-graph scheduling keeps end-to-end latency below 150 ms while conforming to low-altitude airspace traffic management protocols. Validation across six altitudinal belts, from tropical rainforest to nival rock, shows that the proposed method reduces absolute trajectory error by a mean of 41.3 percent relative to current open-source SLAM libraries and improves mean intersection-over-union for canopy species segmentation by 18.7 percent over conventional encoder–decoder architectures. These findings establish a scalable software and hardware architecture for ecological logistics and infrastructure inspection within the low-altitude economic corridor of the Yunnan Plateau.
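The abstract's core navigation idea, scaling the EKF measurement noise covariance by a terrain-roughness factor, can be illustrated with a minimal sketch. The paper does not specify how the roughness factor is computed or how it enters the filter; the proxy below (normalized standard deviation of a local depth patch) and the helper names `roughness_factor` and `ekf_update` are assumptions for illustration only.

```python
import numpy as np

def roughness_factor(depth_patch, k=1.0):
    """Terrain-roughness proxy (assumed form): 1 + k * coefficient of
    variation of a local depth patch. Flat terrain -> ~1, rough -> >1."""
    patch = np.asarray(depth_patch, dtype=float)
    return 1.0 + k * float(np.std(patch) / (np.mean(patch) + 1e-6))

def ekf_update(x, P, z, H, R_base, rho):
    """One EKF measurement update with the noise covariance inflated by
    the roughness factor rho, so rough-terrain measurements are trusted
    less (a plausible reading of the abstract, not the authors' code)."""
    R = rho * R_base                      # modulated measurement noise
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # state update
    P_new = (np.eye(len(x)) - K @ H) @ P  # covariance update
    return x_new, P_new
```

Inflating `R` shrinks the Kalman gain, so noisy visual measurements over rough canyon walls perturb the pose estimate less, which is one way such a scheme could curb scale drift.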
License
Copyright (c) 2026 by author(s) and Erytis Publishing Limited.

This work is licensed under a Creative Commons Attribution 4.0 International License.