Few-Shot Joint Extraction of Entity-Relation Triples in the Domain of Ancient Chinese Medical Texts
DOI: https://doi.org/10.63313/EBM.9056

Keywords: Joint entity and relation extraction, Few-shot relation extraction, Prototype network, Ancient Chinese medical texts

Abstract
Entity–relation extraction aims to identify structured triples from unstructured text. However, most existing models are evaluated on large, densely annotated corpora; when applied to vertical domains with scarce labeled data, fully supervised models tend to generalize poorly. Existing few-shot learning approaches often simplify the triple extraction task into relation classification with known entities, or overlook the common real-world scenario where a sentence contains multiple triples. To address these challenges, this paper focuses on ancient Chinese medical texts and proposes a multi-granularity fused prototype network. The model fully integrates semantic information between relations and entities during prototype construction to enhance the discriminability among different prototypes. A pointer network is further employed to enable the extraction of multiple triples from a single text. Experiments on a self-constructed dataset of ancient Chinese medical texts demonstrate the effectiveness of the proposed approach.
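To make the few-shot setup concrete, the sketch below illustrates the two generic building blocks the abstract names: a standard prototypical-network episode (class prototypes as mean support embeddings, queries scored by negative squared Euclidean distance, following Snell et al., 2017) and pointer-style span decoding that can recover several entity spans from one sentence. This is an illustrative sketch only; the function names, tensor shapes, and threshold are assumptions, and the paper's multi-granularity fusion of relation and entity semantics is not reproduced.

```python
# Minimal sketch of two generic components named in the abstract
# (illustrative only; names, shapes, and the threshold are assumed,
# not taken from the paper).
import torch
import torch.nn.functional as F

def build_prototypes(support_emb: torch.Tensor) -> torch.Tensor:
    # Prototypical networks (Snell et al., 2017): each relation's
    # prototype is the mean of its K support-sentence embeddings.
    # support_emb: [N_classes, K_shot, dim] -> [N_classes, dim]
    return support_emb.mean(dim=1)

def classify_queries(query_emb: torch.Tensor,
                     prototypes: torch.Tensor) -> torch.Tensor:
    # Score each query by negative squared Euclidean distance to
    # every prototype, then softmax into class probabilities.
    # query_emb: [N_query, dim] -> [N_query, N_classes]
    dists = torch.cdist(query_emb, prototypes) ** 2
    return F.softmax(-dists, dim=-1)

def decode_spans(start_prob: torch.Tensor,
                 end_prob: torch.Tensor,
                 threshold: float = 0.5):
    # Generic pointer-style decoding: two binary taggers mark
    # candidate start/end tokens; pairing each start with the
    # nearest following end lets one sentence yield several spans,
    # and hence several triples.
    starts = (start_prob >= threshold).nonzero(as_tuple=True)[0].tolist()
    ends = (end_prob >= threshold).nonzero(as_tuple=True)[0].tolist()
    spans = []
    for s in starts:
        following = [e for e in ends if e >= s]
        if following:
            spans.append((s, following[0]))
    return spans

# Toy 3-way 5-shot episode with 64-dim sentence embeddings.
support = torch.randn(3, 5, 64)
queries = torch.randn(8, 64)
probs = classify_queries(queries, build_prototypes(support))
print(probs.shape)  # torch.Size([8, 3])

# Toy pointer decoding over a 6-token sentence.
print(decode_spans(torch.tensor([0.9, 0.1, 0.8, 0.1, 0.1, 0.1]),
                   torch.tensor([0.1, 0.9, 0.1, 0.1, 0.9, 0.1])))
# [(0, 1), (2, 4)]
```

In the proposed model, the plain mean in build_prototypes would be replaced by the multi-granularity fusion of relation- and entity-level semantics described in the abstract; that step is specific to the paper and is not sketched here.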
License
Copyright (c) 2025 by author(s) and Erytis Publishing Limited.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.