Slot filling is one of the critical tasks in modern conversational systems. The majority of existing literature employs supervised learning methods, which require labeled training data for each new domain. Zero-shot learning and weak supervision approaches, among others, have shown promise as alternatives to manual labeling. Nonetheless, these learning paradigms perform significantly worse than supervised approaches. To narrow this performance gap and demonstrate the possibility of open-domain slot filling, we propose a Self-supervised Co-training framework that requires zero in-domain manually labeled training examples and works in three phases. Phase one automatically acquires two complementary sets of pseudo labels. Phase two leverages the power of the pre-trained language model BERT, adapting it to the slot filling task using these sets of pseudo labels. In phase three, we introduce a self-supervised co-training mechanism in which both models automatically select high-confidence soft labels to further improve each other's performance in an iterative fashion. Our thorough evaluations show that our framework outperforms state-of-the-art models by 45.57% and 37.56% on the SGD and MultiWoZ datasets, respectively. Moreover, it achieves performance comparable to state-of-the-art fully supervised models.
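The three-phase procedure described in the abstract can be sketched in code. The snippet below is purely illustrative: the `Model` class is a toy stand-in for a BERT-based slot filler, and the method names, confidence threshold, and training interface are hypothetical assumptions, not the paper's actual implementation.

```python
class Model:
    """Toy stand-in for a BERT-based slot-filling model (hypothetical API)."""

    def __init__(self):
        self.memory = {}  # token -> slot label observed during training

    def train(self, examples):
        # Fit on (token, label) pairs; a real model would fine-tune BERT here.
        for token, label in examples:
            self.memory[token] = label

    def predict(self, token):
        # Return (label, confidence); confident only for tokens seen in training.
        if token in self.memory:
            return self.memory[token], 1.0
        return "O", 0.0


def cotrain(model_a, model_b, pseudo_a, pseudo_b, unlabeled, rounds=2, tau=0.9):
    # Phase 2: adapt each model on its own complementary pseudo-label set.
    model_a.train(pseudo_a)
    model_b.train(pseudo_b)

    # Phase 3: each model selects high-confidence soft labels for the other,
    # iterating so the two models teach each other.
    for _ in range(rounds):
        picks_a, picks_b = [], []
        for x in unlabeled:
            label, conf = model_a.predict(x)
            if conf >= tau:
                picks_a.append((x, label))
            label, conf = model_b.predict(x)
            if conf >= tau:
                picks_b.append((x, label))
        model_b.train(picks_a)  # A's confident labels improve B
        model_a.train(picks_b)  # B's confident labels improve A
    return model_a, model_b
```

In this sketch, knowledge flows both ways: a label that only model A holds after phase two propagates to model B through the co-training rounds, and vice versa, which is the intuition behind using two complementary pseudo-label sets.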
|Title of host publication||ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023|
|Number of pages||10|
|State||Published - Apr 30 2023|
|Event||2023 World Wide Web Conference, WWW 2023 - Austin, United States|
Duration: Apr 30 2023 → May 4 2023
|Name||ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023|
|Conference||2023 World Wide Web Conference, WWW 2023|
|Period||4/30/23 → 5/4/23|
Bibliographical note: Publisher Copyright © 2023 ACM.
Keywords
- open-domain slot filling
- weak supervision
ASJC Scopus subject areas
- Computer Networks and Communications