Keynote: K-BERT - Enabling Language Representation with Knowledge Graph
Knowledge-enhanced pre-training aims to leverage structured knowledge from knowledge graphs to strengthen pre-trained language models. This allows the models to learn both general semantic knowledge from free text and factual knowledge about the real-world entities behind the text, so that they can handle knowledge-driven downstream tasks more effectively. Existing knowledge enhancement methods for pre-trained language models fall broadly into three types: augmenting input features with knowledge, improving the model architecture with knowledge, and constraining the training tasks with knowledge. These approaches inject knowledge at the input layer, the encoding layer, and the pre-training task layer, respectively.
K-BERT belongs to the first category: knowledge from a knowledge graph is injected into the model's input via entity linking. It first identifies entities in the input text, queries the knowledge graph for triples about them, and grafts these triples onto the sentence, expanding the original text into a sentence tree. The pre-order traversal of this tree is then used as the model's input sequence. Because the inserted triples may introduce noise and cause the sentence to drift from its original meaning, K-BERT further mitigates this issue at the input stage with soft position embeddings and a visibility matrix, which limit each injected triple's influence to the entity it is attached to.
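To make the input construction concrete, below is a minimal sketch of the sentence-tree expansion, soft position indices, and visibility matrix described above. It assumes a toy knowledge graph stored as a Python dict and whitespace tokenization; the function name `build_kbert_input` and the example triples are illustrative and not taken from the official K-BERT code.

```python
# Toy knowledge graph: entity -> list of (relation, object) triples.
# Illustrative only; real K-BERT queries a large external KG.
KG = {
    "Beijing": [("capital_of", "China")],
}

def build_kbert_input(tokens, kg=KG):
    """Expand `tokens` with KG triples and return the flattened token
    sequence (pre-order traversal of the sentence tree), soft position
    indices, and a boolean visibility matrix."""
    out_tokens, soft_pos = [], []
    trunk_idx, branches = [], []  # trunk token indices and their attached triples

    pos = 0
    for tok in tokens:
        trunk_idx.append(len(out_tokens))
        out_tokens.append(tok)
        soft_pos.append(pos)
        attached = []
        for rel, obj in kg.get(tok, []):
            triple_idx = []
            # Branch tokens continue the soft position count from the entity,
            # while the next trunk token resumes as if the branch were absent.
            for offset, t in enumerate([rel, obj], start=1):
                triple_idx.append(len(out_tokens))
                out_tokens.append(t)
                soft_pos.append(pos + offset)
            attached.append(triple_idx)
        branches.append(attached)
        pos += 1

    n = len(out_tokens)
    visible = [[False] * n for _ in range(n)]
    # Trunk (original sentence) tokens all see each other.
    for i in trunk_idx:
        for j in trunk_idx:
            visible[i][j] = True
    # Each injected triple sees only itself and the entity it hangs from.
    for anchor, attached in zip(trunk_idx, branches):
        for triple_idx in attached:
            group = [anchor] + triple_idx
            for i in group:
                for j in group:
                    visible[i][j] = True
    return out_tokens, soft_pos, visible

tokens, soft_pos, visible = build_kbert_input("Tim visited Beijing today".split())
print(tokens)    # ['Tim', 'visited', 'Beijing', 'capital_of', 'China', 'today']
print(soft_pos)  # [0, 1, 2, 3, 4, 3]
```

Note how "today" keeps soft position 3 even though it appears after the injected branch, and how "capital_of"/"China" are invisible to every token except "Beijing"; these two mechanisms are what keep the added triples from distorting the original sentence.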