# Relational Pattern

A Relational Pattern is a Pattern composed of variables connected by a set of relations.

**Context:**
- It can be a linguistic pattern that connects entities.
- It can be discovered/learned by a Relational Pattern Recognition System.
- It can be associated with a Relational Pattern Language.

**Example(s):**
- the pattern “X cause Y”, which connects entity pairs such as (smoking, cancer).

**Counter-Example(s):**
- an Emerging Pattern,
- a Frequent Pattern,
- a Relational Graph.

**See:** Relational Learning, Relational Pattern Language, Relational Database, Relational Data Mining, Relational Dataset Repository, Inductive Logic Programming, Statistical Relational Learning, Graph Mining, Propositionalization, Multi-view Learning, Skip-Gram Model, Pattern Discovery.

## References

### 2016a

- (Takase et al., 2016a) ⇒ Sho Takase, Naoaki Okazaki, and Kentaro Inui. (2016). “Modeling Semantic Compositionality of Relational Patterns.” In: Engineering Applications of Artificial Intelligence Journal, 50(C). doi:10.1016/j.engappai.2016.01.027
- QUOTE: In this task, it is essential to identify the meaning of a relational pattern (a linguistic pattern connecting entities). Based on the distributional hypothesis (Harris, 1954), most previous studies construct a co-occurrence matrix between relational patterns (e.g., “X cause Y”) and entity pairs (e.g., “X: smoking, Y: cancer”), and then they recognize relational patterns sharing the same meaning regarding the co-occurrence distribution as a semantic vector (Mohamed et al., 2011, Min et al., 2012, Nakashole et al., 2012). For example, we can find that the patterns “X cause Y” and “X increase the risk of Y” have the similar meaning because the patterns share many entity pairs (e.g., “X: smoking, Y: cancer”). Using semantic vectors, we can map a relational pattern such as “X cause Y” into a predefined semantic relation such as causality only if we can compute the similarity between the semantic vector of the relational pattern and the prototype vector for the relation. In addition, we can discover relation types by clustering relational patterns based on semantic vectors.
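
As a minimal sketch of the approach described above, the toy example below builds a co-occurrence matrix between relational patterns and entity pairs and compares patterns by cosine similarity of their semantic vectors. The patterns, entity pairs, and counts are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Toy co-occurrence counts between relational patterns (rows) and
# entity pairs (columns). Counts are invented for illustration.
entity_pairs = [("smoking", "cancer"), ("obesity", "diabetes"), ("rain", "floods")]
patterns = {
    "X cause Y":                np.array([8.0, 5.0, 6.0]),
    "X increase the risk of Y": np.array([7.0, 6.0, 4.0]),
    "X is part of Y":           np.array([0.0, 1.0, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two co-occurrence (semantic) vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Patterns sharing many entity pairs get similar semantic vectors.
sim_causal = cosine(patterns["X cause Y"], patterns["X increase the risk of Y"])
sim_other  = cosine(patterns["X cause Y"], patterns["X is part of Y"])
print(f"cause ~ increase-risk: {sim_causal:.3f}")
print(f"cause ~ part-of:       {sim_other:.3f}")
```

The causal patterns share most entity pairs, so their similarity is far higher than that of the unrelated pattern, mirroring the paper's observation about “X cause Y” and “X increase the risk of Y”.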

### 2016b

- (Takase et al., 2016b) ⇒ Sho Takase, Naoaki Okazaki, and Kentaro Inui. (2016). “Composing Distributed Representations of Relational Patterns.” In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). doi:10.18653/v1/P16-1215 arXiv:1707.07265
- QUOTE: In particular, semantic modeling of relations and their textual realizations (relational patterns hereafter) is extremely important because a relation (e.g., causality) can be mentioned by various expressions (e.g., “X cause Y”, “X lead to Y”, “Y is associated with X”). To make matters worse, relational patterns are highly productive: we can produce an emphasized causality pattern “X increase the severe risk of Y” from “X increase the risk of Y” by inserting severe to the pattern. To model the meanings of relational patterns, the previous studies built a co-occurrence matrix between relational patterns (e.g., “X increase the risk of Y”) and entity pairs (e.g., “X: smoking, Y: cancer”) (Lin and Pantel, 2001; Nakashole et al., 2012). Based on the distributional hypothesis (Harris, 1954), we can compute a semantic vector of a relational pattern from the co-occurrence matrix, and measure the similarity of two relational patterns as the cosine similarity of the vectors.
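
The productivity issue above can be illustrated with the simplest compositional baseline, additive composition: a pattern vector is the sum of its word vectors, so inserting “severe” shifts the vector by exactly one word embedding. The random embeddings below are placeholders; in practice they would come from a trained model such as Skip-Gram, and the paper studies learned composition functions rather than this baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical word embeddings (random for illustration only).
vocab = ["X", "increase", "the", "severe", "risk", "of", "Y"]
emb = {w: rng.normal(size=8) for w in vocab}

def compose(pattern):
    """Additive composition: the pattern vector is the sum of its word vectors."""
    return sum(emb[w] for w in pattern.split())

base     = compose("X increase the risk of Y")
emphatic = compose("X increase the severe risk of Y")

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two pattern vectors differ only by the embedding of "severe".
print(f"cosine(base, emphatic) = {cosine(base, emphatic):.3f}")
```

Under additive composition, the emphasized pattern stays close to the original in vector space, which is why composition is an attractive way to handle productively generated patterns.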

### 2011a

- (Geilke & Zilles, 2011) ⇒ Michael Geilke, and Sandra Zilles. (2011). “Learning Relational Patterns.” In: Proceedings of the International Conference on Algorithmic Learning Theory (ALT 2011). Lecture Notes in Computer Science. ISBN:978-3-642-24411-7, 978-3-642-24412-4, doi:10.1007/978-3-642-24412-4_10
- QUOTE: Let [math]R[/math] be a set of relations over [math]\Sigma^*[/math]. Then, for any [math]n \in N_+[/math], [math]R_n[/math] denotes the set of [math]n[/math]-ary relations in [math]R[/math]. A relational pattern with respect to [math]\Sigma[/math] and [math]R[/math] is a pair [math](p,v_R)[/math], where [math]p[/math] is a pattern over [math]\Sigma[/math] and [math]v_R \subseteq \{(r,y_1,\cdots,y_n) \mid n \in N_+, r \in R_n,[/math] and [math]y_1,\cdots,y_n[/math] are variables in [math]p\}[/math]. The set of relational patterns with respect to [math]R[/math] will be denoted by [math]Pat_{\Sigma, R}[/math].

The set of all possible substitutions for [math](p,v_R)[/math] is denoted by [math]\Theta_{(p,v_R),\Sigma}[/math]. It contains all substitutions [math]\theta \in \Theta_{\Sigma}[/math] that fulfill, for all [math]n \in N_+[/math]:

[math]\forall\, r \in R_n \; \forall\, y_1,\cdots,y_n \in X \Big[(r, y_1,\cdots,y_n) \in v_R \Rightarrow \left(\theta(y_1),\cdots,\theta(y_n)\right) \in r\Big][/math]

The language of [math](p,v_R)[/math], denoted by [math]L(p,v_R)[/math], is defined as [math]\{w \in \Sigma^* \mid \exists\, \theta \in \Theta_{(p,v_R),\Sigma}: \theta(p) = w\}[/math]. The set of all languages of relational patterns with respect to [math]R[/math] will be denoted by [math]\mathcal{L}_{\Sigma, R}[/math].

For instance, [math]r=\{(w_1,w_2) \mid w_1,w_2 \in \Sigma^* \wedge |w_1|=|w_2|\}[/math] is a binary relation which, applied to two variables [math]x_1[/math] and [math]x_2[/math] in a relational pattern [math](p,v_R)[/math], ensures that the substitutions of [math]x_1[/math] and [math]x_2[/math] generating words from [math]p[/math] always have the same length. Formally, this is done by including [math](r, x_1, x_2)[/math] in [math]v_R[/math].
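
The equal-length example above can be sketched as a membership test for [math]L(p,v_R)[/math], here for the concrete pattern [math]p = x_1 a x_2[/math] with [math](r, x_1, x_2) \in v_R[/math]. The sketch assumes non-erasing substitutions in the style of Angluin's pattern languages; the encoding and helper names are illustrative, not from the paper.

```python
def eq_len(w1, w2):
    """The binary relation r = {(w1, w2) : |w1| = |w2|} from the example."""
    return len(w1) == len(w2)

def in_language(word, alphabet=frozenset("ab")):
    """Membership test for L(p, v_R) with p = x1 'a' x2 and (eq_len, x1, x2) in v_R.

    Enumerates every split word = theta(x1) + 'a' + theta(x2) with non-empty
    substitutions over the alphabet and checks the relation constraint.
    """
    for i, ch in enumerate(word):
        if ch != "a":
            continue
        x1, x2 = word[:i], word[i + 1:]
        if x1 and x2 and set(x1) <= alphabet and set(x2) <= alphabet and eq_len(x1, x2):
            return True
    return False

print(in_language("bab"))  # True: x1 = "b", x2 = "b"
print(in_language("aaa"))  # True: x1 = "a", x2 = "a"
print(in_language("ba"))   # False: no split with |x1| = |x2|
```

Brute-force enumeration of substitutions only works for such toy patterns; the paper's concern is the learnability of these languages, not efficient matching.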


### 2011b

- (Giacometti et al., 2011) ⇒ Arnaud Giacometti, Patrick Marcel, and Arnaud Soulet. (2011). “A Relational View of Pattern Discovery.” In: Proceedings of the 16th International Conference on Database Systems for Advanced Applications (DASFAA 2011), Part I. ISBN:978-3-642-20148-6 doi:10.1007/978-3-642-20149-3_13

### 2006

- (Culotta et al., 2006) ⇒ Aron Culotta, Andrew McCallum, and Jonathan Betz. (2006). “Integrating Probabilistic Extraction Models and Data Mining to Discover Relations and Patterns in Text.” In: Proceedings of HLT-NAACL 2006.