A popular research topic in Graph Convolutional Networks (GCNs) is speeding up the training of the network.
The main bottleneck in training a GCN is the exponential growth of computation, since each node's neighborhood expands with every layer.
Cluster-GCN, based on the observation that each node and its neighbors are usually grouped in the same cluster, exploits the clustering structure of the graph and expands each node's neighborhood only within its cluster when training the GCN.
The main assumption of Cluster-GCN is that the relations between clusters are weak, which does not hold for all graphs. Here we extend their approach by using overlapping clustering instead of the crisp clustering used in Cluster-GCN.
This is achieved by allowing marginal nodes to contribute to training in more than one cluster. The proposed method is evaluated through experiments on several benchmark datasets.
The experimental results show that the proposed method is, on average, more efficient than Cluster-GCN.
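The overlapping-cluster idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a crisp partition is already given (e.g., from a graph clustering algorithm) and then adds each marginal node, i.e., a node with a neighbor in another cluster, to that neighboring cluster as well. The graph, partition, and function name are hypothetical.

```python
def overlap_clusters(adj, assignment, num_clusters):
    """Extend a crisp partition into overlapping clusters.

    adj: dict mapping node -> set of neighbor nodes
    assignment: dict mapping node -> crisp cluster id
    Returns a list of node sets, one (possibly overlapping) cluster per id.
    """
    # Start from the crisp clusters.
    clusters = [set() for _ in range(num_clusters)]
    for node, c in assignment.items():
        clusters[c].add(node)
    # A marginal node has a neighbor in another cluster; add the node
    # to every neighboring cluster so it contributes to training there too.
    for node, neighbors in adj.items():
        for nb in neighbors:
            c_nb = assignment[nb]
            if c_nb != assignment[node]:
                clusters[c_nb].add(node)
    return clusters

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge 2-3.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
clusters = overlap_clusters(adj, assignment, 2)
print(clusters)  # the marginal nodes 2 and 3 appear in both clusters
```

With crisp clustering, the cross-cluster edge 2-3 would be dropped during training; with the overlap, nodes 2 and 3 are present in both clusters, so that edge still contributes.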