Research Post

Scalable Deep Generative Modeling for Sparse Graphs

Learning graph generative models is a challenging task for deep learning, with wide applicability to domains such as chemistry, biology, and social science. However, existing deep neural methods suffer from limited scalability: for a graph with n nodes and m edges, they require \Omega(n^2) time and memory because they build up the full adjacency matrix. Meanwhile, many real-world graphs are sparse, in the sense that m \ll n^2.

Based on this observation, the authors developed BiGG, a novel autoregressive model that exploits sparsity to avoid generating the full adjacency matrix, reducing graph generation time complexity to O((n + m)\log n). Furthermore, during training this autoregressive model can be parallelized with only O(\log n) synchronization stages, making it much more efficient than other autoregressive models, which require \Omega(n) such stages.
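The intuition behind the per-edge \log n cost can be sketched in a few lines. Rather than emitting one yes/no decision per possible neighbor (O(n) per node), the generator can recursively split the candidate index interval in half and decide only whether each half contains at least one edge, so empty regions are pruned in a single step. The sketch below illustrates this idea with a hypothetical `has_edge` oracle standing in for the model's learned Bernoulli decisions; it is a minimal illustration of the recursive interval-splitting principle, not the authors' actual implementation.

```python
# Sketch of the sparsity trick: each emitted edge costs O(log n) binary
# decisions because empty halves of the candidate interval are pruned
# with a single query instead of n per-neighbor queries.
def generate_neighbors(lo, hi, has_edge):
    """Return all neighbors in [lo, hi), one interval query at a time.

    `has_edge(lo, hi)` is a hypothetical placeholder for the model's
    learned decision of whether [lo, hi) contains at least one edge.
    """
    if not has_edge(lo, hi):
        return []            # prune the entire empty interval at once
    if hi - lo == 1:
        return [lo]          # single candidate left: it is a neighbor
    mid = (lo + hi) // 2
    return (generate_neighbors(lo, mid, has_edge)
            + generate_neighbors(mid, hi, has_edge))

# Demo with an oracle over a fixed sparse neighbor set.
targets = {2, 5, 7}
oracle = lambda lo, hi: any(lo <= v < hi for v in targets)
print(generate_neighbors(0, 8, oracle))  # [2, 5, 7]
```

Because the recursion on disjoint intervals is independent, decisions at the same depth can also be made in parallel, which is the source of the O(\log n) synchronization stages during training.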

Experiments on several benchmarks show that the proposed approach not only scales to orders of magnitude larger graphs than previously possible with deep autoregressive graph generative models, but also yields better graph generation quality.

This paper was published at the 37th International Conference on Machine Learning (ICML 2020).
