Document Type : Research Paper
Author
none
Abstract
With the emergence of Generative Flow Networks (GFlowNets) as a new paradigm in amortized inference, significant questions have arisen regarding the standing of traditional sampling methods such as Markov Chain Monte Carlo (MCMC). While generative models promise to mitigate "mode mixing" challenges, their precise performance boundaries relative to computationally cheaper classical methods remain ambiguous. In this study, we conduct a comprehensive comparative evaluation between major GFlowNet objectives (including Trajectory Balance (TB), Detailed Balance (DB), and Flow Matching (FM)) and the Metropolis-Hastings algorithm in discrete environments. The primary focus of this investigation is to analyze the sensitivity of these models to reward landscape geometry and dimensional complexity. We examine the conditions under which the computational overhead of training a deep model is justified, and identify the critical points at which traditional methods maintain their robustness. The findings of this research provide novel insights into selecting the optimal sampling strategy and challenge assumptions regarding the universal superiority of learning-based approaches.
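To make the "mode mixing" challenge concrete, the sketch below runs a minimal Metropolis-Hastings sampler on a one-dimensional discrete state space with a bimodal reward. The environment (a 100-state ring, two Gaussian-shaped reward peaks) and the symmetric ±1 random-walk proposal are illustrative assumptions, not the paper's experimental setup; they merely show the local-move dynamics that make crossing low-reward valleys between modes difficult.

```python
import math
import random

def metropolis_hastings(reward, n_states, steps, init, seed=0):
    """Sample states with frequency proportional to reward(s) on the ring
    {0, ..., n_states-1}, using a symmetric +/-1 random-walk proposal."""
    rng = random.Random(seed)
    state = init
    samples = []
    for _ in range(steps):
        proposal = (state + rng.choice((-1, 1))) % n_states
        # Symmetric proposal, so the acceptance probability reduces to
        # min(1, R(x') / R(x)); uphill moves are always accepted.
        if rng.random() < min(1.0, reward(proposal) / reward(state)):
            state = proposal
        samples.append(state)
    return samples

def reward(s):
    # Two reward peaks (modes) at s=20 and s=80, separated by a deep
    # low-reward valley; the small constant keeps the reward positive.
    return (math.exp(-((s - 20) ** 2) / 50)
            + math.exp(-((s - 80) ** 2) / 50)
            + 1e-6)

# Start inside one mode: the local proposal rarely carries the chain
# across the valley, which is the mixing failure discussed above.
samples = metropolis_hastings(reward, n_states=100, steps=20000, init=20)
```

Because the proposal only moves one state at a time, the chain concentrates around whichever mode it starts in; a GFlowNet, by contrast, samples trajectories from an amortized policy and need not traverse the valley step by step.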
Keywords