
Refactor Belief Propagation algorithm #1741

Open

ankurankan opened this issue Mar 4, 2024 · 3 comments

Comments

@ankurankan
Member


  1. Clean the current implementation
  2. Better algorithms for creating Junction Trees.
  3. Incorporate the Factor Graph BP into the main algorithm.

Ref #1740

@tristantreb
Contributor

Regarding 3., IMO it makes sense to keep BeliefPropagationWithMessageParsing and BeliefPropagation separate, because they handle inference very differently.

On the one hand, the BeliefPropagation class is tightly bound to the junction tree algorithm. In fact, if you read this article, it describes exactly what happens in the code:

"One method of exact marginalization in general graphs is called the junction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree."

On the other hand, the BeliefPropagationWithMessageParsing class is tightly bound to the message-passing method on factor graphs. A relevant extension for this class would be a to_factor_graph() method that transforms any graph into a factor graph, similar to the existing to_junction_tree() method. Also, I'm planning to create an approximate inference class that inherits from BeliefPropagationWithMessageParsing to handle loopy cases.
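To make the factor-graph side concrete, here is a toy sketch of what sum-product message passing computes on the smallest possible factor graph: a chain A — f(A,B) — B with unary potentials and one pairwise factor. This is plain Python with made-up numbers, not pgmpy's actual implementation:

```python
# Toy sum-product BP on a two-variable factor graph (illustrative only).
phi_A = [0.6, 0.4]               # unary potential on A
phi_B = [0.3, 0.7]               # unary potential on B
psi = [[1.0, 0.5],               # pairwise potential psi[a][b]
       [0.5, 1.0]]

# Message from A, through the pairwise factor, to B:
#   m(b) = sum_a phi_A[a] * psi[a][b]
m_A_to_B = [sum(phi_A[a] * psi[a][b] for a in range(2)) for b in range(2)]

# Unnormalised belief at B, then normalise to get the marginal P(B).
belief_B = [phi_B[b] * m_A_to_B[b] for b in range(2)]
Z = sum(belief_B)
marginal_B = [x / Z for x in belief_B]
# On a tree this is exact: it matches marginalising the full joint
# phi_A[a] * phi_B[b] * psi[a][b] over a.
```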

So, what do you think of keeping them separate?

@tristantreb
Contributor

I would even suggest renaming BeliefPropagation to JunctionTree/BeliefPropagationOnJunctionTree and BeliefPropagationWithMessageParsing to BeliefPropagation/BeliefPropagationOnFactorGraphs. I was initially confused when trying to understand which BP algorithm was implemented in the BeliefPropagation class. Knowing from the start that it was the junction tree algorithm would have been very helpful! Happy to help make the transition.

@ankurankan
Member Author

@tristantreb

A relevant extension for this class is to create a to_factor_graph() method that transforms any graph into a factor graph, similarly to the existing to_junction_tree() method.

We do have a to_factor_graph method here:

def to_factor_graph(self):

IMO it makes sense to keep BeliefPropagationWithMessageParsing and BeliefPropagation separate because they handle the inference very differently.

I was thinking of merging them mainly because both are essentially message-passing algorithms. The difference is that one can deal with cycles in the graph and the other can't. Most users typically start with a Bayesian Network (or, less likely, a Markov Network) and might not be aware of the variations in different message-passing algorithms (such as which one to use under what conditions). So, my idea was to combine these two algorithms such that the factor graph BP is used if the model doesn't have any cycles, and the Junction Tree algorithm otherwise. This would make overall inference faster when there are no cycles in the model. Furthermore, because the junction tree algorithm essentially does a factor graph BP on the constructed junction tree, I believe there would be a lot of overlap in the code. Merging them would also make maintenance easier. What do you think?
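The dispatch idea above could look something like this rough sketch. Everything here is hypothetical (the class name CombinedBP and the backend labels are illustrative, not pgmpy's API); the only real content is the cycle check on the model's undirected structure:

```python
def has_cycle(nodes, edges):
    """Detect a cycle in an undirected graph via iterative DFS
    with parent tracking."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    visited = set()
    for start in nodes:
        if start in visited:
            continue
        stack = [(start, None)]
        while stack:
            node, parent = stack.pop()
            if node in visited:
                # Reached an already-visited node by a non-parent edge.
                return True
            visited.add(node)
            for nb in adj[node]:
                if nb != parent:
                    stack.append((nb, node))
    return False


class CombinedBP:
    """Hypothetical dispatcher: run factor-graph BP on cycle-free
    models, fall back to the junction tree algorithm otherwise."""

    def __init__(self, nodes, edges):
        self.loopy = has_cycle(nodes, edges)

    def inference_backend(self):
        return "junction_tree" if self.loopy else "factor_graph_bp"
```

For example, a star-shaped model would pick the factor-graph backend, while a triangle would pick the junction tree.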
