
The vision and the future of Linopy #207

Open
aurelije opened this issue Nov 22, 2023 · 6 comments

Comments

@aurelije
Contributor

Let's make a general topic where we can stay informed and discuss ideas about Linopy.

First, I would like to know what the vision for linopy is from its author and how he plans to achieve it, so that we know where and how to jump in and help. We can also collect ideas and intended use cases from others and see if there is enough interest in implementing them...

@aurelije changed the title from "The idea and the future of Linopy" to "The vision and the future of Linopy" on Nov 22, 2023
@FabianHofmann
Collaborator

Hey @aurelije, thank you for your message and for the initiative to talk about the future of linopy. Some things I would really like to put on the list for the short run:

I am very open to visionary ideas for new features of linopy. So if you have some, please shout out :)

@aurelije
Contributor Author

aurelije commented Dec 1, 2023

I am wondering whether this project is so tightly coupled with Xarray that this fact should be exposed to the end user, or whether Xarray should be an internal detail that is known but not leaked to the outside. I am thinking about this because pandas recently had the problem of numpy being too exposed; now they are trying to switch to pyarrow, and it is painful because numpy leaks everywhere.

Another question is whether linopy should try to be fully backend-agnostic, so that anything not possible in one backend is not allowed at all. I think that approach has never really worked. Just remember Java and its promise of independence from OS and hardware. Python took a better approach: 95% is the same, and whatever is possible on Linux but not on Windows is simply factored out and documented. Or, in the Java world, standards like JPA for Object-Relational Mapping that hide database details. A lot of things will be possible with Gurobi that are not possible in some open-source solvers.

Maybe the scope of linopy should be to provide a seamless modeling experience, a modeling language for the 21st century: easy, fast, with no tedious iteration over sets, so that people do not have to use middle-ages solutions like GAMS, AMPL, or OPML, or late-20th-century solutions like Pyomo.

I see linopy like Hibernate, or better, SQLAlchemy: you can use it as an ORM so you do not depend on low-level, concrete DB implementation details, but you can always drop down to the lower-level SQL toolkit, still in Python but with more control, and in SQLAlchemy you can even drop down to pure SQL if you have something very specific. The ORM covers 95% of cases, the toolkit the next 4%, and for the remaining 1% you use direct SQL.

In the case of linopy, I would like to be able to create a model and then, if needed, export it to a gurobipy.Model (or whatever the equivalent is for another solver) and from there be fairly independent of linopy. I could then do the rest of the tweaking, parameter setting, debugging, tuning... directly. That would be a rare case: in maybe 95% of cases I would not go to this level, but it matters for specific cases, for expert users, and for going from development to production.
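A rough sketch of the kind of hand-off I mean, going through an LP file for now (assuming linopy's Model.to_file and gurobipy's read work as I understand them); a true in-memory export would avoid the file round-trip:

import linopy
import gurobipy as gp

# build a tiny model in linopy
m = linopy.Model()
x = m.add_variables(lower=0, name="x")
y = m.add_variables(lower=0, name="y")
m.add_constraints(x + 2 * y >= 10, name="demand")
m.add_objective(3 * x + 4 * y)

# hand it over to gurobipy and continue there with full control
m.to_file("model.lp")
gm = gp.read("model.lp")
gm.Params.TimeLimit = 60        # solver-specific tuning
gm.optimize()
if gm.Status == gp.GRB.INFEASIBLE:
    gm.computeIIS()             # solver-specific debugging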

Something like this has already happened: when linopy started closing the Model to save tokens on the Gurobi Compute Server, it meant you could not compute an IIS or do any postprocessing of the result... The solution was to allow the environment to be created outside of linopy and controlled externally.
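For reference, a minimal sketch of that workaround, assuming linopy forwards an env keyword to the Gurobi interface (the exact keyword may differ, so treat it as illustrative):

import gurobipy as gp
import linopy

# the environment is created and owned outside of linopy
env = gp.Env(empty=True)
env.setParam("ComputeServer", "myserver:61000")  # placeholder address
env.start()

m = linopy.Model()
x = m.add_variables(lower=0, name="x")
m.add_objective(1 * x)
m.solve(solver_name="gurobi", env=env)

# because we own the environment, we can keep using it for postprocessing
# (e.g. IIS computation) before disposing of it ourselves
env.dispose()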

The idea is to have linopy as an unobtrusive modeling language first, and then work on extending it where needed.

@aurelije
Contributor Author

I expected more people and more posts on this topic, but maybe this is not the best time of year for a discussion.

But I have an idea. While learning and reading books about modeling, and since linopy now covers more types of mathematical programming, I will certainly implement some of those models using linopy. That way I can get a feeling for what is missing, what could be made simpler, and what is not working as expected. By publishing those examples, other users could use them for learning, and eventually they could also be used for testing and benchmarking.

@FabianHofmann
Collaborator

Really appreciate the initiative and the food for thought. Indeed, things are a bit tight at the moment, but we should keep this issue open to brainstorm further.
I also think of linopy as a vehicle for easily creating and accessing optimization models. However, the interactive and persistent back-and-forth between higher-level linopy models and lower-level optimizer models is definitely something I would envision as the next big step forward. Happy to hear other thoughts as well :)

@odow
Contributor

odow commented Dec 18, 2023

so that people do not have to use middle-ages solutions like GAMS, AMPL, or OPML

Tools like GAMS and AMPL can do a variety of things that "modern" modeling systems still cannot do well (nonlinear, various rewrites of logical statements...).

or late-20th-century solutions like Pyomo.

Pyomo was started in 2008 😄 https://en.wikipedia.org/wiki/Pyomo

I think @aurelije is suggesting something quite similar to JuMP. We have a high-level interface, but solvers like Gurobi expose the full C API, and you can mix and match calls in the same script. Here's an example:

using JuMP, Gurobi
model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x[i in 1:2] <= 40)
@variable(model, y[1:2])
# Call the C API
column(x::VariableRef) = Gurobi.c_column(backend(owner_model(x)), index(x))
GRBaddgenconstrPow(backend(model), "x1^0.7", column(x[1]), column(y[1]), 0.7, "")
GRBaddgenconstrPow(backend(model), "x2^3", column(x[2]), column(y[2]), 3.0, "")
# Back to JuMP
@objective(model, Min, y[1] + y[2])
optimize!(model)

persistent back-and-forth between higher-level linopy models and lower-level optimizer models is definitely something I would envision as the next big step forward.

This would be my biggest suggestion. Persisting in-memory solver objects is a big win. There are fundamental limitations to the file-based I/O.

@aurelije
Contributor Author

Tools like GAMS and AMPL can do a variety of things that "modern" modeling systems still cannot do well (nonlinear, various rewrites of logical statements...).

Assembler can do magic, but nobody uses it for ordinary business programming. OR is also used for business problems, not for systems programming.

Also, rewrites should be the responsibility of the solver. Yes, I know some of them do this poorly... but some of them do it very well. It is like trying to hand-optimize an SQL query because the query planner is doing its job badly.

Linopy (or any other modeling tool) should provide a high-level API for easy modeling and a pluggable architecture that exports the model to some backend solver and maps the solution back.
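Roughly what I mean, sketched with today's linopy API on a toy example: the model is written once, the backend is just a parameter of solve(), and the solution is mapped back onto the variable's coordinates:

import linopy
import pandas as pd

m = linopy.Model()
time = pd.RangeIndex(4, name="time")
x = m.add_variables(lower=0, coords=[time], name="x")
m.add_constraints(x >= 2, name="lower_bound")
m.add_objective(x.sum())

m.solve(solver_name="highs")  # swap in "gurobi", "glpk", ... without touching the model
print(x.solution)             # the result comes back as a labelled xarray DataArray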

Pyomo was started in 2008 😄 https://en.wikipedia.org/wiki/Pyomo

Yes, correct, it started as a retro project :) The same happened with GAMS: they could at least have taken Pascal as a base, which was only a decade old back then, but instead they took a language from the late 50s as their role model.

I haven't used JuMP; I am a pure-blooded software developer (though not only that), so I stick to general-purpose programming languages. I have heard it is a great project, and linopy can certainly learn a lot from it, since it has already solved some problems that linopy will have to solve...

But I think the selling point of linopy is its simplicity and its integration with tools people already use (pandas, xarray...), instead of reinventing the wheel, building some domain-specific language, or doing OOP badly... Linopy's approach with dimension alignment and broadcasting gets rid of explicit indexes and the boring iteration over them (whether explicit or hidden in some iterator/generator). I have seen OR people who are not developers writing 5 nested loops in Python! Yuck!!! I have never seen that from any developer in my life; more than 3 levels of nesting is a big no-go, not only for for-loops but for branching as well. For those people, linopy is pure gold.
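To make the point concrete, here is a toy constraint written loop-free with linopy's broadcasting, next to the explicitly indexed style it replaces (made-up data and names):

import linopy
import pandas as pd
import xarray as xr

plants = pd.Index(["p1", "p2"], name="plant")
hours = pd.RangeIndex(24, name="hour")
capacity = xr.DataArray([100.0, 80.0], coords=[plants])

m = linopy.Model()
gen = m.add_variables(lower=0, coords=[plants, hours], name="gen")

# loop-free: the "plant" dimension is aligned and broadcast over "hour" automatically
m.add_constraints(gen <= capacity, name="capacity_limit")

# versus the nested-loop style this replaces:
# for p in plants:
#     for h in hours:
#         model.add_constraint(gen[p, h] <= capacity[p])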

My idea is for linopy to be a convenience layer between the solver and the person doing the modeling, such that whenever something is not covered by linopy, or you need to access the solver directly for tweaking, optimization, configuration, whatever... you have the power to do it; it is not hidden by linopy.

I am glad that we have JuMP developers communicating with us :)
