
Integer Problem with MILX and MIPM #99

Open
howardzzhang opened this issue Sep 8, 2022 · 11 comments

@howardzzhang commented Sep 8, 2022

Hi,

Thanks for the great package.

I've been testing out an integer minimization problem with GA using mutation = MIPM(lx,ux) and crossover = MILX() based on https://github.com/wildart/Evolutionary.jl/blob/master/examples/mixedint.jl.

I have been getting an Int(some float) InexactError. I am wondering whether it's because MILX is missing the truncation step that MIPM performs here:

if isa(x, Integer)
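
For reference, here is a stripped-down sketch of the kind of setup I'm running (the objective, bounds, and sizes are placeholders, and the optimize argument order is my assumption rather than a verified call):

using Evolutionary

# Placeholder integer objective; my real objective is more involved.
f(x) = sum(abs2, x .- 3)

lx = fill(0, 5)                              # integer lower bounds
ux = fill(10, 5)                             # integer upper bounds
population = [rand(0:10, 5) for _ in 1:50]   # Vector{Vector{Int64}}

ga = GA(populationSize = 50,
        selection = tournament(5),
        crossover = MILX(),
        mutation = MIPM(lx, ux))

# Argument order here is an assumption; adjust to the package docs if needed.
result = Evolutionary.optimize(f, BoxConstraints(lx, ux), ga, population)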

Thanks!

@wildart (Owner) commented Sep 8, 2022

Do you have an error stacktrace?

@howardzzhang (Author) commented Sep 8, 2022

Hi,

Thanks. The code works fine with SPX, but with MILX it returns the following InexactError:

ERROR: InexactError: Int64(12.310155117105213)
Stacktrace:
  [1] Int64
    @ .\float.jl:788 [inlined]
  [2] convert
    @ .\number.jl:7 [inlined]
  [3] setindex!
    @ .\array.jl:966 [inlined]
  [4] _unsafe_copyto!(dest::Vector{Int64}, doffs::Int64, src::Vector{Float64}, soffs::Int64, n::Int64)
    @ Base .\array.jl:253
  [5] unsafe_copyto!
    @ .\array.jl:307 [inlined]
  [6] _copyto_impl!
    @ .\array.jl:331 [inlined]
  [7] copyto!
    @ .\array.jl:317 [inlined]
  [8] copyto!
    @ .\array.jl:343 [inlined]
  [9] copyto_axcheck!
    @ .\abstractarray.jl:1127 [inlined]
 [10] Array
    @ .\array.jl:626 [inlined]
 [11] convert
    @ .\array.jl:617 [inlined]
 [12] setindex!
    @ .\array.jl:966 [inlined]
 [13] recombine!(offspring::Vector{Vector{Int64}}, parents::Vector{Vector{Int64}}, selected::Vector{Int64}, method::GA{Evolutionary.var"#tournamentN#267"{Evolutionary.var"#tournamentN#266#268"{typeof(argmin), Int64}}, Evolutionary.var"#milxxvr#176"{Evolutionary.var"#milxxvr#172#177"{Float64, Float64, Float64}}, Evolutionary.var"#mutation#214"{Evolutionary.var"#mutation#209#215"{Vector{Int64}, Vector{Int64}, Float64, Float64, Evolutionary.var"#pm_mutation#213"}}}, n::Int64; rng::TaskLocalRNG)
    @ Evolutionary .julia\packages\Evolutionary\65hL6\src\ga.jl:108
 [14] update_state!(objfun::EvolutionaryObjective{var"#compute_performance_anonymous#15"{Vector{Vector{Matrix{Float64}}}, Vector{Array{Float64}}, Vector{Array{Float64}}, Int64, Vector{Float64}, Int64}, Float64, Vector{Int64}, Val{:serial}}, constraints::BoxConstraints{Int64}, state::Evolutionary.GAState{Float64, Vector{Int64}}, parents::Vector{Vector{Int64}}, method::GA{Evolutionary.var"#tournamentN#267"{Evolutionary.var"#tournamentN#266#268"{typeof(argmin), Int64}}, Evolutionary.var"#milxxvr#176"{Evolutionary.var"#milxxvr#172#177"{Float64, Float64, Float64}}, Evolutionary.var"#mutation#214"{Evolutionary.var"#mutation#209#215"{Vector{Int64}, Vector{Int64}, Float64, Float64, Evolutionary.var"#pm_mutation#213"}}}, options::Evolutionary.Options{Nothing, TaskLocalRNG}, itr::Int64)
    @ Evolutionary .julia\packages\Evolutionary\65hL6\src\ga.jl:76
 [15] optimize(objfun::EvolutionaryObjective{var"#compute_performance_anonymous#15"{Vector{Vector{Matrix{Float64}}}, Vector{Array{Float64}}, Vector{Array{Float64}}, Int64, Vector{Float64}, Int64}, Float64, Vector{Int64}, Val{:serial}}, constraints::BoxConstraints{Int64}, method::GA{Evolutionary.var"#tournamentN#267"{Evolutionary.var"#tournamentN#266#268"{typeof(argmin), Int64}}, Evolutionary.var"#milxxvr#176"{Evolutionary.var"#milxxvr#172#177"{Float64, Float64, Float64}}, Evolutionary.var"#mutation#214"{Evolutionary.var"#mutation#209#215"{Vector{Int64}, Vector{Int64}, Float64, Float64, Evolutionary.var"#pm_mutation#213"}}}, population::Vector{Vector{Int64}}, options::Evolutionary.Options{Nothing, TaskLocalRNG}, state::Evolutionary.GAState{Float64, Vector{Int64}})
    @ Evolutionary .julia\packages\Evolutionary\65hL6\src\api\optimize.jl:105
 [16] optimize(objfun::EvolutionaryObjective{var"#compute_performance_anonymous#15"{Vector{Vector{Matrix{Float64}}}, Vector{Array{Float64}}, Vector{Array{Float64}}, Int64, Vector{Float64}, Int64}, Float64, Vector{Int64}, Val{:serial}}, constraints::BoxConstraints{Int64}, method::GA{Evolutionary.var"#tournamentN#267"{Evolutionary.var"#tournamentN#266#268"{typeof(argmin), Int64}}, Evolutionary.var"#milxxvr#176"{Evolutionary.var"#milxxvr#172#177"{Float64, Float64, Float64}}, Evolutionary.var"#mutation#214"{Evolutionary.var"#mutation#209#215"{Vector{Int64}, Vector{Int64}, Float64, Float64, Evolutionary.var"#pm_mutation#213"}}}, population::Vector{Vector{Int64}}, options::Evolutionary.Options{Nothing, TaskLocalRNG})
    @ Evolutionary .julia\packages\Evolutionary\65hL6\src\api\optimize.jl:70
 [17] optimize(f::var"#compute_performance_anonymous#15"{Vector{Vector{Matrix{Float64}}}, Vector{Array{Float64}}, Vector{Array{Float64}}, Int64, Vector{Float64}, Int64}, constraints::BoxConstraints{Int64}, method::GA{Evolutionary.var"#tournamentN#267"{Evolutionary.var"#tournamentN#266#268"{typeof(argmin), Int64}}, Evolutionary.var"#milxxvr#176"{Evolutionary.var"#milxxvr#172#177"{Float64, Float64, Float64}}, Evolutionary.var"#mutation#214"{Evolutionary.var"#mutation#209#215"{Vector{Int64}, Vector{Int64}, Float64, Float64, Evolutionary.var"#pm_mutation#213"}}}, population::Vector{Vector{Int64}}, opts::Evolutionary.Options{Nothing, TaskLocalRNG})
    @ Evolutionary .julia\packages\Evolutionary\65hL6\src\api\optimize.jl:55
 [18] optimize
    @ .julia\packages\Evolutionary\65hL6\src\api\optimize.jl:42 [inlined]
 [19] ga_optimize(win_matrix_all_seasons::Vector{Vector{Matrix{Float64}}}, win_outcomes_matrix_seasons::Vector{Array{Float64}}, win_matrix_act_seasons::Vector{Array{Float64}}, nseasons::Int64, nweeks_seasons::Vector{Float64}, nteams::Int64)
    @ Main analysis.jl:163

@wildart (Owner) commented Sep 9, 2022

Did you use MILX with some mutation function other than MIPM? If that is the case, you'll get the type-conversion error. MIPM preserves the type of each element in the individual vector (the integer check you pointed to). Any other mutation will change the element type, and as a result the operation won't be able to assign values to the offspring vector because of the type mismatch.

Also check the offspring vector type; its element type should be Real to allow a mix of floats and integers.
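
Something along these lines (an untested sketch; the sizes are placeholders):

# Individuals with element type Real can hold both Int and Float64 values,
# so a crossover that produces floats does not force an Int64 conversion.
ndim, npop = 5, 50
population = [Real[rand(0:10) for _ in 1:ndim] for _ in 1:npop]   # Vector{Vector{Real}}

# similar on such a population keeps the Real element type, so the offspring
# containers accept mixed values without an InexactError:
offspring = similar(population)
offspring[1] = Real[3, 12.31, 7, 0, 1]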

@howardzzhang (Author) commented Sep 9, 2022

No, I have been using MIPM throughout.

optimizer = GA(;
    populationSize = npop,
    selection = tournament(5),
    crossover = MILX(),
    mutation = MIPM(lower, upper),
    mutationRate = 0.05,
    crossoverRate = 0.8,
    # epsilon = population_size ÷ 5,
)

I see that MIPM preserves integer types correctly

if isa(x, Integer)
but I don't see the same code in MILX. Shouldn't MILX also truncate integers? At lines 337-339
S = βs .* abs.(v1 - v2)
it seems that an integer input can be promoted to a float, since βs is a float. The candidates in my problem should always be integer vectors, so the children should also be integer vectors.

For example, in the paper you cite, it states "In order to ensure that, after crossover and mutation operations have been performed, the integer restrictions are satisfied, the following truncation procedure is applied."
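
To make the promotion concrete (a standalone snippet, not the package code):

v1 = [3, 7]                   # integer parents
v2 = [5, 2]
βs = 0.4567                   # the float factor from the MILX line above
S  = βs .* abs.(v1 - v2)      # Vector{Float64}: [0.9134, 2.2835]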

@wildart (Owner) commented Sep 9, 2022

The error trace points to the recombine! function, not to the MILX crossover code.

@howardzzhang (Author) commented Sep 9, 2022

This is your project and I am not as familiar with the code, but it seems to me that the error appears because a float can't be copied into an int at line 108 of ga.jl, where recombine! assigns the offspring. That, in turn, seems to be because MILX turns ints into floats through the S equation.
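
To illustrate the mechanism I mean (a standalone snippet, not Evolutionary.jl code):

parents = [[1, 2, 3], [4, 5, 6]]       # Vector{Vector{Int64}}, like my candidates
offspring = similar(parents)            # inherits the Vector{Int64} element type

offspring[1] = [1.0, 2.0, 3.0]          # ok: every value is integer-representable
offspring[2] = [12.310155, 5.0, 6.0]    # InexactError: Int64(12.310155)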

@wildart (Owner) commented Sep 9, 2022

Correct, so your offspring array should allow such a change. If the offspring is an Int vector, you'll get the error.

@howardzzhang (Author) commented Sep 9, 2022

Thanks. Exactly, but the paper states that "In order to ensure that, after crossover and mutation operations have been performed, the integer restrictions are satisfied, the following truncation procedure is applied."

The offspring array is defined here, and it automatically inherits the parents' type:

offspring = similar(parents)

Shouldn't MILX automatically truncate (if it supports integer inputs) the elements of the parent vectors that are integers?
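
If it helps, this is the kind of wrapper I have in mind on my side (a rough sketch: I am assuming the crossover closure takes two parents plus an rng keyword and returns two children, which I have not verified, and I use plain rounding where the paper's truncation rule may differ):

using Random

# Hypothetical wrapper: run MILX, then push the integer positions of the
# children back to the parents' integer types.
function truncating_milx()
    inner = MILX()
    function xvr(v1, v2; rng = Random.default_rng())
        c1, c2 = inner(v1, v2; rng = rng)
        t1 = map((p, c) -> p isa Integer ? round(typeof(p), c) : c, v1, c1)
        t2 = map((p, c) -> p isa Integer ? round(typeof(p), c) : c, v2, c2)
        return t1, t2
    end
    return xvr
end

# and then crossover = truncating_milx() instead of crossover = MILX() in GA(...)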

@wildart (Owner) commented Sep 9, 2022

The paper discusses an additional truncation step (in its section 2.4), which is missing here: the MILX crossover has it, but not the MIPM mutation. That may be what's causing the problem.

@howardzzhang (Author) commented Sep 9, 2022

I agree; just so we're on the same page, I think you're doing the truncation in MIPM already here

if isa(x, Integer)

but not in MILX! The opposite of what you just stated.

@wildart (Owner) commented Sep 9, 2022

Yes, you're correct. However, I'm not sure that adding the truncation to MILX would be a good move: the resulting offspring would always have integer values. I need to reread the paper on that point.
