Pre-compute loss function in finite_horizon #160

Open · LarrySnyder opened this issue May 10, 2024 · 0 comments
@LarrySnyder (Owner)

I tried working on this, but it didn't seem to speed things up much. Why?

I added this code right before the main loop:

```python
# Pre-calculate standard normal loss function values.
min_z, max_z, step_z = -4, 4, 0.01
loss_table = lf.standard_normal_loss_dict(start=min_z, stop=max_z, step=step_z)
comp_table = lf.standard_normal_loss_dict(start=min_z, stop=max_z, step=step_z, complementary=True)
```
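(As a diagnostic, a micro-benchmark along these lines might show whether a single table lookup is actually faster than a single direct loss-function evaluation. The import path for `nearest_dict_value` below is a guess, not confirmed; adjust it to wherever that helper actually lives:

```python
# Hypothetical diagnostic -- not part of the change above. Import path for
# nearest_dict_value is an assumption; adjust to its actual location.
import timeit

setup = """
import stockpyl.loss_functions as lf
from stockpyl.helpers import nearest_dict_value  # path assumed
loss_table = lf.standard_normal_loss_dict(start=-4, stop=4, step=0.01)
"""
# Time one table lookup vs. one direct loss-function evaluation.
t_lookup = timeit.timeit("nearest_dict_value(1.23, loss_table)", setup=setup, number=100_000)
t_direct = timeit.timeit("lf.normal_loss(60, 50, 8)", setup=setup, number=100_000)
print(f"lookup: {t_lookup:.3f}s   direct: {t_direct:.3f}s")
```

If the two timings are comparable, the lookup was never going to help much.)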

And this code where n(y) and \bar{n}(y) are calculated:

```python
if [...some option is set...]:
    z = (y - demand_mean[t]) / demand_sd[t]
    if z < min_z:
        # Below the table's range: loss ~ mean - y, complementary loss ~ 0.
        n = demand_mean[t] - y
        n_bar = 0
    elif z > max_z:
        # Above the table's range: loss ~ 0, complementary loss ~ y - mean.
        n = 0
        n_bar = y - demand_mean[t]
    else:
        # Look up the nearest tabulated z and rescale to the actual demand distribution.
        n = nearest_dict_value(z, loss_table) * demand_sd[t]
        n_bar = nearest_dict_value(z, comp_table) * demand_sd[t]
else:
    # Fall back to computing the loss function directly.
    n, n_bar = lf.normal_loss(y, demand_mean[t], demand_sd[t])
```

Maybe `nearest_dict_value` is too slow?
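One thing worth trying: since the z grid is uniform, the nearest table entry can be found by index arithmetic on a NumPy array instead of a dict search. A minimal self-contained sketch (the helper name `nearest_loss` and the SciPy-based table construction are illustrative, not part of stockpyl):

```python
# Sketch: O(1) nearest-grid-point lookup on a uniform z grid, replacing the
# dict search. The tables are rebuilt here with SciPy for self-containment;
# L(z) = phi(z) - z * (1 - Phi(z)), and the complementary loss is L(z) + z.
import numpy as np
from scipy.stats import norm

min_z, max_z, step_z = -4, 4, 0.01
z_grid = np.arange(min_z, max_z + step_z / 2, step_z)
loss_vals = norm.pdf(z_grid) - z_grid * norm.sf(z_grid)   # standard normal loss
comp_vals = loss_vals + z_grid                            # complementary loss

def nearest_loss(z):
    """Map z to the nearest grid index directly -- no search, no hashing."""
    i = int(round((z - min_z) / step_z))
    i = min(max(i, 0), len(z_grid) - 1)   # clamp to the table's range
    return loss_vals[i], comp_vals[i]
```

In the main loop this would become `n, n_bar = nearest_loss(z)`, each scaled by `demand_sd[t]`. If lookups are still the bottleneck after that, vectorizing the whole loop over y with NumPy fancy indexing would amortize the per-call Python overhead.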

@LarrySnyder LarrySnyder added the enhancement New feature or request label May 10, 2024
@LarrySnyder LarrySnyder self-assigned this May 10, 2024