
[WIP] Integration of Opencraft into Continuum #13

Open · wants to merge 17 commits into main

Conversation

@linuswagner (Contributor) commented Jan 29, 2023

Adds Opencraft to Continuum.
Currently, only ticks are collected as a metric; other metrics are possible.

Among the more surprising parts are:

  • a kubectl method to wait for pods until they are ready
  • busy-waiting until the server component is ready
  • the server is killed after the last worker has stopped, and then prints its log to stdout
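For reference, the kubectl wait pattern can be wrapped from Python roughly like this (a sketch; the namespace and timeout values are my assumptions, not taken from this PR):

```python
import subprocess

def wait_for_pods_cmd(namespace="default", timeout_s=300):
    # 'kubectl wait' blocks until every pod reports the Ready condition,
    # which replaces hand-rolled polling of 'kubectl get pods'.
    return [
        "kubectl", "wait", "--for=condition=Ready",
        "pods", "--all", "-n", namespace, f"--timeout={timeout_s}s",
    ]

def all_pods_ready(namespace="default"):
    # Returns True once all pods are Ready, False on timeout.
    return subprocess.run(wait_for_pods_cmd(namespace)).returncode == 0
```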

Limitations:

  • Bots are currently spawned at the same position
  • The server only compiles to amd64, because the Alpine base image is only available for that architecture

@linuswagner (Contributor, Author) commented Jul 17, 2023

Hello, this is future-me. It only took 50 minutes to come up with somewhat extensive documentation of what past-me did, so I want to share it with the rest of you:

What did I do?

  • I made Opencraft deploy on multiple endpoints, in a multi-cloud xor multi-edge configuration
  • each endpoint gets one bot that moves X steps (configurable) randomly within a box of size 20 (starting at 0,0, I think)
  • we get the average ticks and standard deviation from the server for evaluation
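The bot movement described above could look roughly like this (a sketch; the function name, unit-step moves, and clamping-to-the-box behavior are my assumptions):

```python
import random

def random_walk(steps, box=20, start=(0, 0)):
    # Each bot starts at (0, 0) and takes `steps` random unit moves,
    # clamped so it stays inside a box of side length `box`.
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), box)
        y = min(max(y + dy, 0), box)
        path.append((x, y))
    return path
```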

How did I do that?

definition of images

  • publisher (bot on endpoints): basically a copy of the Opencraft simple bot, limited to a certain number of steps
  • subscriber (server on edge/cloud): uses the original configuration to launch and expose the server on port 3000%ITEM

getting it to run

  • start.py
    • the server gets launched as part of the normal Continuum process
      • we just wait until all pods are ready. We can't determine whether Opencraft on them is ready as well, so we need to busy-wait
      • then we can launch the bots using a bunch of parameters
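The busy-wait could be sketched like this. READY_MARKER is hypothetical (I don't remember the exact line Opencraft prints once it is up), and the log lookup is a plain kubectl logs call:

```python
import subprocess
import time

READY_MARKER = "Done"  # hypothetical: whatever line Opencraft prints once it is up

def is_ready(log_text):
    # Pod readiness only means the container runs; the server log tells us
    # whether Opencraft itself has finished starting.
    return READY_MARKER in log_text

def pod_log(pod):
    return subprocess.run(["kubectl", "logs", pod],
                          capture_output=True, text=True).stdout

def busy_wait(pods, interval=5.0):
    while not all(is_ready(pod_log(p)) for p in pods):
        time.sleep(interval)
```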

getting it to stop

  • simply pkill the server -> this will print out the server logs (I think; it blocks until the endpoints are done, which is controlled by the number of steps they make)
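Wrapped from Python, the stop step might look like this (a sketch; the ssh target, user, and process pattern are my assumptions):

```python
import subprocess

def stop_server_cmd(vm_host, user="ubuntu", pattern="opencraft"):
    # pkill -f matches against the full command line; per the PR, the server
    # then prints its accumulated log to stdout, which we collect afterwards.
    return ["ssh", f"{user}@{vm_host}", "pkill", "-f", pattern]

def stop_server(vm_host):
    # Returns the ssh exit code (0 if a matching process was killed).
    return subprocess.run(stop_server_cmd(vm_host)).returncode
```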

getting it to evaluate

  • output.py
    • for each worker we fill the worker_set, which we then throw together into worker_metrics (sorted by id)
    • the logs contain all kinds of things, but the server log has a specific format that we just skip forward to :)
    • we take only the ticks from the metrics (they should be space-separated)
    • we put them into a data frame and transform it
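The parsing step can be sketched as follows; the marker line and the column layout are assumptions (the real output.py skips to whatever header the server log actually uses, and builds a pandas DataFrame rather than plain lists):

```python
import statistics

TICK_MARKER = "=== tick report ==="  # hypothetical header; the real log format differs

def parse_ticks(log_text):
    # Skip forward to the metrics section, then read space-separated rows,
    # keeping only the tick value (assumed to be the first field).
    lines = log_text.splitlines()
    start = lines.index(TICK_MARKER) + 1
    return [float(line.split()[0]) for line in lines[start:] if line.strip()]

def summarize(ticks):
    # Average tick value and its standard deviation, per server.
    return statistics.mean(ticks), statistics.pstdev(ticks)
```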

Limitations

  • only edge xor cloud -> no hybrids supported
  • bots always start at 0,0 at the moment
  • bots are equally distributed over the servers
  • only ticks as a metric, but easily extensible (TM)
  • no arm64 support, because of the use of the Alpine image (which is amd64-only)
  • pushes to my Docker Hub -> stupid :(
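The equal distribution of bots over servers mentioned above amounts to a round-robin assignment, roughly (a sketch; the helper name is mine):

```python
def assign_bots(bot_ids, servers):
    # Bot i goes to server i mod len(servers), which spreads the bots
    # as evenly as possible over the available servers.
    return {b: servers[i % len(servers)] for i, b in enumerate(bot_ids)}
```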

Side-note: Connect client to Opencraft

You are also able to use port-forwarding to your local machine and visually inspect what's happening on the Opencraft server. I'm not sure anymore whether the port for the server is exposed on the physical server on which Continuum runs, or only on the VM that hosts the server.
So it could be that you need to do a double forward (forward a port from your machine to the physical server; from there, ssh into the server VM, also with port forwarding).
