
Additional methods for keeping score and informing users #313

Open
jspjutNV opened this issue Aug 2, 2021 · 1 comment
Labels: enhancement (New feature or request)

jspjutNV (Contributor) commented Aug 2, 2021

Right now, score is kept primarily within a trial, and only a few things (like remaining time) persist across a session. It would be useful to keep an experiment-designer-defined score running both across trials within a session and across sessions within an experiment (a rough sketch of this accumulation idea follows below).

This defined score would then be reportable in user feedback messages as well as to the database. If a shared leaderboard is ever implemented, this score would also be reported there, though that functionality is purely theoretical at this point.
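One way the running score could be structured is sketched below. This is purely illustrative (none of these names exist in the codebase today): per-trial scores roll up into a session total, and session totals roll up into an experiment total.

```cpp
// Hypothetical sketch of a running score keeper. All names here are
// invented for illustration; they are not part of the current codebase.
#include <cstdio>

struct ScoreKeeper {
    double trialScore = 0.0;      // reset at the start of each trial
    double sessionScore = 0.0;    // running total across trials in a session
    double experimentScore = 0.0; // running total across sessions

    void startTrial() { trialScore = 0.0; }
    void endTrial() { sessionScore += trialScore; }
    void endSession() {
        experimentScore += sessionScore;
        sessionScore = 0.0;
    }
};

int main() {
    ScoreKeeper keeper;
    // Two sessions of two trials each, with made-up per-trial scores.
    double fakeTrialScores[2][2] = {{80.0, 95.0}, {60.0, 100.0}};
    for (auto& session : fakeTrialScores) {
        for (double s : session) {
            keeper.startTrial();
            keeper.trialScore = s; // in practice, computed from trial metrics
            keeper.endTrial();
        }
        std::printf("Session score: %.1f\n", keeper.sessionScore);
        keeper.endSession();
    }
    std::printf("Experiment score: %.1f\n", keeper.experimentScore);
    return 0;
}
```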

Open topics for discussion before beginning implementation:

  • How to express what values contribute to "score"
    • Which values can contribute toward score (hits, misses, time taken, time remaining, etc.)
  • How the score should be reported to the user
  • Whether there are potential issues with logging scores to the database: does the score expression need to be logged? Do the source values consumed to create the score get logged?
jspjutNV added the enhancement (New feature or request) label Aug 2, 2021
jspjutNV added this to the v21.09.01 milestone Aug 2, 2021
jspjutNV added this to To Do in Issue Tracking via automation Aug 2, 2021
bboudaoud-nv (Collaborator) commented Aug 2, 2021

This is an interesting and useful feature idea, but the bounds of what could be done here are quite broad. Options I would consider include:

  • User-specified, string-based "score expressions" that might include math over one or more "base metrics" we provide. These could become arbitrarily complex and would require a nearly complete math library to support; for example, `score = "max(1/taskTimeS, 100) * shotsHit / (shotsHit + shotsMissed)"`. Alternatively, this could be simplified to only allow particular, predefined "base metrics" for score (see the evaluation sketch after this list).
  • Working from @jspjutNV's initial list, I'd say a good set of base metrics (for trials or sessions) would include:
    • hits
    • misses
    • targets destroyed
    • damage done
    • total shots
    • accuracy (as ratio or percentage)
    • task time
    • time remaining
    • time spent firing (as a total or, more likely, a ratio)
    • trial successes (session only)
    • trial failures (session only)
  • I believe score should be reported to the user in one of two core ways:
    • Via the in-game banner (where visibility of individual fields should probably be controllable, e.g. showing score but not time)
    • Through formatted feedback messages
  • I agree that with more configurability, logging score to the database isn't a bad idea (you could support logging the score expression through the `sessParamsToLog` array); however, you should certainly be able to reproduce any score metric from the results written to the database (this might be a good test of what our db is missing "easy access" to). A minimal logging sketch follows.
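To make the simplified, predefined-base-metrics option concrete, here is a minimal sketch that evaluates the example expression above against a hypothetical struct of per-trial metrics. The struct fields and function names are invented; a real implementation would parse the user-supplied expression string (e.g., with an off-the-shelf expression library such as ExprTk) rather than hard-coding it.

```cpp
// Sketch only: evaluates the example expression from this thread,
//   score = max(1/taskTimeS, 100) * shotsHit / (shotsHit + shotsMissed)
// against a hypothetical struct of per-trial base metrics.
#include <algorithm>
#include <cstdio>

struct TrialMetrics {
    int hits = 0;
    int misses = 0;
    int targetsDestroyed = 0;
    double damageDone = 0.0;
    int totalShots = 0;
    double taskTimeS = 0.0;      // elapsed task time in seconds
    double timeRemainingS = 0.0; // remaining time in seconds
};

double evalExampleScore(const TrialMetrics& m) {
    const double shotsHit = m.hits;
    const double shotsMissed = m.misses;
    // Guard against division by zero before evaluating the expression.
    if (shotsHit + shotsMissed == 0.0 || m.taskTimeS <= 0.0) return 0.0;
    return std::max(1.0 / m.taskTimeS, 100.0) * shotsHit / (shotsHit + shotsMissed);
}

int main() {
    TrialMetrics m;
    m.hits = 8; m.misses = 2; m.totalShots = 10; m.taskTimeS = 4.2;
    const double score = evalExampleScore(m);
    // A formatted feedback message of the sort described above (format invented).
    std::printf("Trial complete! Score: %.1f (%d/%d shots hit)\n",
                score, m.hits, m.hits + m.misses);
    return 0;
}
```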
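And a minimal sketch of the logging side, assuming a SQLite results database: store the computed score alongside the expression that produced it, so any score can be re-derived from the logged inputs later. The table and column names are invented for illustration; the actual schema may differ.

```cpp
// Hypothetical sketch: log a computed score plus the expression that
// produced it to a SQLite database. Schema is invented for illustration.
#include <sqlite3.h>

bool logScore(sqlite3* db, const char* sessionId,
              const char* scoreExpression, double score) {
    const char* create =
        "CREATE TABLE IF NOT EXISTS Scores ("
        "  sessionId TEXT, scoreExpression TEXT, score REAL);";
    if (sqlite3_exec(db, create, nullptr, nullptr, nullptr) != SQLITE_OK) return false;

    sqlite3_stmt* stmt = nullptr;
    const char* insert = "INSERT INTO Scores VALUES (?, ?, ?);";
    if (sqlite3_prepare_v2(db, insert, -1, &stmt, nullptr) != SQLITE_OK) return false;
    sqlite3_bind_text(stmt, 1, sessionId, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, scoreExpression, -1, SQLITE_TRANSIENT);
    sqlite3_bind_double(stmt, 3, score);
    const bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
    sqlite3_finalize(stmt);
    return ok;
}

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("results.db", &db) != SQLITE_OK) return 1;
    logScore(db, "session-1",
             "max(1/taskTimeS, 100) * shotsHit / (shotsHit + shotsMissed)", 76.2);
    sqlite3_close(db);
    return 0;
}
```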
