
Performance Benchmarks #246

Open
bd82 opened this issue Jul 20, 2020 · 1 comment

bd82 commented Jul 20, 2020

In most flows it is the perceived human performance that counts, so operations that take under 100 ms feel virtually instantaneous.

The question is whether some flows may become slow enough for a human to notice, e.g.:

  • Certain validations in a 10,000-line XML view.
  • Auto-complete suggestions in a very large XML view.
  • The time to download and transform the UI5 SDK -> Semantic Model.
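The flows above could be timed with a small harness. A minimal sketch, assuming a placeholder `validateDocument` standing in for the real validation entry point (not the project's actual API) and a synthetic 10,000-line view:

```typescript
// Placeholder validation: the real language server's validation entry point
// would be plugged in here. This stand-in flags every <Button> without an id.
function validateDocument(xml: string): string[] {
  const issues: string[] = [];
  xml.split("\n").forEach((line, i) => {
    if (line.includes("<Button") && !line.includes("id=")) {
      issues.push(`line ${i + 1}: Button is missing an id`);
    }
  });
  return issues;
}

// Generate a synthetic 10,000-line XML view to approximate the worst case.
const bigView = Array.from({ length: 10_000 }, (_, i) =>
  i % 2 === 0 ? `<Button text="btn${i}"/>` : `<Button id="b${i}"/>`
).join("\n");

// Time one validation pass with a high-resolution clock.
const start = process.hrtime.bigint();
const issues = validateDocument(bigView);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`${issues.length} issues found in ${elapsedMs.toFixed(1)} ms`);
if (elapsedMs >= 100) {
  // The 100 ms budget is the perception threshold mentioned above.
  console.warn("validation exceeded the 100 ms perception threshold");
}
```

In practice each flow would be run several times and the median taken, since a single measurement is noisy.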
@bd82 bd82 added the dev-ops label Jul 20, 2020

bd82 commented Jul 20, 2020

There should be some automation in the CI, and it should measure against the previously released version to detect regressions.

@bd82 bd82 added Tech Debt and removed dev-ops labels Jul 20, 2020