run tests in parallel #509
Conversation
Force-pushed from 217cd7e to bd1f839 (compare)
test/utils/container.go
Outdated
}

//nolint:staticcheck
n, err := testcontainers.GenericNetwork(ctx, testcontainers.GenericNetworkRequest{
It says GenericNetworkRequest is going to be removed in the future. Why don't we use network.New, as the deprecation message suggests?
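For reference, the replacement the deprecation message points at could look roughly like this. This is a sketch, not verified against this repo: it assumes testcontainers-go's `network` package (v0.26+), and `newTestNetwork` is a hypothetical helper name, not something from `container.go`. It needs a running Docker daemon to execute.

```go
// Sketch: replacing the deprecated GenericNetwork/GenericNetworkRequest
// call with network.New from testcontainers-go v0.26+ (assumed API).
package testutil

import (
	"context"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/network"
)

// newTestNetwork (hypothetical helper) creates a throwaway Docker network
// with a generated name instead of a caller-supplied one.
func newTestNetwork(ctx context.Context) (*testcontainers.DockerNetwork, error) {
	nw, err := network.New(ctx)
	if err != nil {
		return nil, err
	}
	// nw.Name holds the random name Docker assigned; containers that need
	// to join this network can read it back from here after creation.
	return nw, nil
}
```

The trade-off discussed below is exactly this: `network.New` picks a random name, so code that previously passed a fixed name has to read it back from the returned value instead.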
The docs (for the deprecation) don't seem right; this whole function is basically network.New, except that it supports specifying a network name.
Oh, I see your comment now. That's weird. I guess we can go this way, but it does seem like a random network name would be fine, because we can get it after creation with localNetwork.Name. NBD. If these methods get removed in the future we can rewrite them.
Ah right, I forgot that I'm now creating a new network for each test. I did this originally because I needed to re-use the same network. I'll remove this function.
Force-pushed from e2d8fc6 to 2b34864 (compare)
test/acceptance/helper.go
Outdated
	r.t.Logf("teardown docker compose")
	dc.Down(ctx, compose.RemoveOrphans(true), compose.RemoveImagesLocal)
})

r.t.Cleanup(func() {
Will you condition this on `if !r.cfg.NoCleanup {`? Getting this running directly with the Docker daemon (not Desktop) was pretty frustrating, and I had to use the `-messy` flag to see what was going on.
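The suggested guard is simple to sketch. This is an illustration, not the repo's actual code: `runnerCfg` and `teardown` are hypothetical stand-ins for the runner config and cleanup closure in `helper.go`.

```go
package main

import "fmt"

// runnerCfg is a hypothetical mirror of the runner config in helper.go;
// NoCleanup would be wired to a flag such as -messy / no-cleanup.
type runnerCfg struct {
	NoCleanup bool
}

// teardown runs the docker-compose shutdown unless cleanup is disabled,
// so a failing stack can be left up and inspected after the test exits.
func teardown(cfg runnerCfg, down func()) {
	if cfg.NoCleanup {
		fmt.Println("NoCleanup set; leaving containers running for inspection")
		return
	}
	down()
}

func main() {
	called := false
	teardown(runnerCfg{NoCleanup: true}, func() { called = true })
	fmt.Println("down ran with NoCleanup=true:", called)

	teardown(runnerCfg{NoCleanup: false}, func() { called = true })
	fmt.Println("down ran with NoCleanup=false:", called)
}
```

Inside the `r.t.Cleanup(func() { ... })` registration above, the equivalent is just an early `return` when the flag is set.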
Also, can we have nodecfg write a log file in addition to stdout, so that if the container gets nuked we can at least see the log file? e.g.
diff --git a/cmd/kwil-admin/nodecfg/generate.go b/cmd/kwil-admin/nodecfg/generate.go
index bec1d4f5..09f0e71a 100644
--- a/cmd/kwil-admin/nodecfg/generate.go
+++ b/cmd/kwil-admin/nodecfg/generate.go
@@ -167,6 +167,7 @@ func GenerateNodeFiles(outputDir string, originalCfg *config.KwildConfig, silenc
return nil, fmt.Errorf("failed to get absolute path for output directory: %w", err)
}
cfg.RootDir = rootDir
+ cfg.Logging.OutputPaths = append(cfg.Logging.OutputPaths, "kwild.log")
cometbftRoot := filepath.Join(outputDir, abciDir)
I don't think writing a log file would be a problem.
cool, I'll add it.
I frequently get failures with
@charithabandi I think 6772e5d is breaking things somehow. Even on
Did you rebuild docker? I don't get this on my machine.
My machine was in a funny state, but the error has to do with a pg service health check when a container is running these tests. I'm trying the following:
This makes it connect to the "kwild" database using the "kwild" role, both of which are created by the init script. It can still get a false start, but it's harder, since the postgres process restarts immediately after creating the kwild database, so there's a much briefer window. For whatever reason, I don't see this false-start behavior with Docker Desktop, only when using the Docker daemon/engine directly.
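A healthcheck along these lines would behave as described. This is a hedged sketch: the service name, image tag, and timings are assumptions, not taken from this repo's compose files; only the "kwild" database and role come from the discussion above.

```yaml
services:
  pg:
    image: postgres:15
    healthcheck:
      # Unlike pg_isready (which reports "accepting connections" even for a
      # wrong database name), psql actually has to log in and run a query,
      # so this probe fails until the init script has created both the
      # "kwild" role and the "kwild" database, narrowing the false-start
      # window around postgres's post-initdb restart.
      test: ["CMD-SHELL", "psql -U kwild -d kwild -c 'SELECT 1' || exit 1"]
      interval: 2s
      timeout: 5s
      retries: 15
```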
LGTM. It's really hitting some edge cases with pg service initialization on my machine, but that's not a strong reason to hold this up. Please merge when you are ready so that @charithabandi can update her branch, since there's some overlap.
Moving on, but I wanted to note that depends_on support in testcontainers is pending: testcontainers/testcontainers-go#2035
Support running acceptance tests and integration tests in parallel by mapping test containers to random host ports, with the exception of `TestKGW`.
This PR supports running tests in parallel, and CI uses this mode by default. It is not enabled in `task test:xx`, since that gives us better logging. `task test:it:nb -- -parallel-mode` finished in 90s on my machine.

Things changed:
- `Teardown` helper function, properly utilizing `t.Cleanup`
- `dev` docker-compose file for acceptance tests (for `task dev:up`)
- `dev` docker-compose file for integration tests (for running locally using docker-compose directly)
- `-parallel-mode` flag to run tests in parallel mode
- `.env` file manipulation

Notice:
- `domain`. It takes around 100s to finish on my machine
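The mechanism behind "map test containers to random host port" is the same one the OS provides for any TCP listener: binding to port 0 lets the kernel pick an unused port, so parallel test stacks never fight over a fixed mapping. A minimal illustration of that mechanism (plain `net`, no Docker; `freePort` is a hypothetical helper, not code from this PR):

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the OS for an unused TCP port. Docker does the same thing
// when a container port is published to host port 0: each parallel test
// stack gets a distinct host port, so concurrent runs do not collide.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	p, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("got a non-zero ephemeral port:", p > 0)
}
```

The port is released when the listener closes, so there is a small race before the container binds it; in practice testcontainers resolves the mapping after the container starts, which avoids that window.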