Cleanup module structure #219

Open

bartoszmajsak opened this issue May 4, 2014 · 20 comments

Comments

@bartoszmajsak
Contributor

As we are growing in size, having everything under one big module bag is not really the best idea. It's very unfriendly for IDEs (not only Eclipse, to be honest). Therefore I would like to open a discussion on how we should slice the repo. We already have main modules covering different aspects of the Java EE platform in dedicated sub-folders, so those would be good candidates for me. What are your thoughts?

@arun-gupta
Contributor

AFAIK the repo is structured by technology only. Do you have alternative, cleaner proposals?

@bartoszmajsak
Contributor Author

I was thinking about changing the project structure, so that it won't be one big bag of everything taking ages and tons of disk space to build, but rather a parent pom and submodules that are projects on their own and just refer to it.
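
As a minimal sketch of what that could look like (all coordinates below are hypothetical, not the actual ones), a standalone submodule would reference the shared parent purely by its coordinates, with an empty relativePath so Maven resolves it from the repository instead of the surrounding directory layout:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Parent referenced by coordinates only; the empty relativePath makes
       Maven resolve it from the repository, so this module builds on its own. -->
  <parent>
    <groupId>org.javaee7</groupId>
    <artifactId>samples-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <relativePath/>
  </parent>

  <artifactId>jaxrs-sample</artifactId>
  <packaging>war</packaging>
</project>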

On the other hand, we might think about putting the WildFly container (when executed in managed mode) in a single location instead of unpacking it for each and every module build and putting it into all the **/target folders.

@radcortez
Contributor

I understand your points, but I think that if we split by submodule, it's going to explode into too many projects. Maybe aggregate them by layers? Web (Servlet, JSF, etc.), Server (JPA, EJB, etc.) and so on? Probably not the best solution.

Agreed on the WildFly container single location. Don't know if you can do that, but it's worth investigating. Feel free to create an issue and assign it to me :)

@arun-gupta
Contributor

IIRC a single container installation needs to be implemented in Arquillian. @aslakknutsen?

@aslakknutsen
Member

The single install is just a Maven setting. The problem was I couldn't find any option in Maven that defined the parent root. It was always relative to the target module.

e.g.

mvn clean install <-- could get all to target a single root
mvn clean install -f jaxrs/pom.xml <-- this would create a new install under jaxrs

Unless we make it some 'external file ref', e.g. java.io.tmpdir
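
A rough sketch of that 'external file ref' idea, assuming the maven-dependency-plugin unpacks the container and that arquillian.xml can resolve ${...} against system properties (version numbers and property names below are illustrative):

<!-- pom.xml (parent): unpack WildFly once into a location outside any **/target folder -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-wildfly</id>
      <phase>process-test-classes</phase>
      <goals><goal>unpack</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.wildfly</groupId>
            <artifactId>wildfly-dist</artifactId>
            <version>8.1.0.Final</version>
            <type>zip</type>
          </artifactItem>
        </artifactItems>
        <outputDirectory>${java.io.tmpdir}/arquillian-wildfly</outputDirectory>
        <overWriteIfNewer>false</overWriteIfNewer>
      </configuration>
    </execution>
  </executions>
</plugin>

<!-- arquillian.xml: point the managed adapter at the shared install -->
<container qualifier="wildfly-managed" default="true">
  <configuration>
    <property name="jbossHome">${java.io.tmpdir}/arquillian-wildfly/wildfly-8.1.0.Final</property>
  </configuration>
</container>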

@bartoszmajsak
Contributor Author

I was thinking about "external file ref" actually. This adds minimal setup effort (IDE mainly), but should give us a bit more resource-friendly approach. I think DeltaSpike folks did that @LightGuard?

@bartoszmajsak
Contributor Author

Any thoughts on that? Stuff is growing.... and failing constantly. We don't see any problems with it?

Do we actually know which modules are healthy?

@arjantijms
Contributor

and failing constantly. We don't see any problems with it?

No, I don't think so. It's the intention of tests to fail, since they are pointing out bugs in the servers. For instance I just added a JASPIC test that fails on GlassFish but succeeds on JBoss. The fact that the test fails on GlassFish is not a problem with the test or an indicator of the health of a module, but it points out that GlassFish doesn't implement the JASPIC spec correctly here.

Of course sometimes failing tests ARE an issue, since for some tests JBoss needs to be configured in standalone.xml, and when a new JBoss server is installed or migrated that configuration is not always done again and the tests depending on it fail.

However, that too is not really an issue with the tests themselves, but an issue for the environment configuration.

It's very unfriendly for IDEs (not only for Eclipse to be honest).

In Eclipse Mars you can now have hierarchical projects. See e.g. http://tools.jboss.org/blog/2015-02-23-alpha1-for-mars.html

This makes working with the modules much friendlier. We could additionally go for @radcortez's suggestion of organizing per layer.

@radcortez
Contributor

I think it would be cool if we set up the samples in a way that people just need to check out the sample that they need. I already have a few people asking me for help because they were only interested in one particular sample and needed to check out and import the full project into the IDE.

@arjantijms
Contributor

I already have a few people asking me for help because they were only interested in one particular sample and needed to check out and import the full project into the IDE.

I have somewhat of the same issue. I work with several vendors to help improve their JASPIC implementation based on the test results, and often tell them to just comment out all modules except for jaspic and the common one.

For that particular situation the checkout itself isn't the real problem, but waiting for all tests to have run before it finally starts doing the tests you're interested in is problematic. Maven has some provisions for running specific tests, but they don't quite solve this either.

You'd want to say: build just the jaspic module and run its tests.

@bartoszmajsak
Contributor Author

No, I don't think so. It's the intention of tests to fail, since they are pointing out bugs in the servers. For instance I just added a JASPIC test that fails on GlassFish but succeeds on JBoss. The fact that the test fails on GlassFish is not a problem with the test or an indicator of the health of a module, but it points out that GlassFish doesn't implement the JASPIC spec correctly here.

I do see several problems with such an approach. First of all, with unstable builds we do not really know what is working where exactly. The structure is far from what I would personally consider a fast feedback loop. This can easily lead to sneaking in regressions or simply bad code (I would also strongly advise some informal code reviews for any PR). At the end of the day the build is simply useless.

Another problem is how we should actually treat these "samples" - as a TCK or a library of simple, yet useful examples which can be used as a foundation in a Java EE project. If we consider the former we should think about having test categories as well as several builds running our "TCK" against different servers. If we think about the latter, having the project in such a condition does not advertise Java EE as a solid technology choice to build your app on. Just my personal feeling.

In Eclipse Mars you can now have hierarchical projects. See e.g. http://tools.jboss.org/blog/2015-02-23-alpha1-for-mars.html

How well does it work? Any experience with the "building workspace" issue?

You'd want to say: build just the jaspic module and run its tests.

How about mvn clean verify -pl jaspic-module?
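
For reference, -pl is run from the reactor root and can be combined with -am so that required sibling modules (e.g. shared test utilities) are built too; the module name here is taken from the quote above and may not match the actual directory name:

# build and test only the jaspic module plus the reactor modules it depends on
mvn clean verify -pl jaspic-module -am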

@johnament
Contributor

I think the key thing is the sheer quantity of modules in the project. To me there are too many, and to someone just looking at it, way too many. I feel bad because last year at JavaOne I volunteered to fix the build issues for Arun, but can't due to how it's structured.

I would prefer a module per technology, and tests representing the functionality needed.

@bartoszmajsak
Contributor Author

I feel bad because last year at JavaOne I volunteered to fix the build issues for Arun, but can't due to how it's structured.

This is exactly the problem I see. I don't feel like it is really easy for people to contribute. Honestly, even for me it's simply a counter-productive setup.

@radcortez
Contributor

@bartoszmajsak what do you suggest then?

@bartoszmajsak
Contributor Author

I would love to brainstorm about possibilities to slice this repository and also about smarter builds.

Modules are self-contained and they do not depend on each other. This opens up several possibilities. We can try to run them in parallel (but that might be a bit challenging with Arquillian and managed containers). However, it would be great to achieve such a setup and improve Arquillian (if needed) to provide such a capability. It is very important for any reasonably sized project.

Another thing which comes to my mind is a "smart build". We have all the needed information about which modules have been changed within a given commit / PR. Therefore we can have some sort of hook, either at the CI server level or even in the build tool (however, with mvn this could be painful to implement), to only build/test the affected modules.
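
A rough sketch of such a hook on the CI side, assuming every top-level directory is a reactor module and using Travis' TRAVIS_COMMIT_RANGE variable (a change to the root pom would still need a full build; that case is left out for brevity):

#!/bin/sh
# Derive the top-level modules touched in this commit range and hand only
# those to Maven; -am also builds the reactor modules they depend on.
CHANGED_MODULES=$(git diff --name-only "$TRAVIS_COMMIT_RANGE" \
  | cut -d/ -f1 | sort -u | paste -sd, -)

if [ -n "$CHANGED_MODULES" ]; then
  mvn clean verify -pl "$CHANGED_MODULES" -am
else
  mvn clean verify
fi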

@arjantijms
Contributor

I do see several problems with such an approach. First of all, with unstable builds we do not really know what is working where exactly.

Well, in my opinion the build is not unstable at all. It's an unfortunate fact of life that not all functionality works on every server. That's why there are several profiles for different servers in the first place. If every server perfectly implemented all functionality then we would only need a single server profile.

This project is simply not about a single business product which indeed has to be built with zero tests failing.

In this case, if I'm not mistaken, JBoss is used to run the tests. This is just one server. The current version for instance doesn't support propagation of a principal from the web layer to an EJB bean. If you'd run the same test testing that aspect against WebLogic or GlassFish it would pass.

This doesn't make "the build" unstable. It means JBoss doesn't support principal propagation.

Of course, if a specific test fails on ALL supported servers, it warrants a closer look. A truly faulty test, say one that accidentally does a divide by zero in Java code, will always fail, independent of which server you'd test it on.

I agree that it's currently hard to distinguish between a bug in the server and a bug in the test. It's not entirely trivial to work around this. You could use annotations perhaps to indicate that some tests are "supposed" to fail for certain known servers, but don't forget that the number of failed tests is also an indication of how correct a specific server is. Without that, you'd probably want to create a report based on the number of annotations present for a given server, but who's going to code that one?

I would also strongly advise some informal code reviews for any PR

Well, you can start with reviewing some of the existing code I guess.

There are a couple of areas that can be reviewed:

  • Code style (as indicated by project rules, tabs vs spaces, line width, etc)
  • Test style (but we would need some guidelines first if we want to review)
  • Actual functionality that's tested; are the test assumptions really correct?

I'd love it if you could take a third look at my JASPIC test assumptions. As said, they already got a second look, but a third one would never hurt, especially as some assumptions are quite subtle and are based on intricate spec text.

as a TCK or a library of simple, yet useful examples which can be used as a foundation in a Java EE project.

A good TCK(-like) suite can IMHO actually be both. I often looked at the Mojarra tests (not the official TCK, but the project-specific tests) to get some details on how a specific piece of functionality works. Same for CDI, where I do look at the actual TCK.

If we think about the latter, having the project in such a condition does not advertise Java EE as a solid technology choice to build your app on.

Arguably it's better to know upfront that something is not working than having the user see something fail right before a deadline, or worse, in production. Hopefully it also motivates vendors to improve their product so fewer tests fail for their server, which surely improves Java EE.

One way or the other, when something does not work it simply is what it is. Not telling the user about this will not make it work. And to counter this argument: most servers, even closed ones, have public bug trackers. I don't think we should hide bugs, either via this project or via public trackers, to make Java EE look better than it is.

How well does it work? Any experience with the "building workspace" issue?

Projects indeed nest ;) It's a visual thing; functionality should not change. I tested it with some hobby projects but not yet on the Java EE samples; I intend to try this soon.

How about mvn clean verify -pl jaspic-module?

Have to admit I never tried this, but let's see how it works, thanks! But suppose it does work; then, in combination with nested projects in Eclipse, there are not so many problems with the structure, are there?

@radcortez
Contributor

I agree with @arjantijms about the build not being unstable. After all, one of the purposes of this project was also to make sure that the tests run on every server, and if not, make sure that the test is right and that the server is the problem. So it kinda acts like a TCK.

@bartoszmajsak why don't you do some work in your fork and then we can review it?

@bartoszmajsak
Contributor Author

@radcortez

I agree with @arjantijms about the build not being unstable.

There is a substantial difference between a broken / failing build and an unstable one (at least if we stick to the Jenkins definition). When a build fails or ends due to some error (like we are facing now with Travis) we lose the chance to run other tests (so we don't know if the changes we made are actually working unless we run them locally, which pretty much contradicts the idea of CI to some extent). In our current situation we have all builds failing, and my fear was that we were slowly heading towards a "broken windows theory" situation, where we just leave it like that without really caring about it.

I totally understand that this repository is not a complete product which should have all tests passing. If the TCK approach is what we have in mind, it's obvious that not all the servers (even though I assume most of them are Java EE 7 certified to a great extent, or are the reference implementation) will always have green builds. We should have at least some way of knowing immediately why the build is not good and on which server (maybe a Travis hook can post such information as a PR comment saying it's good on GF, TomEE and JBoss but not on WLS or sth). Also, what are the criteria in terms of build health to accept a PR?
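
As a rough sketch, a Travis build matrix could at least make the per-server status visible as separate jobs (the profile names below are hypothetical, not necessarily the ones defined in the poms):

# .travis.yml
language: java
jdk: oraclejdk8
env:
  - SERVER_PROFILE=wildfly-managed-arquillian
  - SERVER_PROFILE=glassfish-embedded-arquillian
  - SERVER_PROFILE=tomee-embedded-arquillian
script: mvn clean verify -P"$SERVER_PROFILE"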

@bartoszmajsak why don't you do some work in your fork and then we can review it?

Sure I can do "some work", otherwise I wouldn't be here. But it would be good to first know exactly what work is desired and what we all agree on. After the "merge vs rebase" story I am not really keen to spend the only important resource I have (time) on something which will not be used afterwards. Therefore I wanted to hear others' opinions about the structure and whether we really need changes in this area.

As for the smarter builds - it is definitely something I want to work on in general, but it's not really stuff which is "Pull Requestable". I would be thrilled to hear others' experiences and ideas so I can get some inspiration for this work.

@arjantijms

Well, you can start with reviewing some of the existing code I guess.

Sure, I will start reviewing our code base soon.

Code style (as indicated by project rules, tabs vs spaces, line width, etc)

This we can automate if we agree on the conventions. We can also use EditorConfig or proprietary IDE settings to share with all contributors.
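
For example, a minimal .editorconfig in the repository root (the concrete values are placeholders until we agree on the conventions):

root = true

[*.java]
charset = utf-8
indent_style = space
indent_size = 4
max_line_length = 120
trim_trailing_whitespace = true
insert_final_newline = true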

Test style (but we would need some guidelines first if we want to review)

I'm totally up for it. Shall we discuss it in a separate thread?

Actual functionality that's tested; are the test assumptions really correct?

This should also be part of every PR review before merging to master.

@aheusingfeld
Contributor

Maybe not everyone remembers, but we still have this awesome piece http://javaee.support/ created by @aslakknutsen. IMHO this not only gives some indication of how we could separate the project but also provides an overview of failing tests. Unfortunately I don't remember exactly which build this is based on. @aslakknutsen could you shed some light?

@aslakknutsen
Member

I agree it's important to be able to separate 'New failure / Something's wrong' from 'Expected to fail in this version'. As pointed out earlier, some of my initial thoughts behind adding Arquillian tests to all the samples were to show 'what works where'.

A fairly simple solution could be to expose some variables in the Maven profiles, add a little helper class, and use JUnit Assume to weed out real failures from expected failures. E.g.:

<profile>
  <id>arq_wildfly_9.0.0.CR1</id>
  ...
  <properties>
    <server_name>wildfly</server_name>
    <server_version>9.0.0.CR1</server_version>
  </properties>
</profile>

@Test
public void testX() throws Exception {
  Assume.assumeThat("Ref https://issues.jboss.org/browse/WFLY-4690", Samples.notRunning(WILDFLY).version("9.0.0.CR1"));

  // do test stuff
}

That should show it at least as 'success but ignored' in the test report. As @aheusingfeld points out, we can add some extra oil to javaee.support to display the status differently based on the new Assume, e.g. list all 'working samples but failing for a given server' as a 'what to expect' type of listing.

@aheusingfeld Not sure when I last updated it, but it needs some updating to read the new CloudBees/Travis test results.
