On 17.08.24 22:23, Svata Dedic wrote:
Hi, all.

Hi Svata,


this ended up being a long one, sorry in advance. I honestly didn't expect that this would be so controversial, otherwise I would have left a few things out of the original mail.



I have a feeling that the thread combines (IMHO) unrelated things together; unrelated things may lead to different conclusions.

Maybe I was not able to convey this properly, but what I tried to do is pick examples from completely different areas of the IDE and connect them to the question of whether problem X should be solved by the IDE or by the build tool.

Should the IDE build the project or the build tool?

Should the IDE run a single test case from a test class itself, or should it tell the build tool to run that specific test case?

Should the IDE download maven dependencies or should maven download maven dependencies?

...

I also tried to point out what the consequences are if things are implemented in the IDE in parallel to the build tool. Let's try again ;)



If replying, please consider splitting your messages to cover just a single issue (+ change the subject). I specifically ask for the favour to discuss "priming" separately ;) since it touches my ongoing efforts in the project support area (yes, I plan to do something with maven priming). I will do that for the subsequent replies.

Issue #1: Failing when the CLI fails.
----------------------------------------

On 14. 08. 24 23:26, Michael Bien wrote:

  - if the build fails in CLI, it should also fail in your IDE

This is not exactly correct. We want the _build_ in the IDE to fail, and definitely the _run_ to fail, instead of some magic ...

If the maven build fails in my terminal, it should also fail if I trigger the same build from NB.

If the project is so misconfigured that the maven exec plugin can't run a file (on windows), I don't expect NB to run that file using the same project configuration (!) (on windows). NB may be able to run that file by dragging it into a different project - but this doesn't matter here, since the user asked NB to run the file with a _very specific_ project configuration which happens not to be configured for the OS in question.

Yes, the IDE or maven itself should help to resolve the issue (fully agreed), so that the project works fine again and can be run. I disagree that "works in my IDE" is an intrinsic value of IDEs; to me it's a bug when this situation exists, since it indicates that the _result_ might differ from the CLI result too. It leads to more problems down the road.

If something works in IDE A, it should work in other IDEs too, since the magic is in the build tool. Devs can typically choose IDEs freely these days without having to worry about this.



... but I think (!) we would REALLY like to display at least some project structure or information EVEN THOUGH the build is terribly misconfigured, to the point that a mere "gradle tasks" or "mvn dependency:tree" would fail on a bad pom/build.gradle.

This is jumping ahead to project loading/priming now, I think. Yes, I agree NB should be able to open projects even if plugins/dependencies are not downloaded yet. The user should be able to open the pom, browse sources, and use common project actions at the very least.

NB can also open any folder outside of project support. So even if the pom isn't parseable at all, there could be a last-resort fallback which opens the pom and the folder completely outside of the project support system. Once the pom is fixed: right click -> open project (the action already exists). (You can drag folders into NB; there is already support for project-less folders too.)



This is where the tool does 'extra magic' to recover from errors, to get at least some information, and possibly process other parts of the project, unlike the CLI, which tends to fail fast on the first problem.

I mean yes, but this really depends on the situation. The xml editor will tell you if the pom isn't valid xml; once it's valid, maybe it already opens as a project. Once it's open, the project support will be able to display in-editor hints for the remaining problems, etc. So yes, those kinds of recovery paths are of great value - IDEs are like a toolbox. But it's a path, it has a beginning and an end, and some steps might need to be solved in a particular order.

I don't see the value of being able to build a maven project with a half-parsed pom.xml, for example (it's a configuration file, not a list of instructions). I don't even think anyone would expect IDEs to be able to do that. If there is a red marker in a project config file, it should become priority 1.

Let's do things in the right order: step one: fix the project :)

(Then we don't have to worry about running broken projects)



Issue #2: Forcing the user to go "our way"
--------------------------------------------

Windows has process arg-length limitations (in addition to the better-known filepath limitations) [...] So instead of adding magic [...] a much better approach would be to detect the situation and tell the user that the build is broken. Since often all the user would have to do is add '<longClasspath>true</longClasspath>' to the right plugin. (#7582, ...)
[...]
Older versions of the maven wrapper for example had a bug which broke execution when certain file paths were used. Instead of adding a workaround for this in NB (which would have to be maintained forever and could itself cause problems), all NB would have to do is notify the user and then run 'maven wrapper:wrapper' once - that's it!

While I agree this is the _best solution_ and the IDE should:
- warn the user
- offer the user a fix
the user may not be in a position to make such a change. He may not own the project, or some process may prevent him from changing it. Yes, it may be done locally - and that is the right choice if one just wants to play with an ancient project on a local machine, but annoying if one has to work with a codebase while being unable to make 'the right thing' happen.

This is the reality of software development though. Code ages because the environment moves forward.

I have some old ant projects I wrote back with NB ~4.5; those can't be opened with NB 22 without first upgrading them to a newer project revision.

"Upgrade Working Copy" dialog will show up, if I press "no" it won't open. If i press "yes" it will update some ant build files and it opens - this is the right way of handling things IMO and it is already in NB since ages.


The "outdated maven wrapper" issue is almost the exact analog to the outdated ant project issue (just 20y later). Adding workarounds to NB which allows edge cases with certain paths to run on outdated maven wrappers and having to maintain those workarounds forever, makes no sense to me. *Esp* when the alternative is to simply update a single wrapper file by advising the user to run wrapper:wrapper or risk not being able to run maven.

In #7558 the workaround itself could cause problems, since it is applied to all (!) wrapper versions. The IDE now invokes the wrapper differently from the terminal equivalent - even when the wrapper is already up to date!

-> less magic is the better option here again.



So if the user does not follow the good advice, the IDE should not _fail_ - and maybe he should even be allowed to suppress the warning.

It depends on the concrete situation. Both examples (wrapper and exec classpath) have very simple solutions which can be applied directly (update the wrapper, switch to the bundled maven, or accept the risk of failure / update the plugin config in the pom or set -DlongClasspath=true). No workarounds required within NB.

In other situations, fallbacks might not be possible.



The tool should not force a specific way of operation to the user; our tools should help the user even in an imperfect world.

Right, IDEs shouldn't force anything upon the user (IDEs are fancy text editors); they should help the user make decisions and resolve an issue if something is wrong. One of NB's strengths is that there never was a "netbeans project": NB could run ant, maven and later gradle projects with no or minimal extra configuration. (NB predates ant, but you know what I mean.)

There never was a vendor or IDE lock-in.

But if the project literally can't be built in the terminal using ant/maven/gradle (due to spaces in the path or whatever), NB should also not be able to build it! *Esp* if the solution to the problem is as trivial as bumping a wrapper to the next point release, or using a combo box to switch to a non-wrapper maven version.

A few releases from now, nobody will care about this outdated wrapper version anymore - it would be wasted effort to keep supporting it.

-> less magic is better again; let's help users fix their projects in cases where the fix is obvious



Issue #3: CoS
--------------------------------------------
Simple counter-example: Eclipse and vscode with the Microsoft Java extension pack have a far greater market share than NetBeans. Yet Eclipse (and the vscode extension pack that is derived from Eclipse) uses CoS by default. If a crowd jumps off a cliff, it's worth thinking one more time before following it over the edge...

But still, the concept survives such broad usage. So either all that crowd using JDT-based tools is incompetent, or it is not the concept itself but rather our implementation that is bad. More probably, there are scenarios that do not fit our working style, but the 'market survey' done by our "competitors" shows that it still fits many.

CoS comes naturally to IDEs like Eclipse since they started out by building projects themselves. There was the concept of an "eclipse project" (I honestly don't know if it still exists, I haven't seen those in a while), and you could literally build things using eclipse, without ant, maven or gradle, using its own eclipse compiler and configuration files etc.

In the early days I worked in eclipse projects which could not be built with javac anymore. This was the extreme version of works-in-my-IDE.

NB _integrates_ with ant/maven/gradle. In a way, NetBeans was usually more of an "integrated development environment" than the competition (speaking of the early days again), since Sun tried to integrate with tools, servers etc. instead of reinventing things inside the IDE. NB was probably the IDE with the best ant support.



With Maven and its very bad non-caching, non-incremental behaviour, CoS may be more important. If Netbeans' "run / compile" project action worked on the reactor level rather than on a single project, maven's behaviour would quickly become a major PITA. But doing a reactor build with --also-make is The Right Thing(tm) after a change.

I am mostly indifferent to CoS these days. I used it frequently back in my "web application" days, when webapps run exploded and the testing container simply recycled the classloader. I wholeheartedly agree that CoS will be always broken, no matter how good the implementation could be.

Agreed! Since we here on this mailing list are the NB maintainers, we have the responsibility to figure out how broken it can become before we have to do something about it.

Today it's quite broken already. If there is still some life in it - let's keep it a bit longer and make the warning dialog a bit more scary.

CoS is a non-trivial feature which cuts through a big part of the code base (from maven event spies which communicate with NB over loggers, to code scanning and the debugger). Since we have limited resources as an apache project (devs willing to spend time fixing/improving things), it could be healthy to focus on tool integration instead of the "NetBeans build system".

When did you see the last CoS bug fix? Is there empirical evidence that this is still maintained?



Issue #4: Priming.
--------------------------------------------
This led to more magic during project load. If you open a maven project in NB and don't have a ~/.m2/ folder yet, you will notice that it automatically downloads and installs plugins and dependencies.
I think there's a conceptual misunderstanding what "priming" should achieve. Forget about "priming build" - that's just an implementation, and not necessarily a good one.

Priming is here mostly to ensure that Code Completion can work, the IDE can resolve symbols and underline errors (without actually compiling through the build system, right ?) etc. In the case of Maven, priming also makes sure that (non-local) parent POM gets to the machine. Without a parent POM many properties (i.e. versions) are not defined, plugins are not bound to phases. As a result, the IDE's view on what the project consists of (dependencies) and what technologies are used (plugins invoked in the lifecycle) is completely broken.

The misconception is that the priming HAS TO be done through the build tool. Priming is NOT for the build tool, priming is here for the IDE. Things that the "priming" (whatever the implementation is) does must produce results that the IDE can consume.

I don't think we disagree on what priming is. Getting a dependency is one aspect of priming. A dependency can be the nbm-maven-plugin (which is required before NB knows what "nbm" packaging is) or a parent pom, yes.

My point is that priming _should_ be done with the help of the build tool - I never said that it _can't_ be done outside of build tools.


In a way, downloading the 2.5 GB maven index and then processing it for 5 minutes is also priming - but let's not get crazy here ;)



If the maven execution produces "something" in a way the IDE's code is not prepared to consume (such as downloading artifacts to different locations) ... the IDE will not get it anyway, although the machine may be fully set up from Maven's perspective. There are pieces of code that *assume* certain local repository layout (also true for gradle support). If the 'maven outcome' misses these assumptions, the IDE will be broken despite proper build tool execution. In these parts, we need to do "more IDE magic" by extracting more information from the Maven model or use Maven's libraries to solve more queries.

Sure, NB scans ~/.m2, settings.xml, pom.xml, sources, the JDK, other opened projects, etc. In no way do I advocate removing any of this. (It works pretty well too, I think.)


I sincerely doubt someone in Maven will implement this binding as it is of no value for Maven.

I am actually not so sure about this. Some things might be worth implementing as maven plugins. Gradle even has a tooling API.

maveniverse:toolbox is a user-facing CLI toolbox; who says there couldn't be something like this for IDEs if there is a need for it? MIMA (https://github.com/maveniverse/mima) already exists too, as a maven resolver "tooling API".

However, so far it looks to me like the core maven plugins would already bring us quite close to what we need for priming (some more complex projects will always need to be built).
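To illustrate what I mean (stock goals only, picked as examples rather than a concrete design - the exact set NB would call is an open question):

    mvn -q help:effective-pom    # fully resolved model, including remote parent poms
    mvn -q dependency:tree       # resolved dependency graph
    mvn dependency:go-offline    # pull (most of) the required artifacts into the local repo

All of these go through maven itself, so whatever maven is configured to do (mirrors, proxies, repository layout) is honored automatically.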



For maven, priming was historically implemented as a "Project Problem" resolving action executing the "package" or "install" goals. But I think this is the wrong way: for example, we do not _need_ to install build plugins. The IDE will not use them -- it will just load the Maven configuration model. No build plugins (and their dependencies) -> less download -> less time.

I am not advocating installing everything, but at the same time I also highly doubt that this is the right attribute to optimize for. The local repo is a cache - let's use the cache. Downloading (missing) build plugins should not be something NB has to worry about - those are required to build the project anyway.

Let's ask the user whether the pom is trusted, then let maven download the plugins, refresh the project, and call it a day.

And optimizing this is still possible; even go-offline can be configured to ignore transitive dependencies and various other things. I could even imagine running it in multiple rounds if we want the first results as early as possible.


Regarding not wanting to "install" everything into the local repo: https://maven.apache.org/resolver/local-repository.html#split-local-repository

This is another reason why we should prime using maven - right now NB already breaks this feature while opening the first project.
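(For anyone who hasn't tried it: the split layout is enabled by passing a resolver property to maven, e.g. on the command line or in .mvn/maven.config - I'm quoting the key from memory here, the exact spelling is documented on the linked page:

    mvn -Daether.enhancedLocalRepository.split=true ...

Anything that writes into ~/.m2/repository without going through maven/resolver bypasses that layout.)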


Doing a build (as it is done now) actually compiles the sources. But the sources are often broken, so the compilation fails silently. We use specific flags to "compile" as much of the reactor as possible ... but we do not _need_ source compilation for IDE operation. All that is needed is to "somehow" get the referenced dependencies to the local machine. With a maven build, it is hard to interpret whether 'priming' actually failed - or whether it was "the other stuff" during the install/package execution that failed.

The notable exception is a scenario when a Plugin generates some resources - which are then used in a compilation. That seems not as common as I would expect - but has to be supported, even if going, for example, through the "dependency:go-offline" path.

Sure, the user will have to press build at some point (often even as the first action after opening the project), especially when code generation is involved. Only the build tool truly knows how to build the project. NB has no chance of keeping up with this without reimplementing maven or interpreting the build instructions in the project's README ;)

Covering all edge cases is neither worth the effort nor realistic when the fix to the problem is something between "dependency:go-offline" and, in the worst case, a recursive "maven install --fail-never".



Even downloading dependencies may not be enough: as Maven relies on reactor project outcomes being "installed", the model building may fail on a never-compiled (and never-installed) project part; I don't remember exactly, but I tried to use go-offline about 1.5 years ago, and my attempt to ditch the current "priming build" failed on something like that. Surely solvable by 'more magic' during the project model (for the IDE!) build - I ran out of time in that attempt, and I need to retry.

Well, it depends. Sometimes it is not even desired to resolve to installed artifact dependencies. If a user opens two projects, NB is often smart enough to resolve/navigate to the other project's source code if it is a dependency, instead of to the artifact's sources jar in the repo.

go-offline will not work in all cases (obviously). But no priming logic outside of maven will either! We would replace one imperfect solution which we have to maintain with another imperfect solution we don't have to maintain - plus there is the option to contribute something to go-offline (or some other plugin).

Just ask the user what to do (rough CLI sketch below):

 1) build the project right away (perfect project support, apart from maven-indexer priming for power users)

 2) try downloading dependencies, invoke a phase like "validate" or a goal directly (possibly imperfect project support)

 3) do nothing and live with error markers, and do 1) or 2) later (likely imperfect project support, unless there is already enough in the cache to work well enough for the user)
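Roughly, and just as an illustration of what those options could translate to on the command line (the concrete goals/phases are an assumption for the sake of the example, not a proposal):

    # 1) full build, best possible project metadata afterwards
    mvn --fail-at-end install

    # 2) fetch dependencies without compiling the sources
    mvn dependency:go-offline
    # ... or invoke a cheap phase, as mentioned in option 2
    mvn validate

    # 3) nothing for now; offer 1) or 2) later via the project problems UI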



Running a (regular) build triggered by the user is not a good option if it is the only way: the sources may be broken, the build will fail in the middle ... leaving the dependencies half-initialized (up to the failed module). Regular (maven) builds do not use --fail-at-end.

Again: let's not think in binary terms here. Sometimes "maven verify --fail-never" will be the best option, sometimes some shortcut will be better.

"dependency issue detected, try building the project or its dependencies" would already communicate the problem to the user.



I insist on that the IDE must be able to open and operate well on a fresh checkout on a pristine machine - even though the [java] sources (!) are not compilable at the moment. And that means "do not fail, when the CLI fails".

I would even like to see NB still being somewhat useful while temporarily offline. But this is a "nice to have", not a "must have".
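(For example, once the local repo is primed, even builds can keep working through maven's own offline mode - just to illustrate, nothing NB-specific here:

    mvn -o verify    # '-o' / '--offline': resolve everything from the local repo only
)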



Gradle support downloads its plugins (it cannot configure the project without plugin beans, actually) - but does not execute build tasks, just uses the *build system* to download, and build the configuration model - which is then dumped and read by the IDE.

To get more technical - even having 'an agent' in a build system's process or daemon (as Gradle does now) is not an 'easy and clean' solution. The agent has to vary between build system major versions (gradle now has conditional parts, and even version-specific reflective access, as the gradle API changes rapidly and even incompatibly); that's comparable to a modularized "embedder". Maven's internal processing allows one to get a huge amount of positional information (cannot compare with Gradle :) as gradle does not retain much AST information internally); in a computed MavenProject, one can find declaration origins in the merged configuration -- this is hard to reproduce, as it would mean replicating Maven's own logic. An agent would have to serve this information.

Using dependency:go-offline by itself achieves only very little in terms of multiple Maven version support or reliability of the IDE project model.

Any priming logic outside of maven which literally downloads stuff into the local repo will face the exact same problems. dependency:go-offline is one tool in the box.

And yes, the fact that NB embeds maven is already a compromise. Some in-editor hints struggle with that issue. This will become interesting again once someone attempts to upgrade the embedder to maven 4.


I also expect mvnd to become more widely used once mvnw is able to deploy it. Many projects use mvnw, which at the moment somewhat excludes mvnd from being experimented with.

-mbien



-Svata.


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@netbeans.apache.org
For additional commands, e-mail: dev-h...@netbeans.apache.org

For further information about the NetBeans mailing lists, visit:
https://cwiki.apache.org/confluence/display/NETBEANS/Mailing+lists




