Re: purpose of repository configuration in the database?

2007-09-08 Thread Brett Porter
Joakim - any thoughts on these 2 additional questions? I'd like to  
finish tidying this section up soon.


Cheers,
Brett

On 21/08/2007, at 1:05 PM, Brett Porter wrote:



On 21/08/2007, at 1:03 PM, Joakim Erdfelt wrote:


Simple: Reporting.

I didn't want to have to merge contents from multiple sources in
order to produce a report.




Makes sense, but I can't see where it is actually used?

And the structure of the plexus-registry basically forced me into  
using the configuration approach over the db approach.


how so?

- Brett



- Joakim

Brett Porter wrote:

Hi,

Can someone explain the need for having the repository  
configuration stored in the database?


I've observed that:
- it is only saved there from the configuration files
- it is never modified, nor does it store any information beyond
what is in the configuration

- it is never removed
- it does not appear to be referenced from any other jdo queries  
(except some constraint classes that in turn are not used)

- some of the fields are never referenced (only the id/name/url are)

We do a lot of looking up the list of repositories from the  
database, which isn't as efficient as using the equivalent stuff  
already in memory from the configuration. Is there any reason not  
to make a few substitutions there?


Thanks,
Brett



--
Brett Porter - [EMAIL PROTECTED]
Blog: http://www.devzuz.org/blogs/bporter/


SCM Matrix for Continuum?

2007-09-08 Thread Wendy Smoak
I'm in need of an SCM Matrix for a Continuum talk... can someone
familiar with maven-scm take a look at the one on the wiki and let me
know if it's up to date?

http://docs.codehaus.org/display/SCM/SCM+Matrix

Also, what columns are relevant for Continuum?

Thanks,
-- 
Wendy


Re: [proposal] Make like reactor build mode

2007-09-08 Thread Jason Dillon

On Sep 4, 2007, at 6:10 PM, Brian E. Fox wrote:

http://docs.codehaus.org/display/MAVEN/Make+Like+Reactor+Mode
Make like build behavior mode

Maven currently has a top-down build approach where you start at the top
of a reactor and build all children. One of the most common requests I
get from my developers is an easy way to build certain artifacts from
the bottom up. Oftentimes a build, especially a large one, will contain
many modules needed for a full build, but these are actually made up of
pockets of interdependencies. It's not always possible to logically
group these all together under a common parent to enable easily building
a subtree.

+---services
|   +---a-service
|   +---b-service
|   \---c-service
\---ui
    +---a-ui
    +---b-ui
    \---c-ui

The packages inherit from the package parent, etc. Assume that
a-package depends on a-service, b-service, and a-ui.

In Maven, there is currently no easy way to make a change to a-service
and build both it and the package at once. This can be controlled to some
extent with modules in profiles, but that quickly becomes unwieldy and
doesn't support the granularity needed.


I'm confused... can't you just:

(cd services/a-service; mvn install)

Or perhaps even:

mvn -f services/a-service/pom.xml install



Proposed Solution

The ideal use case for this issue is:

1. Developer makes a change to the code in a-service.

2. Developer goes to a-package and executes mvn -A install (-A for "all").

3. Maven walks up the parent tree to see how far up the source goes.
Then any dependencies in the graph that are found in the contained
source on disk are ordered and built. Everything else is ignored in the
build.
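Step 3 is essentially a depth-first walk of the dependency graph restricted to modules present in the local source tree, emitting dependencies before dependents. A minimal sketch in Java (the module names and the shape of the deps map are illustrative, not Maven's actual project model):

```java
import java.util.*;

public class MakeLikeReactor {
    // Hypothetical module graph: artifactId -> artifactIds it depends on.
    // Only modules found in the local source tree appear as keys.
    static List<String> buildOrderFor(String target, Map<String, List<String>> deps) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        visit(target, deps, visited, order);  // post-order visit = dependencies first
        return order;
    }

    private static void visit(String module, Map<String, List<String>> deps,
                              Set<String> visited, List<String> order) {
        if (!visited.add(module) || !deps.containsKey(module)) {
            return;  // already scheduled, or not part of the local source tree
        }
        for (String dep : deps.get(module)) {
            visit(dep, deps, visited, order);
        }
        order.add(module);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("a-package", Arrays.asList("a-service", "b-service", "a-ui"));
        deps.put("a-service", Collections.<String>emptyList());
        deps.put("b-service", Collections.<String>emptyList());
        deps.put("a-ui", Arrays.asList("a-service"));
        // Everything a-package needs, dependencies before dependents:
        System.out.println(buildOrderFor("a-package", deps));
        // prints [a-service, b-service, a-ui, a-package]
    }
}
```

Running main prints the modules a-package needs, in build order, with everything outside the local tree ignored.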


Ohhh... I see, you want the build of a-service to also build its
dependencies if they are local?  That would be rather handy ;-)  Even
handier if mvn plugins were a wee bit smarter about skipping things when
they don't need to be done, but that's another problem...




Alternate Use Case:

2. Instead of going to a-package and executing mvn, the developer
goes to the top-level parent and executes mvn -Aa-package (in this
case defining the module that should be considered for dependency
graphing).

3. Maven builds the graph and builds what is needed.

This use case isn't ideal, but it is probably easier to implement since
the top-level parent doesn't need to be located and everything to be built
is included in the subtree.


Hey, I'm not sure how much this would help other folks... but
sometimes the graph building that gets done before the reactor
gets kicked off takes a long time.  For huge projects like Apache
Geronimo (gosh, 200+ modules now) it does take a while to get things
started.  Would there be any value in caching a serialized version of
the object graph and using that unless a pom.xml has changed, which
would invalidate it?  Perhaps toss the cached pom into each
module's .m2/pom.ser or something?
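The caching idea floated above can be sketched with plain Java serialization plus a last-modified check. Everything here (the file layout, the class and method names) is hypothetical, and note that the staleness check is only as reliable as the filesystem's timestamp granularity:

```java
import java.io.*;
import java.util.function.Supplier;

public class PomGraphCache {
    // Hypothetical sketch: keep a serialized project model next to each
    // module (e.g. .m2/pom.ser) and reuse it while the pom.xml it was
    // built from is older than the cache file.
    static Object loadOrRebuild(File pom, File cache, Supplier<Object> rebuild) {
        if (cache.isFile() && cache.lastModified() >= pom.lastModified()) {
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(cache))) {
                return in.readObject();           // cache hit: pom.xml unchanged
            } catch (IOException | ClassNotFoundException e) {
                // fall through and rebuild on a corrupt or unreadable cache
            }
        }
        Object model = rebuild.get();             // the expensive graph build
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(cache))) {
            out.writeObject(model);               // best-effort write of the cache
        } catch (IOException e) {
            // caching is opportunistic; ignore failures
        }
        return model;
    }
}
```

On the first call the supplier runs and the result is serialized; subsequent calls deserialize the cache until the pom.xml is touched again.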


I dunno... it might only hurt folks with massive projects that really
should refactor into smaller chunks, so let them live with the pain
until they learn ;-)


--jason



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: The importance of working in branches

2007-09-08 Thread Jason Dillon
Have you guys tried using SVK on the maven tree in the ASF SVN?  This
is a really kick-ass tool for managing branches and merging repos.
It's like Perforce for SVN... er, well, kinda.


I really wanted to use this for Geronimo, but the huge singleton ASF
SVN repo is not really helping out; actually it's causing problems.
SVK works by making a local SVN repository and then syncing the
entire revision history for whatever paths you're working on.  This is so
that it can easily look up revisions for merge points and handle
merging automatically for the most part.  The huge SVN repo makes
that process slow and oftentimes will kill an SVK sync due to, eh,
a bug in SVN, a bug in SVK, bug blah...


The tool is still getting better and better... but the huge singleton
ASF SVN repo for everything-under-the-sun (sans uber-secret-private-muck)
isn't really helping things at all.


IMO, each TLP at the ASF should have its own SVN repo... um, say,
kinda like they have over at the Codehaus ;-)


 * * *

But anyways, I really think that using branches is a very good
idea... it's just that with SVN as it is, sometimes merging changes
back from your branches can be more trouble than it's worth and can in
some cases be problematic.


If you guys can get SVK working for you... then I'd highly recommend
using it to create small feature branches for isolated work, then
once the feature is ready simply merge it back in.  If you keep your
feature branch updated frequently with changes from the limb you
branched from (which is very easy), then when it's time to merge your
work back you can most often do it with your eyes closed.


--jason


On Sep 7, 2007, at 3:52 PM, John Casey wrote:

I thought we'd agreed a while back to do this, but apparently that's
not the case.


When I updated from Subversion today, I got a nasty surprise. Lo  
and behold, it would not bootstrap! I didn't find much discussion  
on the dev list about a massive reorganization (none, in fact), and  
the number of commits makes it pretty difficult to parse out  
exactly where things went wrong. Suffice it to say that I'm doing  
all of my build/plugin work today based on 2.1-SNAPSHOT artifacts,  
not on a trunk build.


Can we please, please, PLEASE start using feature friggin  
branches?? It's not so difficult to merge them in when you're done,  
and I'm willing to help anyone who has trouble.


-john

---
John Casey
Committer and PMC Member, Apache Maven
mail: jdcasey at commonjava dot org
blog: http://www.ejlife.net/blogs/john








Re: [vote] bring shade-maven-plugin to apache

2007-09-08 Thread Jason Dillon

On Sep 7, 2007, at 4:38 PM, Brian E. Fox wrote:

Results:
+7 (Binding) Brian, Jason, Brett, Stephane, Lukas, Dennis, Arnaud
+2 (Non Binding) Rafale, Andy

This vote has passed. I will wrap up the move, most likely over the
weekend. It will first go to the sandbox, where we will perform the
refactoring of packages and add the Apache headers.

Since I'm also a mojo committer, I'll call a vote over there to decide
what happens with the source in mojo.


I'd just do it... I don't think anyone even knows it's there, actually.
But I guess it's nice to, ya know, give folks a chance ;-)


--jason





Re: [vote] Mauro Talevi committer access

2007-09-08 Thread Mauro Talevi

Brian E. Fox wrote:


This vote has passed. Jason, since Mauro is already an Apache committer,
I believe only granting of Karma is required.

Please join me in welcoming Mauro!


Thanks to all.  It's a pleasure to join the Maven community and I look forward to collaborating with
everybody.


Cheers






Re: Using HTTP repositories for consumption only

2007-09-08 Thread Mauro Talevi

Jason van Zyl wrote:


On 7 Sep 07, at 5:20 PM 7 Sep 07, Brian E. Fox wrote:


I don't currently, but have in the past used file:// for remote.



For deploying or actually pulling? Deploying is a totally different 
story. I know tons of people who use file, dav, scp, and ftp. Strictly 
for pulling I'm saying. And I'm not saying it will satisfy our users, 
just throwing out the idea. But HTTP is pretty much ubiquitous, handles 
all security concerns, easily distributed ...



I agree that http is the most widely used and will satisfy the majority of use
cases.

But consider the following use case: a commercial product delivered in the form of multiple
artifacts, which the user will then build upon (API-level artifacts).  Supporting a file:// protocol
would enable the artifacts to be delivered as a repo and would not require http access to import
them into the local repo.



The use case was that we had to mirror our internal repo to another corp
network. We essentially zipped up the repo and transferred it to their
machine (regularly and automatically via scm), and they set a mirror entry
pointing to the local fs. It had to be done this way because a proxied
connection to our internal repo was not allowed; they needed full copies
of the entire build in scm.
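For reference, that kind of local-filesystem mirror is just a settings.xml entry pointing at the unpacked copy; a minimal sketch (the id and path here are illustrative):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>internal-copy</id>
      <!-- mirror every remote repository with the local unpacked copy -->
      <mirrorOf>*</mirrorOf>
      <url>file:///opt/mirrors/internal-repo</url>
    </mirror>
  </mirrors>
</settings>
```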


I think these could probably be special tools using Wagon,
or something else like an rsync tool would be ideal.


I agree though that file:// is pretty useful. Just looking to make the 
mechanism as streamlined, simple, and consistent as possible.


How is removing file:// going to significantly simplify the mechanism?

Cheers





Re: Using HTTP repositories for consumption only

2007-09-08 Thread Jason van Zyl


On 8 Sep 07, at 5:43 AM 8 Sep 07, Mauro Talevi wrote:


Jason van Zyl wrote:

On 7 Sep 07, at 5:20 PM 7 Sep 07, Brian E. Fox wrote:

I don't currently, but have in the past used file:// for remote.

For deploying or actually pulling? Deploying is a totally  
different story. I know tons of people who use file, dav, scp, and  
ftp. Strictly for pulling I'm saying. And I'm not saying it will  
satisfy our users, just throwing out the idea. But HTTP is pretty  
much ubiquitous, handles all security concerns, easily  
distributed ...



I agree that http is the most widely used and will satisfy the  
majority of use cases.


But consider the following use case:  a commercial product  
delivering in the form of multiple artifacts, which then the user  
will build upon (API level artifacts).  Supporting a file://  
protocol would enable the artifacts to be delivered as a repo and  
would not require http access to import in the local repo.


I still think we can make tools to deal with delivering a repository
and accessing a local file://. But for remote, actually getting
artifacts over the wire, HTTP (the more that I think about it) is all
we really need. I think the vast majority of users are doing HTTP(S).


Even a product delivered as a repo (which my last client started
working with their vendor to do) can be delivered using the
repository builder and another tool like a repository importer.
The exporter/importer could be made to work with file and http. But
for remote fetching, just HTTP, ditching things like FTP, SCP and
DAV for actually retrieving artifacts remotely.




The use case was that we had to mirror our internal repo to another corp
network. We essentially zipped up the repo and transferred it to their
machine (regularly and automatically via scm), which set a mirror entry
pointing to the local fs. This had to be done this way because a proxied
connection to our internal repo was not allowed, they needed full copies
of the entire build in scm.

I think these could probably be special tools using Wagon, or
something else like an rsync tool would be ideal.

I agree though that file:// is pretty useful. Just looking to make
the mechanism as streamlined, simple, and consistent as possible.


How is removing file:// going to significantly simplify the mechanism?



Not using Wagon, our abstraction, and directly focusing on HTTP.


Cheers





Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--







Re: Using HTTP repositories for consumption only

2007-09-08 Thread Wendy Smoak
On 9/7/07, Jason van Zyl [EMAIL PROTECTED] wrote:

 On 7 Sep 07, at 5:20 PM 7 Sep 07, Brian E. Fox wrote:

  I don't currently, but have in the past used file:// for remote.
 

 For deploying or actually pulling? Deploying is a totally different
 story. I know tons of people who use file, dav, scp, and ftp.
 Strictly for pulling I'm saying. And I'm not saying it will satisfy
 our users, just throwing out the idea. But HTTP is pretty much
 ubiquitous, handles all security concerns, easily distributed ...

I've used file:// many times this year to ensure that a set of
artifacts delivered as remote repository contents actually contains
everything needed to build, and for testing pre-release artifacts
locally that we don't want to put in the internal corporate repo just
yet.

And it has applications for a 'self contained' project where the
repository is packaged with the source code, and builds against those
artifacts with no access to external repositories.

Losing file:// would be a pain; the others I can live without.

-- 
Wendy




Re: Using HTTP repositories for consumption only

2007-09-08 Thread Jason van Zyl


On 8 Sep 07, at 8:50 AM 8 Sep 07, Wendy Smoak wrote:


On 9/7/07, Jason van Zyl [EMAIL PROTECTED] wrote:


On 7 Sep 07, at 5:20 PM 7 Sep 07, Brian E. Fox wrote:


I don't currently, but have in the past used file:// for remote.



For deploying or actually pulling? Deploying is a totally different
story. I know tons of people who use file, dav, scp, and ftp.
Strictly for pulling I'm saying. And I'm not saying it will satisfy
our users, just throwing out the idea. But HTTP is pretty much
ubiquitous, handles all security concerns, easily distributed ...


I've used file:// many times this year to ensure that a set of
artifacts delivered as remote repository contents actually contains
everything needed to build, and for testing pre-release artifacts
locally that we don't want to put in the internal corporate repo just
yet.

And it has applications for a 'self contained' project where the
repository is packaged with the source code, and builds against those
artifacts with no access to external repositories.

Losing file:// would be a pain; the others I can live without.



I agree that file should be supported and that's very easy.


--
Wendy




Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--







Re: What can possibly go wrong with Maven

2007-09-08 Thread Jorg Heymans

Jason van Zyl wrote:


Anyone have anything else? I'm not trying to consider everything that 


Any chance that mvn could indicate the exact pom.xml locations of
duplicated projects?


So instead of this:

[INFO] Project 'testgroup:testartifactA' is duplicated in the reactor
[INFO] ------------------------------------------------------------------------

something like:

[INFO] Project 'testgroup:testartifactA' is duplicated in the reactor
[INFO] This project is defined by the following poms:
[INFO] - /tmp/project/module1/pom.xml
[INFO] - /tmp/project/module2/pom.xml
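Producing such a report is a matter of grouping pom paths by their groupId:artifactId and flagging any id declared more than once; a sketch (the input map stands in for whatever the reactor scan collects, and the paths are illustrative):

```java
import java.util.*;

public class DuplicateProjects {
    // Sketch: map each groupId:artifactId to the pom paths that declare it,
    // then keep only ids declared by more than one pom.
    static Map<String, List<String>> duplicates(Map<String, String> pomToProjectId) {
        Map<String, List<String>> byId = new TreeMap<>();
        for (Map.Entry<String, String> e : pomToProjectId.entrySet()) {
            byId.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        byId.values().removeIf(poms -> poms.size() < 2);  // drop non-duplicates
        return byId;
    }

    public static void main(String[] args) {
        Map<String, String> poms = new TreeMap<>();
        poms.put("/tmp/project/module1/pom.xml", "testgroup:testartifactA");
        poms.put("/tmp/project/module2/pom.xml", "testgroup:testartifactA");
        poms.put("/tmp/project/module3/pom.xml", "testgroup:testartifactB");
        duplicates(poms).forEach((id, paths) -> {
            System.out.println("[INFO] Project '" + id + "' is duplicated in the reactor");
            System.out.println("[INFO] This project is defined by the following poms:");
            paths.forEach(p -> System.out.println("[INFO] - " + p));
        });
    }
}
```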



Regards
Jorg





Re: What can possibly go wrong with Maven

2007-09-08 Thread Jason van Zyl


On 8 Sep 07, at 11:38 AM 8 Sep 07, Jorg Heymans wrote:


Jason van Zyl wrote:

Anyone have anything else? I'm not trying to consider everything that


Any chance that mvn could indicate the exact pom.xml locations of  
duplicated projects ?




No reason why it couldn't; we know the source of everything at some
point.


Noted.


So instead of this:

[INFO] ------------------------------------------------------------------------
[INFO] Project 'testgroup:testartifactA' is duplicated in the reactor
[INFO] ------------------------------------------------------------------------

something like:

[INFO] ------------------------------------------------------------------------
[INFO] Project 'testgroup:testartifactA' is duplicated in the reactor
[INFO] This project is defined by the following poms:
[INFO] - /tmp/project/module1/pom.xml
[INFO] - /tmp/project/module2/pom.xml
[INFO] ------------------------------------------------------------------------


Regards
Jorg





Thanks,

Jason

--
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
--







Re: Using HTTP repositories for consumption only

2007-09-08 Thread Brett Porter


On 09/09/2007, at 1:11 AM, Jason van Zyl wrote:



Not using Wagon, our abstraction, and directly focusing on HTTP.


Doesn't that mean adding a bunch of HTTP code, listeners, etc. into
the artifact code, and creating two places to maintain something
essentially the same, which doesn't really buy anything? What problem
with Wagon are you trying to solve?


This is the opposite direction to what Maven 1.1 did - maybe the guys  
that worked on that could share their experience of whether it ended  
up better off or not?


I do remember one of the reasons I switched to Wagon there was  
because ftp:// was getting a number of bug reports. Not sure if  
anyone still uses ftp://, but I don't currently see a reason to  
remove it.


Cheers,
Brett




Fwd: [continuum] BUILD FAILURE: Maven Dependency Tree

2007-09-08 Thread Brett Porter
Olivier just flipped the projects that only require JDK 1.4 over to  
that JDK using profiles on Continuum, so we might see some build  
failures for things using APIs from Java 5 that shouldn't be, such as  
this...


Begin forwarded message:

From: [EMAIL PROTECTED]  
[EMAIL PROTECTED]

Date: 9 September 2007 9:30:28 AM
To: [EMAIL PROTECTED]
Subject: [continuum] BUILD FAILURE: Maven Dependency Tree
Reply-To: dev@maven.apache.org
Reply-To: [EMAIL PROTECTED]  
[EMAIL PROTECTED]

List-Id: notifications.maven.apache.org
Message-Id:  
[EMAIL PROTECTED]


Online report : http://maven.zones.apache.org/continuum/buildResult.action?buildId=20861&projectId=399


Build statistics:
 State: Failed
 Previous State: Ok
 Started at: Sat 8 Sep 2007 23:30:15 +
 Finished at: Sat 8 Sep 2007 23:30:25 +
 Total time: 9s
 Build Trigger: Schedule
 Build Number: 37
 Exit code: 1
 Building machine hostname: maven.zones.apache.org
 Operating system : SunOS(unknown)
 Java Home version :  java version 1.4.2_06
 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_06-b03)
 Java HotSpot(TM) Client VM (build 1.4.2_06-b03, mixed mode)
Builder version :
 Maven version: 2.0.7
 Java version: 1.4.2_06
 OS name: sunos version: 5.10 arch: x86

****************************************************************************
SCM Changes:
****************************************************************************

No files changed

****************************************************************************
Dependencies Changes:
****************************************************************************

org.apache.maven:maven-project:2.0.8-SNAPSHOT

****************************************************************************
Test Summary:
****************************************************************************

Tests: 0
Failures: 0
Total time: 0

****************************************************************************
Output:
****************************************************************************

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Dependency Tree
[INFO]    task-segment: [clean, install]
[INFO] ------------------------------------------------------------------------
[INFO] [clean:clean]
[INFO] Deleting directory /export/home/build/data/continuum/checkouts/399/target
[INFO] [plexus:descriptor {execution: default}]
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] Resource directory does not exist: /export/home/build/data/continuum/checkouts/399/src/main/resources
[INFO] Copying 1 resource
[INFO] [compiler:compile]
[INFO] Compiling 18 source files to /export/home/build/data/continuum/checkouts/399/target/classes
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Compilation failure

/export/home/build/data/continuum/checkouts/399/src/main/java/org/apache/maven/shared/dependency/tree/traversal/SerializingDependencyNodeVisitor.java:[182,18] cannot resolve symbol
symbol  : method append (java.lang.String)
location: class java.io.PrintWriter

/export/home/build/data/continuum/checkouts/399/src/main/java/org/apache/maven/shared/dependency/tree/traversal/SerializingDependencyNodeVisitor.java:[187,18] cannot resolve symbol
symbol  : method append (java.lang.String)
location: class java.io.PrintWriter

[INFO] ------------------------------------------------------------------------
[INFO] For more information, run Maven with the -e switch
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8 seconds
[INFO] Finished at: Sat Sep 08 23:30:25 GMT+00:00 2007
[INFO] Final Memory: 11M/28M
[INFO] ------------------------------------------------------------------------

****************************************************************************




--
Brett Porter - [EMAIL PROTECTED]
Blog: http://www.devzuz.org/blogs/bporter/
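The compile errors in the log above occur because PrintWriter.append(CharSequence) only exists since Java 5 (via the Appendable interface); on JDK 1.4 the method isn't there. The JDK 1.4-safe substitution is write(String), which emits the same characters, though unlike append it returns void rather than the writer, so chained calls need restructuring. A sketch of the swap (the class and method names here are illustrative, not the actual SerializingDependencyNodeVisitor code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class Jdk14SafeWriter {
    // PrintWriter.append(CharSequence) arrived with Appendable in Java 5;
    // PrintWriter.write(String) is available on JDK 1.4 and produces the
    // same output, so it is the drop-in replacement here.
    static String render(String text) {
        StringWriter buffer = new StringWriter();
        PrintWriter writer = new PrintWriter(buffer);
        writer.write(text);   // instead of writer.append(text)
        writer.flush();
        return buffer.toString();
    }
}
```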



Re: [continuum] BUILD FAILURE: Maven Dependency Tree

2007-09-08 Thread Carlos Sanchez
will look into it

On 9/8/07, Brett Porter [EMAIL PROTECTED] wrote:
 Olivier just flipped the projects that only require JDK 1.4 over to
 that JDK using profiles on Continuum, so we might see some build
 failures for things using APIs from Java 5 that shouldn't be, such as
 this...


 --
 Brett Porter - [EMAIL PROTECTED]
 Blog: http://www.devzuz.org/blogs/bporter/




-- 
I could give you my word as a Spaniard.
No good. I've known too many Spaniards.
 -- The Princess Bride
