Re: Proposal: jtreg tests with native components

2014-05-01 Thread Staffan Larsen

On 1 May 2014, at 07:45, David Holmes  wrote:

> On 30/04/2014 9:39 PM, Staffan Larsen wrote:
>> 
>> On 30 apr 2014, at 11:39, David Holmes  wrote:
>> 
>>> Hi Staffan,
>>> 
>>> On 25/04/2014 10:02 PM, Staffan Larsen wrote:
 There are a couple of jtreg tests today that depend on native components 
 (either JNI libraries or executables). These are handled in one of two 
 ways:
 
 1) The binaries are pre-compiled and checked into the repository (often 
 inside jar files).
 2) The test will try to invoke a compiler (gcc, cl, …) when the test is 
 being run.
 
 Neither of these are very good solutions. #1 makes it hard to set up the 
 test for all platforms and requires binaries in the source 
 control system. #2 is hit-and-miss: the correct compiler may or may not be 
 installed on the test machine, and the approach requires platform specific 
 logic to be maintained.
>>> 
>>> #2 is far from perfect but ...
>>> 
 I would like to propose that these native components are instead compiled 
 when the product is built by the same makefile logic as the product. At 
 product build time we know we have access to the (correct) compilers and 
 we have excellent support in the makefiles for building on all platforms.
 
 If we build the native test components together with the product, we also 
 have to take care of distributing the result together with the product 
 when we do testing across a larger number of machines. We will also need a 
 way to tell the jtreg tests where these pre-built binaries are located.
>>> 
>>> don't underestimate the complexity involved in building and then 
>>> "distributing" the test binaries.
>> 
>> I don’t. It will be complicated, but I’m sure we can do it.
> 
> The question is whether it is worth it relative to the size of the problem.

I think we will see a large influx of these kinds of tests, especially in the 
hotspot repo.

> 
>>> 
>>> You will still need to maintain platform specific logic as you won't 
>>> necessarily be able to use the CFLAGS etc that the main build process uses.
>> 
>> Can you explain more? Why can’t I use CFLAGS as it is?
> 
> You _may_ be able to, you may not. I know we already had issues where the 
> CFLAGS being used for the JDK sources also got applied to building the 
> code-generator utility programs and that didn't work correctly. Here's a 
> sample CFLAGS from a JDK build:
> 
> CFLAGS_JDKLIB:=  -W -Wall -Wno-unused -Wno-parentheses   -pipe  
> -D_GNU_SOURCE -D_REENTRANT -D_LARGEFILE64_SOURCE -fno-omit-frame-pointer  
> -D_LITTLE_ENDIAN -DLINUX -DNDEBUG -DARCH='"i586"' -Di586 
> -DRELEASE='"$(RELEASE)"' 
> -I/export/users/dh198349/ejdk8u-dev/build/b13/linux-i586-ea/jdk/include 
> -I/export/users/dh198349/ejdk8u-dev/build/b13/linux-i586-ea/jdk/include/linux 
>   -I/export/users/dh198349/jdk8u-dev/jdk/src/share/javavm/export 
> -I/export/users/dh198349/jdk8u-dev/jdk/src/solaris/javavm/export 
> -I/export/users/dh198349/jdk8u-dev/jdk/src/share/native/common   
> -I/export/users/dh198349/jdk8u-dev/jdk/src/solaris/native/common -m32  
> -fno-strict-aliasing -fPIC
> 
> Does that make sense for compiling a test? Does it depend on whether we are 
> building a native library or a native executable?

I think they make sense, at least initially. If not, we can tune them, but that 
"tuning” will be in one central location and not spread out in a lot of shell 
scripts. I also plan to allow individual tests to override the flags if 
necessary (for example to link with X11). 

For executables there is CFLAGS_JDKEXE. 
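
For illustration, a minimal sketch of such a test library (the class and
function names are hypothetical, not actual test code): it compiles fine with
the central CFLAGS_JDKLIB defaults but needs an extra -lX11 when linking,
which is exactly the kind of per-test override meant above.

    /* Hypothetical native part of a jtreg test that needs X11. */
    #include <jni.h>
    #include <X11/Xlib.h>

    JNIEXPORT jboolean JNICALL
    Java_DisplayTest_canOpenDisplay(JNIEnv *env, jclass cls)
    {
        (void)env;
        (void)cls;
        Display *d = XOpenDisplay(NULL);   /* requires linking with -lX11 */
        if (d == NULL) {
            return JNI_FALSE;
        }
        XCloseDisplay(d);
        return JNI_TRUE;
    }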

Thanks,
/Staffan

> 
>>> 
>>> Also talk to SQE as I'm pretty sure there is an existing project to look at 
>>> how to better handle this, at least for the internal test suites.
>> 
>> I have talked to SQE. I don’t know of any other projects to handle this.
> 
> :) It wasn't SQE, it was your project as referenced in a few bug reports last 
> August/September.
> 
> David
> 
> 
>> /Staffan
>> 
>> 
>>> 
>>> David
>>> -
>>> 
 I suggest that at the end of a distributed build run, the pre-built test 
 binaries are packaged in a zip or tar file (just like the product bits) 
 and stored next to the product bundles. When we run distributed tests, we 
 need to pick up the product bundle and the test bundle before the testing 
 is started.
 
 To tell the tests where the native code is, I would like to add a flag to 
 jtreg to point out the path to the binaries. This should cause jtreg to 
 set java.library.path before invoking a test and also set a test.* 
 property which can be used by the test to find its native components.
 
 This kind of setup would make it easier to add and maintain tests that 
 have a native component. I think this will be especially important as more 
 tests are written using jtreg in the hotspot repository.
 
 Thoughts on this? Is the general approach ok? There are lots of details to be 
 figured out, but at this stage I would like to hear feedback on the idea as such.

Re: Cross-building Windows binaries using the mingw toolchain

2014-05-01 Thread Ivan Krylov
What would this give in practical terms? Suppose you are working on a 
cross-platform bug or feature. You want to make sure that the fix works 
on Windows as well. I would think that testing the fix with a 
cross-compiled build would be insufficient for the reasons that Volker 
lists. If the produced bits aren't the same (and they won't be the same) 
we cannot trust the results of such testing. And the test matrix for 
OpenJDK is already rather complex, and making it even more complex with 
cross-compilation options will not help. And one still needs at least a 
virtualized copy of Windows to run tests anyway.
I am not sure what the status of Visual Studio Express support in the 
build chain is, but I'd rather see that in place as an alternative to 
the use of Visual Studio Professional. This way the produced JDK binaries 
will be much closer to the stock ones.


Thanks,
Ivan

On 30/04/14 19:00, Volker Simonis wrote:

On Wed, Apr 30, 2014 at 6:31 PM, Florian Weimer  wrote:

On 04/30/2014 06:16 PM, Volker Simonis wrote:


The first one is to make the OpenJDK compile on Windows with the MinGW
toolchain (instead of Cygwin). This currently doesn't work out of the
box but is relatively easy to achieve (see for example "8022177:
Windows/MSYS builds broken"
https://bugs.openjdk.java.net/browse/JDK-8022177). Magnus and I are
working on this (and actually I have an internal build which works
with MinGW). Hopefully we can fix this in the OpenJDK soon.


Thanks for your input. If you say "MinGW toolchain", you mean the scripting
environment, but not the compiler and linker, right?



The second one is to cross-compile the whole OpenJDK on Linux using
GCC and MinGW. If I understood you right, that's what you actually
wanted.


Yes, that's what I'm interested in.



I personally think that would be nice to have, but at the same time I
also think it would be quite hard to get there and probably not worth
doing, because even if you'd succeed nobody will probably maintain it
and it would break quite soon (see for example the GCC/Solaris build or
the Clang/Linux build).


It's clear to me that this is worthwhile only if I set up a builder which
detects bit rot quickly.



If you want to try it nevertheless, some of the problems you will face
will be at least the following ones:
- convert the HotSpot nmake makefiles to GNU syntax (notice that this
project is currently underway under the umbrella of the new build
system anyway, so you'd probably want to wait to avoid doing double
work)


Ah, interesting.



- convert Visual Studio intrinsics, inline assembler and compiler
idiosyncrasies to GCC syntax


Ahh, I wonder how much I will encounter there.  That would be a prerequisite
for a pure-MinGW build on Windows as well, right?
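
For a flavor of the conversions in question, here is a sketch (not taken
from the actual HotSpot sources) where the same construct needs both an
MSVC and a GCC spelling:

    /* Function attributes: MSVC __declspec vs. GCC __attribute__. */
    #ifdef _MSC_VER
      #define NOINLINE __declspec(noinline)
    #else
      #define NOINLINE __attribute__((noinline))
    #endif

    /* A compiler-only memory barrier: MSVC intrinsic vs. GCC inline asm. */
    #ifdef _MSC_VER
      #include <intrin.h>
      #define compiler_barrier() _ReadWriteBarrier()
    #else
      #define compiler_barrier() __asm__ volatile ("" : : : "memory")
    #endif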



- you'll probably also need to cross compile dependencies like
libfreetype with GCC/MinGW


Fedora already covers those, although the paths are somewhat unexpected.



- I'm actually not an expert here, but the OpenJDK is linked against some
native Windows libraries like DirectX and runtime libraries from the
Microsoft SDKs, and I don't know how that would work for a cross-compile.


We supposedly have the headers and import libraries in Fedora.



I personally think we should rather focus on further improving the
current Windows build. It's already a huge improvement compared to the
old JDK7 Windows build. From what I see, the main remaining problem is
to somehow make it possible to get a stable, defined and free version
of the Microsoft development tools which is "known to work".


Yes, I tried to set up a Windows development environment, but quickly got
confused.

My background here is that I want to contribute some new features and I
expect that feature parity for Windows will increase the likelihood of
acceptance.


But why can't you install Cygwin and the free Microsoft Express/SDK
compilers and do a native build? From my experience that's a matter of
a few hours (and you can find quite a lot of documentation/tutorials/help
on the web and on the various mailing lists). Doing that, you could be
sure that you really test what others (i.e. especially Oracle) will
get. Cross-compiling your new feature with a MinGW toolchain doesn't
mean that others will be able to compile and run that code with a
native Windows build toolchain (it would actually be quite easy to
introduce changes which work for the MinGW cross-build but break the
native build), so I don't see how that would increase the
confidence in your change.

From my experience, native OpenJDK changes (and often even trivial
ones) should be built and tested at least on Linux and Windows (this
already exercises your changes on two different OSs with two different
compilers). Bigger shared changes should also be tested on Mac OS X
(which is quite "cheap" and gives you a third OS and compiler) and
Solaris (which is hard nowadays).


I need to think about it, but for my purposes, a pure mingw environment
running on Windows would

Re: Build OpenJDK 8 for armhf (Raspberry Pi)?

2014-05-01 Thread Jim Connors

Hello again,

Just thought I'd provide an update to the forum about my travails in 
trying to build an armhf version of OpenJDK 8...


Gave up trying to cross-compile, and instead built it natively on an ARM 
device.  The Raspberry Pi is woefully underpowered for such an endeavor, 
but I do have a quad-core Cortex-A9 device which is a lot faster.  Nowhere 
near the speed of even a modest laptop, but nonetheless much better.


After fits and starts, here's the configuration which enabled a partial 
build:


$ bash configure --with-jvm-variants=zero --disable-headful 
--with-memory-size=1024 --enable-openjdk-only


Comments:

1.  Added "--disable-headful" because there were errors building what 
looked like part of Java 2D.  As no graphics were required, a headless 
build is just fine.


2. Added "--with-memory-size=1024".  This device only has 1GB RAM, and 
without this option, configure will specify that BOOT_JAVAC be run with 
the following options: "-Xms256M -Xmx512M".  When it comes time to 
compile the 9416 files for BUILD_JDK, an OutOfMemoryError will be 
thrown.


By providing the "--with-memory-size=1024" option, configure will now 
specify that BOOT_JAVAC will be run with "-Xms400M -Xmx1100M".  The 
astute will notice that -Xmx1100M is actually larger than physical RAM, 
which is indeed correct.  This also means that swap space must be 
configured, and thrashing will take place during certain parts of the 
build.  But hey, that's what was needed.


Also note that these systems traditionally use SD cards as their sole 
filesystem.  The thought of swapping to an SD-card-based disk sounds 
excruciating, and it is.  To alleviate this somewhat, I attached a USB hard 
drive and located the build and swap space there.


3. Not sure the "--enable-openjdk-only" option is needed.

4. Issued the following command to build:

   $ time make LOG=debug JOBS=4 images

   real    42m16.850s
   user    33m42.020s
   sys     4m42.970s


Cheers,
-- Jim C

On 4/29/2014 12:34 PM, Jim Connors wrote:

Hello,

Trying to build OpenJDK 8 for armhf, ultimately to be hosted on a 
Raspberry Pi.  I'm cross-compiling from an Ubuntu 12.04 x86 VirtualBox 
image and am using gcc-4.7-linaro-rpi-gnueabihf for a toolchain.


Configuration invocation looks as follows:

$ bash configure 
--with-sys-root=/home/jimc/gcc-4.7-linaro-rpi-gnueabihf/ 
--target=arm-unknown-linux-gnueabihf --with-jvm-variants=zero 
--with-num-cores=2


The make fails like this:

Compiling /home/jimc/openjdk8/hotspot/src/os/posix/vm/os_posix.cpp
/home/jimc/openjdk8/hotspot/src/os/linux/vm/os_linux.cpp: In static 
member function 'static jint os::init_2()':
/home/jimc/openjdk8/hotspot/src/os/linux/vm/os_linux.cpp:4853:42: 
error: 'workaround_expand_exec_shield_cs_limit' was not declared in 
this scope


Might there be a set of patches required to get this going 
further?  Anything else I'm missing?


Any pointer greatly appreciated,
-- Jim C

--

*Jim Connors,* Principal Sales Consultant
Oracle Alliances & Channels | Java Embedded Global Sales Unit
Office: +1 516.809.5925
Cell: +1 516.782.5501
Email: james.conn...@oracle.com 

*Learn more about Embeddable Java:*
http://www.oracle.com/goto/javaembedded





Re: Proposal: jtreg tests with native components

2014-05-01 Thread Jonathan Gibbons

Dmitry,

Beyond Staffan's proposal for jtreg to provide tests with a pointer to
the location of precompiled native code, there are no plans to change
the way jtreg executes tests or processes action tags like @build, @run,
etc.

-- Jon

On 05/01/2014 09:12 AM, Dmitry Samersoff wrote:

Staffan,

A couple of tests require both Java and native code, so compiling it all
into a single directory might be a problem. Some tests access additional
data in TESTSRC, so we have to copy that as well.

One possible solution is to allow the tests to support two independent
steps: "build" and "run".

On the build side, run all tests with the "build" parameter; tests that
don't have a build step will just do nothing and continue to work as they
do today. Then bundle the entire test tree.

On the test side, run all tests with the "run" parameter. Tests that have
a build step have to copy all required files from the build output
directory to the work directory.

JDK configuration options could be provided through an include file for
make/shell or a property file.

This allows a test to achieve whatever build and test behavior it
requires and allows us to move in the right direction without breaking
existing things.

-Dmitry

On 2014-04-25 16:02, Staffan Larsen wrote:

There are a couple of jtreg tests today that depend on native components 
(either JNI libraries or executables). These are handled in one of two ways:

1) The binaries are pre-compiled and checked into the repository (often inside 
jar files).
2) The test will try to invoke a compiler (gcc, cl, …) when the test is being 
run.

Neither of these are very good solutions. #1 makes it hard to set up the test 
for all platforms and requires binaries in the source control system. #2 
is hit-and-miss: the correct compiler may or may not be installed on the test 
machine, and the approach requires platform specific logic to be maintained.

I would like to propose that these native components are instead compiled when 
the product is built by the same makefile logic as the product. At product 
build time we know we have access to the (correct) compilers and we have 
excellent support in the makefiles for building on all platforms.

If we build the native test components together with the product, we also have 
to take care of distributing the result together with the product when we do 
testing across a larger number of machines. We will also need a way to tell the 
jtreg tests where these pre-built binaries are located.

I suggest that at the end of a distributed build run, the pre-built test 
binaries are packaged in a zip or tar file (just like the product bits) and 
stored next to the product bundles. When we run distributed tests, we need to 
pick up the product bundle and the test bundle before the testing is started.

To tell the tests where the native code is, I would like to add a flag to jtreg 
to point out the path to the binaries. This should cause jtreg to set 
java.library.path before invoking a test and also set a test.* property which 
can be used by the test to find its native components.

This kind of setup would make it easier to add and maintain tests that have a 
native component. I think this will be especially important as more tests are 
written using jtreg in the hotspot repository.

Thoughts on this? Is the general approach ok? There are lots of details to be 
figured out, but at this stage I would like to hear feedback on the idea as 
such.

Thanks,
/Staffan
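
For concreteness, a minimal sketch of the kind of native test component
being discussed: a trivial JNI library whose location the test would pick
up via java.library.path (or the proposed, still-unnamed test.* property).
The class and function names below are illustrative only.

    #include <jni.h>

    /* Hypothetical native half of a jtreg test named TestNative.  The
     * Java side would call System.loadLibrary("TestNative"), relying on
     * jtreg having put the directory with the pre-built library on
     * java.library.path as proposed above. */
    JNIEXPORT jint JNICALL
    Java_TestNative_add(JNIEnv *env, jclass cls, jint a, jint b)
    {
        (void)env;
        (void)cls;
        return a + b;
    }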







Re: RFR: 8034094: SA agent can't compile when jni_x86.h is used

2014-05-01 Thread Erik Helin
On Wednesday 30 April 2014 23:18:40 PM Dmitry Samersoff wrote:
> Erik,
> 
> Sorry, missed the thread.

No problem, thanks for having a look!

On Wednesday 30 April 2014 23:18:40 PM Dmitry Samersoff wrote:
> Changes (option 2) look good to me.

Thanks!

Erik

On Wednesday 30 April 2014 23:18:40 PM Dmitry Samersoff wrote:
> -Dmitry
> 
> On 2014-02-10 19:21, Erik Helin wrote:
> > Sigh, I forgot the subject...
> > 
> > "RFR: 8034094: SA agent can't compile when jni_x86.h is used"
> > 
> > Thanks,
> > Erik
> > 
> > On 2014-02-10 16:08, Erik Helin wrote:
> >> Hi all,
> >> 
> >> this patch fixes an issue with HotSpot's makefiles, IMPORT_JDK and
> >> jni_md.h.
> >> 
> >> The bug manifests itself when using an IMPORT_JDK whose
> >> include/linux/jni_md.h has a timestamp older than
> >> hotspot/src/cpu/x86/jni_x86.h. When this happens, the Makefiles will
> >> copy hotspot/src/cpu/x86/jni_x86.h to
> >> hotspot/build/jdk-linux-amd64/fastdebug/include/linux/jni_md.h.
> >> 
> >> The issue is that hotspot/src/cpu/x86/jni_x86.h differs slightly from
> >> jdk/include/jni.h, since it is used for all operating systems:
> >> 
> >> #if defined(SOLARIS) || defined(LINUX) || defined(_ALLBSD_SOURCE)
> >> ... // common stuff
> >> #else
> >> ... // windows stuff
> >> #endif
> >> 
> >> We compile the SA agent, see make/linux/makefiles/saproc.make, without
> >> defining LINUX (LINUX is hotspot's own define, gcc uses __linux__).
> >> 
> >> In my opinion, there are two ways to solve this:
> >> 1. Add -DLINUX to make/linux/makefiles/saproc.make (and corresponding
> >> 
> >> defines for Solaris and BSD).
> >> 
> >> 2. Rewrite the #if check in jni_x86.h to use platform specific "native"
> >> 
> >> defines.
> >> 
> >> I've created a patch for each alternative:
> >> 1: http://cr.openjdk.java.net/~ehelin/8034094/webrev.1/
> >> 2: http://cr.openjdk.java.net/~ehelin/8034094/webrev.2/
> >> 
> >> For the second patch, note that I've inverted the #if check so that it
> >> checks whether _WIN32 is defined and treats all other operating systems as
> >> "#else".
> >> 
> >> Bug:
> >> https://bugs.openjdk.java.net/browse/JDK-8034094
> >> 
> >> Testing:
> >> - Compiled both version locally and made sure it worked
> >> - JPRT
> >> 
> >> Thanks,
> >> Erik
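
In rough terms, the inversion described for option 2 turns the check into
something like the following (a sketch of the idea, not the actual webrev):

    /* Before: enumerate the Unix-like platforms and fall through to
     * Windows in the #else branch; this only works if SOLARIS/LINUX/...
     * is defined on the compiler command line. */
    #if defined(SOLARIS) || defined(LINUX) || defined(_ALLBSD_SOURCE)
      /* common stuff */
    #else
      /* windows stuff */
    #endif

    /* After (option 2): key off the compiler-provided _WIN32 define and
     * treat every other operating system as the common case, so no
     * -DLINUX etc. is needed when building the SA agent. */
    #ifdef _WIN32
      /* windows stuff */
    #else
      /* common stuff */
    #endif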



Re: RFR: 8034094: SA agent can't compile when jni_x86.h is used

2014-05-01 Thread Erik Helin
Hi Erik,

thanks for having a look at the patches!

On Wednesday 30 April 2014 13:39:01 PM Erik Joelsson wrote:
> I think option 2 seems best.

Ok, that is my preferred option as well, thanks!

Erik

> /Erik
> 
> On 2014-04-30 13:26, Erik Helin wrote:
> > Anyone interested in this patch? I ran into this issue again yesterday...
> > 
> > Thanks,
> > Erik
> > 
> > On 2014-02-10 16:21, Erik Helin wrote:
> >> Sigh, I forgot the subject...
> >> 
> >> "RFR: 8034094: SA agent can't compile when jni_x86.h is used"
> >> 
> >> Thanks,
> >> Erik
> >> 
> >> On 2014-02-10 16:08, Erik Helin wrote:
> >>> Hi all,
> >>> 
> >>> this patch fixes an issue with HotSpot's makefiles, IMPORT_JDK and
> >>> jni_md.h.
> >>> 
> >>> The bug manifests itself when using an IMPORT_JDK whose
> >>> include/linux/jni_md.h has a timestamp older than
> >>> hotspot/src/cpu/x86/jni_x86.h. When this happens, the Makefiles will
> >>> copy hotspot/src/cpu/x86/jni_x86.h to
> >>> hotspot/build/jdk-linux-amd64/fastdebug/include/linux/jni_md.h.
> >>> 
> >>> The issue is that hotspot/src/cpu/x86/jni_x86.h differs slightly from
> >>> jdk/include/jni.h, since it is used for all operating systems:
> >>> 
> >>> #if defined(SOLARIS) || defined(LINUX) || defined(_ALLBSD_SOURCE)
> >>> ... // common stuff
> >>> #else
> >>> ... // windows stuff
> >>> #endif
> >>> 
> >>> We compile the SA agent, see make/linux/makefiles/saproc.make, without
> >>> defining LINUX (LINUX is hotspot's own define, gcc uses __linux__).
> >>> 
> >>> In my opinion, there are two ways to solve this:
> >>> 1. Add -DLINUX to make/linux/makefiles/saproc.make (and corresponding
> >>> 
> >>> defines for Solaris and BSD).
> >>> 
> >>> 2. Rewrite the #if check in jni_x86.h to use platform specific "native"
> >>> 
> >>> defines.
> >>> 
> >>> I've created a patch for each alternative:
> >>> 1: http://cr.openjdk.java.net/~ehelin/8034094/webrev.1/
> >>> 2: http://cr.openjdk.java.net/~ehelin/8034094/webrev.2/
> >>> 
> >>> For the second patch, note that I've inverted the #if check so that it
> >>> checks whether _WIN32 is defined and treats all other operating systems as
> >>> "#else".
> >>> 
> >>> Bug:
> >>> https://bugs.openjdk.java.net/browse/JDK-8034094
> >>> 
> >>> Testing:
> >>> - Compiled both version locally and made sure it worked
> >>> - JPRT
> >>> 
> >>> Thanks,
> >>> Erik
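
As background on why jni_x86.h needs a platform split at all, the
"common stuff" / "windows stuff" branches typically boil down to the
JNIEXPORT/JNIIMPORT/JNICALL macros and the 64-bit integer typedef,
roughly along these lines (from memory, not copied from the actual header):

    #ifdef _WIN32
      #define JNIEXPORT __declspec(dllexport)
      #define JNIIMPORT __declspec(dllimport)
      #define JNICALL   __stdcall
      typedef __int64 jlong;
    #else
      #define JNIEXPORT
      #define JNIIMPORT
      #define JNICALL
      typedef long long jlong;   /* LP64 platforms typically use plain long */
    #endif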



Re: Proposal: jtreg tests with native components

2014-05-01 Thread Dmitry Samersoff
Staffan,

A couple of tests require both Java and native code, so compiling it all
into a single directory might be a problem. Some tests access additional
data in TESTSRC, so we have to copy that as well.

One possible solution is to allow the tests to support two independent
steps: "build" and "run".

On the build side, run all tests with the "build" parameter; tests that
don't have a build step will just do nothing and continue to work as they
do today. Then bundle the entire test tree.

On the test side, run all tests with the "run" parameter. Tests that have
a build step have to copy all required files from the build output
directory to the work directory.

JDK configuration options could be provided through an include file for
make/shell or a property file.

This allows a test to achieve whatever build and test behavior it
requires and allows us to move in the right direction without breaking
existing things.

-Dmitry

On 2014-04-25 16:02, Staffan Larsen wrote:
> There are a couple of jtreg tests today that depend on native components 
> (either JNI libraries or executables). These are handled in one of two ways:
> 
> 1) The binaries are pre-compiled and checked into the repository (often 
> inside jar files).
> 2) The test will try to invoke a compiler (gcc, cl, …) when the test is being 
> run.
> 
> Neither of these are very good solutions. #1 makes it hard to set up the test 
> for all platforms and requires binaries in the source control 
> system. #2 is hit-and-miss: the correct compiler may or may not be installed 
> on the test machine, and the approach requires platform specific logic to be 
> maintained.
> 
> I would like to propose that these native components are instead compiled 
> when the product is built by the same makefile logic as the product. At 
> product build time we know we have access to the (correct) compilers and we 
> have excellent support in the makefiles for building on all platforms.
> 
> If we build the native test components together with the product, we also 
> have to take care of distributing the result together with the product when 
> we do testing across a larger number of machines. We will also need a way to 
> tell the jtreg tests where these pre-built binaries are located.
> 
> I suggest that at the end of a distributed build run, the pre-built test 
> binaries are packaged in a zip or tar file (just like the product bits) and 
> stored next to the product bundles. When we run distributed tests, we need to 
> pick up the product bundle and the test bundle before the testing is started.
> 
> To tell the tests where the native code is, I would like to add a flag to 
> jtreg to point out the path to the binaries. This should cause jtreg to set 
> java.library.path before invoking a test and also set a test.* property which 
> can be used by the test to find its native components.
> 
> This kind of setup would make it easier to add and maintain tests that have a 
> native component. I think this will be especially important as more tests are 
> written using jtreg in the hotspot repository.
> 
> Thoughts on this? Is the general approach ok? There are lots of details to be 
> figured out, but at this stage I would like to hear feedback on the idea as 
> such.
> 
> Thanks,
> /Staffan
> 


-- 
Dmitry Samersoff
Oracle Java development team, Saint Petersburg, Russia
* I would love to change the world, but they won't give me the sources.


Re: RFR [9] : get_source.sh should be more friendly to MQ

2014-05-01 Thread Chris Hegarty
John, Mike,

Thanks for your comments. I’ve been using rebase for a while now and it 
certainly makes resolving conflicts in patches much easier, as opposed to 
manually inspecting reject files. My workflow is as per your suggestion,

bash common/bin/hgforest.sh qpush -a
bash common/bin/hgforest.sh pull --rebase

Given that not everyone wants to operate this way, using rebase, I'm not sure 
what, if anything, can be added to get_source.sh to support this. Maybe I just 
need to omit get_source.sh from my workflow after the initial clone? 

-Chris.

On 28 Apr 2014, at 20:43, John Coomes  wrote:

> Chris Hegarty (chris.hega...@oracle.com) wrote:
>> On 11/04/14 15:59, Jonathan Gibbons wrote:
>>> Popping all patches beforehand is reasonable, but afterwards, it would
>>> be better to reset to the patches that were previously applied than to
>>> try and push all of them.
>> 
>> Michael has requested the same.
>> 
>>> What is the behavior if you cannot qpush patches after the pull, because
>>> of merge issues?
>> 
>> The parts of the specific patch that applied cleanly remain applied; 
>> those that did not are written out to reject files to be analyzed. The 
>> remainder of the patches in that repository are not applied.  get_source 
>> will then exit with an appropriate error exit code and you can take action.
> 
> [Sorry for resurrecting this thread, I just became aware of it--was not
> subscribed.]
> 
> My workflow relies heavily on mq and (IMHO, of course) reject files
> are needless tedium.  So I depend heavily on rebase.  In order to
> avoid reject files when pulling, I do:
> 
>   hg qpush -a         # push everything
> 
>   hg pull --rebase    # pull and rebase in one step, will
>                       # invoke merge tools if necessary
> 
>   hg qpop             # optional
> 
> If you omit the 'hg qpush -a' before pulling, it becomes tedious
> (sometimes impossible) to get your merge tools invoked when you later
> want to push the patches.
> 
> -John
> 
>>> On 04/11/2014 07:58 AM, Chris Hegarty wrote:
 Anyone using MQ for their daily development will know about this:
 forgetting to qpop before sync'ing up. It would be nice if get_source
 would pop and push patches (only if you are using MQ) automatically.
 If you do not have patch repos, then there is no change.
 
 diff --git a/get_source.sh b/get_source.sh
 --- a/get_source.sh
 +++ b/get_source.sh
 @@ -28,6 +28,21 @@
 # Get clones of all nested repositories
 sh ./common/bin/hgforest.sh clone "$@" || exit 1
 
 +patchdirs=`ls -d ./.hg/patches ./*/.hg/patches ./*/*/.hg/patches \
 + ./*/*/*/.hg/patches ./*/*/*/*/.hg/patches 2>/dev/null`
 +
 +# Pop all patches, if any, before updating
 +if [ "${patchdirs}"  != "" ] ; then
 +  echo "Found queue repository, qpop."
 +  sh ./common/bin/hgforest.sh qpop -a || exit 1
 +fi
 +
 # Update all existing repositories to the latest sources
 -sh ./common/bin/hgforest.sh pull -u
 +sh ./common/bin/hgforest.sh pull -u || exit 1
 
 +# Push all patches, if any, after updating
 +if [ "${patchdirs}" != "" ] ; then
 +  echo "Found queue repository, qpush."
 +  sh ./common/bin/hgforest.sh qpush -a
 +fi
 +
 
 -Chris.
>>> 
> 
> -- 
> John Coomes              Oracle, MS USCA22-3??
> john.coo...@oracle.com   4220 Network Circle
> 408-276-7048             Santa Clara, CA 95054-1778
>    *** Support GreenPeace and we'll all breathe easier. ***