[jira] [Resolved] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11997.
---
Resolution: Duplicate

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -g -Wall -O2)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64)
 There are several issues here:
 1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms, as it 
 makes it easy for developers to use non-portable Linux features without 
 realising it. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. -g -Wall -O2 turns on debugging for all builds. I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
 necessary to override CFLAGS, it should probably be done conditionally, 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64: on Solaris these flags are 
 only needed for largefile support in ILP32 applications; LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation (see the sketch after this list).
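
A minimal sketch of the conditional approach described above (illustrative only, not
the actual Hadoop patch; the project name and the build-type default are assumptions):

{code}
# Hypothetical fragment: derive flags from the build type and the target platform
# instead of forcing them unconditionally for every build.
cmake_minimum_required(VERSION 2.8)
project(hadoop-native-sketch C)

# Let CMAKE_BUILD_TYPE supply -g/-O2 rather than hard-coding them.
if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wall -D_REENTRANT")

# Only enable GNU extensions where they actually exist.
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
    add_definitions(-D_GNU_SOURCE)
endif()

# Large-file macros only matter for 32-bit (ILP32) builds.
if(CMAKE_SIZEOF_VOID_P EQUAL 4)
    add_definitions(-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64)
endif()
{code}

This keeps the platform-specific defines visible per platform and ties -g/-O2 to the
build type rather than to every build.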



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: use of HADOOP_HOME

2015-05-28 Thread Allen Wittenauer

On May 28, 2015, at 9:36 AM, Sangjin Lee sj...@apache.org wrote:

 Hi folks,
 
 I noticed this while setting up a cluster based on the current trunk. It
 appears that setting HADOOP_HOME is now done much later (in
 hadoop_finalize) than branch-2. Importantly this is set *after*
 hadoop-env.sh (or yarn-env.sh) is invoked.
 
 In our version of hadoop-env.sh, we have used $HADOOP_HOME to define some
 more variables, but it appears that we can no longer rely on the
 HADOOP_HOME value in our *-env.sh customization. Is this an intended change
 in the recent shell script refactoring? What is the right thing to use in
 hadoop-env.sh for the location of hadoop?

a) HADOOP_HOME was deprecated on Unix systems as part of (IIRC) 0.21.  
HADOOP_PREFIX was its replacement.  (No, I never understood the reasoning for 
this either.)  Past 0.21, it was never safe to rely upon HADOOP_HOME in 
*-env.sh files unless it is set prior to running the shell commands.

b) That said, functionality-wise, HADOOP_HOME is being set in pretty 
much the same place in the code flow.  *-env.sh has already been processed in 
both branch-2 and trunk by the time HADOOP_HOME is configured.  trunk only 
configures HADOOP_HOME for backward compatibility.  The rest of the code uses 
HADOOP_PREFIX as expected, and very early in the lifecycle.  

What you are likely seeing is the result of a bug fix:  trunk doesn’t 
reprocess *-env.sh files when using the sbin commands, whereas branch-2 does it 
several times over. (This is also one of the reasons why Java command line 
options are duplicated too.)  So it likely worked for you because of this 
broken behavior.

In my mind, it is a better practice to configure 
HADOOP_HOME/HADOOP_PREFIX outside of the *-env.sh files (e.g., /etc/profile.d 
on Linux) so that one can use them for PATH, etc.  That should guarantee 
expected behavior.






Re: use of HADOOP_HOME

2015-05-28 Thread Sangjin Lee
Thanks Chris and Allen for the info! Yes, we can use HADOOP_PREFIX
until/unless HADOOP-11393 is resolved.

Just to clarify, we're not setting HADOOP_HOME/HADOOP_PREFIX in our
*-env.sh; we simply use them. I don't know that it is always feasible to
set them at the machine level. Some setups may have multiple hadoop
installs and want to switch between them, and so on.

On Thu, May 28, 2015 at 10:13 AM, Allen Wittenauer a...@altiscale.com wrote:


 On May 28, 2015, at 9:36 AM, Sangjin Lee sj...@apache.org wrote:

  Hi folks,
 
  I noticed this while setting up a cluster based on the current trunk. It
  appears that setting HADOOP_HOME is now done much later (in
  hadoop_finalize) than branch-2. Importantly this is set *after*
  hadoop-env.sh (or yarn-env.sh) is invoked.
 
  In our version of hadoop-env.sh, we have used $HADOOP_HOME to define some
  more variables, but it appears that we can no longer rely on the
  HADOOP_HOME value in our *-env.sh customization. Is this an intended
 change
  in the recent shell script refactoring? What is the right thing to use in
  hadoop-env.sh for the location of hadoop?

 a) HADOOP_HOME was deprecated on Unix systems as part of (IIRC)
 0.21.  HADOOP_PREFIX was its replacement.  (No, I never understood the
 reasoning for this either.)  Past 0.21, it was never safe to rely upon
 HADOOP_HOME in *-env.sh files unless it is set prior to running the shell
 commands.

 b) That said, functionality-wise, HADOOP_HOME is being set in
 pretty much the same place in the code flow.  *-env.sh has already been
 processed in both branch-2 and trunk by the time HADOOP_HOME is
 configured.  trunk only configures HADOOP_HOME for backward compatibility.
 The rest of the code uses HADOOP_PREFIX as expected, and very early in
 the lifecycle.

 What you are likely seeing is the result of a bug fix:  trunk
 doesn't reprocess *-env.sh files when using the sbin commands, whereas
 branch-2 does it several times over. (This is also one of the reasons why
 Java command line options are duplicated too.)  So it likely worked for you
 because of this broken behavior.

 In my mind, it is a better practice to configure
 HADOOP_HOME/HADOOP_PREFIX outside of the *-env.sh files (e.g.,
 /etc/profile.d on Linux) so that one can use them for PATH, etc.  That
 should guarantee expected behavior.







[jira] [Resolved] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11975.
---
Resolution: Duplicate

 Native code needs to be built to match the 32/64 bitness of the JVM
 ---

 Key: HADOOP-11975
 URL: https://issues.apache.org/jira/browse/HADOOP-11975
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison

 When building with a 64-bit JVM on Solaris the following error occurs at the 
 link stage of building the native code:
  [exec] ld: fatal: file 
 /usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
 ELFCLASS64
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
 The compilation flags in the makefiles need to explicitly state whether 32- or 
 64-bit code is to be generated, to match the JVM.
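
One possible shape for such a change (a sketch only, assuming the compiler accepts the
-m32/-m64 options, as gcc and recent Solaris Studio compilers do, and reusing the
JVM_ARCH_DATA_MODEL value the Maven build already passes to cmake):

{code}
# Hypothetical fragment: force the native code to the same data model as the JVM.
if(NOT DEFINED JVM_ARCH_DATA_MODEL)
    set(JVM_ARCH_DATA_MODEL 64)
endif()

if(JVM_ARCH_DATA_MODEL EQUAL 32)
    set(CMAKE_C_FLAGS             "${CMAKE_C_FLAGS} -m32")
    set(CMAKE_CXX_FLAGS           "${CMAKE_CXX_FLAGS} -m32")
    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -m32")
else()
    set(CMAKE_C_FLAGS             "${CMAKE_C_FLAGS} -m64")
    set(CMAKE_CXX_FLAGS           "${CMAKE_CXX_FLAGS} -m64")
    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -m64")
endif()
{code}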



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11987) JNI build should use default cmake FindJNI.cmake

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11987.
---
Resolution: Duplicate

 JNI build should use default cmake FindJNI.cmake
 

 Key: HADOOP-11987
 URL: https://issues.apache.org/jira/browse/HADOOP-11987
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor

 From 
 http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201505.mbox/%3C55568DAC.1040303%40oracle.com%3E
 --
 Why does  hadoop-common-project/hadoop-common/src/CMakeLists.txt use 
 JNIFlags.cmake in the same directory to set things up for JNI 
 compilation rather than FindJNI.cmake, which comes as a standard cmake 
 module? The checks in JNIFlags.cmake make several assumptions that I 
 believe are only correct on Linux whereas I'd expect FindJNI.cmake to be 
 more platform-independent.
 --
 Just checked the repo of cmake and it turns out that FindJNI.cmake is
 available even before cmake 2.4. I think it makes sense to file a bug
 to replace it with the standard cmake module. Can you please file a jira
 for this?
 --
 This also applies to 
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/JNIFlags.cmake
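
For illustration, a fragment using the standard module might look like this (the
target and source names are placeholders, not from the Hadoop tree):

{code}
# Rely on CMake's bundled FindJNI module instead of the project-local JNIFlags.cmake.
find_package(JNI REQUIRED)

include_directories(${JNI_INCLUDE_DIRS})

# Placeholder library name and source file, for illustration only.
add_library(hadoop_jni_demo SHARED hadoop_jni_demo.c)
target_link_libraries(hadoop_jni_demo ${JNI_LIBRARIES})
{code}

FindJNI locates the JDK headers and the jvm/jawt libraries itself, so the hard-coded
platform assumptions in JNIFlags.cmake would no longer be needed.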



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: use of HADOOP_HOME

2015-05-28 Thread Allen Wittenauer

On May 28, 2015, at 11:29 AM, Sangjin Lee sjl...@gmail.com wrote:

 Thanks Chris and Allen for the info! Yes, we can use HADOOP_PREFIX
 until/unless HADOOP-11393 is resolved.
 
 Just to clarify, we're not setting HADOOP_HOME/HADOOP_PREFIX in our
 *-env.sh; we simply use them. I don't know that it is always feasible to
 set them at the machine level. Some setups may have multiple hadoop
 installs and want to switch between them, and so on.

Yup.  Understood.  In fact, it’s probably worth pointing out that if 
you do a tar-ball style install (e.g., all the hadoop gunk is in one dir), trunk 
will figure all these vars out based upon the hadoop/yarn/etc bin in your path. 
:)  … and HADOOP_PREFIX should be set to something by the time *-env.sh gets 
processed, so it should be safe to use there.  It’s just HADOOP_HOME that’s problematic. 
 If HADOOP-11393 gets committed, then the rules will be a bit different….

trunk’s most powerful env var is probably HADOOP_LIBEXEC_DIR, actually. 
 But I’ll leave that as an exercise for the reader as to why.


Re: use of HADOOP_HOME

2015-05-28 Thread Chris Nauroth
Hi Sangjin,

In the new scripts, HADOOP_PREFIX is set very early in execution.  This
happens inside the hadoop_bootstrap function, which executes before
hadoop_exec_hadoopenv, so I expect you can use that.

However, HADOOP-11393 proposes reverting HADOOP_PREFIX and switching
things back to HADOOP_HOME, so this might be a moving target right now.  I
haven't looked at the patch in detail yet.

https://issues.apache.org/jira/browse/HADOOP-11393


--Chris Nauroth




On 5/28/15, 9:36 AM, Sangjin Lee sj...@apache.org wrote:

Hi folks,

I noticed this while setting up a cluster based on the current trunk. It
appears that setting HADOOP_HOME is now done much later (in
hadoop_finalize) than branch-2. Importantly this is set *after*
hadoop-env.sh (or yarn-env.sh) is invoked.

In our version of hadoop-env.sh, we have used $HADOOP_HOME to define some
more variables, but it appears that we can no longer rely on the
HADOOP_HOME value in our *-env.sh customization. Is this an intended
change
in the recent shell script refactoring? What is the right thing to use in
hadoop-env.sh for the location of hadoop?

Thanks,
Sangjin



[jira] [Created] (HADOOP-12041) Get rid of current GaloisField implementation and re-implement the Java Reed-Solomon algorithm

2015-05-28 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12041:
--

 Summary: Get rid of current GaloisField implementation and 
re-implement the Java Reed-Solomon algorithm
 Key: HADOOP-12041
 URL: https://issues.apache.org/jira/browse/HADOOP-12041
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


Currently the existing Java RS coders based on the {{GaloisField}} implementation have 
some drawbacks or limitations:
* The decoder unnecessarily computes units that are not actually erased (HADOOP-11871);
* The decoder requires the inputs to the decode API to be ordered as parity units 
followed by data units (HADOOP-12040);
* The Java coders need to support or align with the native erasure coders regarding 
the concrete coding algorithms and matrices, so that Java and native coders can easily 
be swapped in/out, transparently to HDFS (HADOOP-12010);
* The implementation is unnecessarily flexible, which incurs some overhead; HDFS 
erasure coding is entirely byte-based, so symbol sizes other than 256 do not need to 
be supported.

This issue proposes re-implementing the underlying facilities for the Java RS coders, 
getting rid of the existing {{GaloisField}} code inherited from HDFS-RAID. Based on 
that work, the Java RS coders can then easily be re-implemented as well, resolving the 
related issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12039) Build native codes modularly even in a component

2015-05-28 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12039:
--

 Summary: Build native codes modularly even in a component
 Key: HADOOP-12039
 URL: https://issues.apache.org/jira/browse/HADOOP-12039
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kai Zheng
Assignee: Kai Zheng


It might be good to be able to build the native code in a modular way instead of 
coupling everything together. For example, on the Hadoop common side the build 
currently has to cover compression, encryption and the new erasure coding feature. 
This issue proposes separating those aspects into their own CMake files and make 
invocations, based on HADOOP-12036, thus avoiding the following overly long make 
target (a sketch of a modular alternative follows the quoted snippet):
{code}
<id>make</id>
<phase>compile</phase>
<goals><goal>run</goal></goals>
<configuration>
  <target>
    <exec executable="cmake" dir="${project.build.directory}/native" failonerror="true">
      <arg line="${basedir}/src/ -DGENERATED_JAVAH=${project.build.directory}/native/javah -DJVM_ARCH_DATA_MODEL=${sun.arch.data.model} -DREQUIRE_BZIP2=${require.bzip2} -DREQUIRE_SNAPPY=${require.snappy} -DCUSTOM_SNAPPY_PREFIX=${snappy.prefix} -DCUSTOM_SNAPPY_LIB=${snappy.lib} -DCUSTOM_SNAPPY_INCLUDE=${snappy.include} -DREQUIRE_OPENSSL=${require.openssl} -DCUSTOM_OPENSSL_PREFIX=${openssl.prefix} -DCUSTOM_OPENSSL_LIB=${openssl.lib} -DCUSTOM_OPENSSL_INCLUDE=${openssl.include} -DEXTRA_LIBHADOOP_RPATH=${extra.libhadoop.rpath}"/>
    </exec>
    <exec executable="make" dir="${project.build.directory}/native" failonerror="true">
      <arg line="VERBOSE=1"/>
    </exec>
    <exec executable="make" dir="${project.build.directory}/native" failonerror="true"></exec>
  </target>
</configuration>
{code}
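
One way to read "separate cmake file and make" is a thin top-level CMakeLists.txt that
just aggregates per-feature subdirectories; the directory names and options below are
hypothetical, for illustration only, not from HADOOP-12036 or any actual patch:

{code}
cmake_minimum_required(VERSION 2.8)
project(hadoop-common-native C)

option(REQUIRE_SNAPPY  "Fail if Snappy support cannot be built"  OFF)
option(REQUIRE_OPENSSL "Fail if OpenSSL support cannot be built" OFF)

# Each feature keeps its own CMakeLists.txt and can be built in isolation.
add_subdirectory(compression)   # zlib/snappy/bzip2 codecs
add_subdirectory(crypto)        # OpenSSL-backed encryption
add_subdirectory(erasurecode)   # new erasure coding feature
{code}

Each subdirectory could then own its feature-specific checks, and the exec target above
would only need to pass the options relevant to the modules actually being built.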



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-05-28 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12040:
--

 Summary: Adjust inputs order for the decode API in raw erasure 
coder
 Key: HADOOP-12040
 URL: https://issues.apache.org/jira/browse/HADOOP-12040
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


Currently we use the parity units + data units order for the inputs, erasedIndexes 
and outputs parameters of the decode call in the raw erasure coder, an order 
inherited from HDFS-RAID and enforced by {{GaloisField}}. As [~zhz] pointed out and 
[~hitliuyi] also felt, we'd better change the order to make it natural for HDFS usage, 
where data blocks usually come before parity blocks in a group. Doing this would avoid 
some tricky reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Common-trunk #1509

2015-05-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1509/changes



Build failed in Jenkins: Hadoop-common-trunk-Java8 #211

2015-05-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/211/changes

Changes:

[aajisaka] HADOOP-11242. Record the time of calling in tracing span of IPC 
server. Contributed by Masatake Iwasaki.

[wheat9] Update CHANGES.txt for HDFS-8135.

[wangda] YARN-3647. RMWebServices api's should use updated api from 
CommonNodeLabelsManager to get NodeLabel object. (Sunil G via wangda)

[wangda] MAPREDUCE-6304. Specifying node labels when submitting MR jobs. 
(Naganarasimha G R via wangda)

[cnauroth] YARN-3626. On Windows localized resources are not moved to the front 
of the classpath when they should be. Contributed by Craig Welch.

[gera] MAPREDUCE-6336. Enable v2 FileOutputCommitter by default. (Siqi Li via 
gera)

[wangda] YARN-3581. Deprecate -directlyAccessNodeLabelStore in RMAdminCLI. 
(Naganarasimha G R via wangda)

[wang] HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe 
Zhang.

[aw] HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster 
ClassNotFoundException (Darrell Taylor via aw)

[aw] YARN-2355. MAX_APP_ATTEMPTS_ENV may no longer be a useful env var for a 
container (Darrell Taylor via aw)

[aw] HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't 
have permissions to read the source (Darrell Taylor via aw)

[zjshen] YARN-3700. Made generic history service load a number of latest 
applications according to the parameter or the configuration. Contributed by 
Xuan Gong.

[cnauroth] HDFS-8431. hdfs crypto class not found in Windows. Contributed by 
Anu Engineer.

--
[...truncated 5528 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.549 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec - in 
org.apache.hadoop.security.TestGroupFallback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.449 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.371 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.672 sec - in 
org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestShellBasedIdMapping
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.59 sec - in 
org.apache.hadoop.security.TestShellBasedIdMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestDoAsEffectiveUser
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.2 sec - in 
org.apache.hadoop.security.TestDoAsEffectiveUser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.079 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.476 sec - in 
org.apache.hadoop.security.TestJNIGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.584 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.881 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.087 sec - in 

use of HADOOP_HOME

2015-05-28 Thread Sangjin Lee
Hi folks,

I noticed this while setting up a cluster based on the current trunk. It
appears that setting HADOOP_HOME is now done much later (in
hadoop_finalize) than branch-2. Importantly this is set *after*
hadoop-env.sh (or yarn-env.sh) is invoked.

In our version of hadoop-env.sh, we have used $HADOOP_HOME to define some
more variables, but it appears that we can no longer rely on the
HADOOP_HOME value in our *-env.sh customization. Is this an intended change
in the recent shell script refactoring? What is the right thing to use in
hadoop-env.sh for the location of hadoop?

Thanks,
Sangjin


[jira] [Resolved] (HADOOP-11655) Native compilation fails on Solaris due to use of getgrouplist function.

2015-05-28 Thread Malcolm Kavalsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Malcolm Kavalsky resolved HADOOP-11655.
---
  Resolution: Fixed
Release Note: Updating Solaris 11.2 to the latest SRU (17th March 2015) 
fixes this issue.

 Native compilation fails on Solaris due to use of getgrouplist function.
 

 Key: HADOOP-11655
 URL: https://issues.apache.org/jira/browse/HADOOP-11655
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Malcolm Kavalsky
 Original Estimate: 168h
 Remaining Estimate: 168h

 getgrouplist() does not exist in Solaris, thus preventing compilation of the 
 native libraries. 
 The easiest solution would be to port this function from Linux or FreeBSD to 
 Solaris and add it to the library if compiling for Solaris.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)