[CentOS-docs] Installing Java on Centos
Hello,

I have recently tried installing Java on CentOS 6.4 and have found that the wiki tutorial on this subject is quite outdated: the proposed method doesn't work on the new version, as the spec file is outdated and there is no longer any need to rebuild the RPM from the nosrc file. I would like to post a more up-to-date tutorial on installing Java on CentOS that refers to the newer available versions of both CentOS and Java.

Regards,
Olga

___
CentOS-docs mailing list
CentOS-docs@centos.org
http://lists.centos.org/mailman/listinfo/centos-docs
Re: [CentOS-docs] Installing Java on Centos
On 07/04/2013 02:37 PM, Olga Maciaszek-Sharma wrote:
> I have recently tried installing Java on CentOS 6.4 and have found that
> the wiki tutorial on this subject is quite outdated (the proposed method
> doesn't work on the new version as the spec file is outdated and there
> is no need to rebuild the rpm with the nosrc file anymore). I would like
> to post a tutorial on installing Java on CentOS that would be more
> up-to-date, referring to the newer available versions of both CentOS
> and Java.

Hello,

Please use http://wiki.centos.org/HowTos/JavaRuntimeEnvironment. As far as I know it is correct and applies to all current CentOS and Oracle Java versions.

manuel
[CentOS-announce] CESA-2013:1014 Important CentOS 5 java-1.6.0-openjdk Update
CentOS Errata and Security Advisory 2013:1014 Important

Upstream details at: https://rhn.redhat.com/errata/RHSA-2013-1014.html

The following updated files have been uploaded and are currently syncing to the mirrors: ( sha256sum Filename )

i386:
2c38bf51cef2befcf717f8f486ae37867a5bd29ee42908ddf4d5d3b55436f2d3 java-1.6.0-openjdk-1.6.0.0-1.41.1.11.11.90.el5_9.i386.rpm
861d7c8fa3ff46b78a64187b45609921e49bec920fa00614fa52533b36db15ed java-1.6.0-openjdk-demo-1.6.0.0-1.41.1.11.11.90.el5_9.i386.rpm
992c01478b0662346ccb9561e46d5dadf281f9c523eb79112f9e2e5edc80cbae java-1.6.0-openjdk-devel-1.6.0.0-1.41.1.11.11.90.el5_9.i386.rpm
145d3a5749d448b83077533e949591e6f732b108eb948769510cc51d11ed java-1.6.0-openjdk-javadoc-1.6.0.0-1.41.1.11.11.90.el5_9.i386.rpm
0a60d36f1a3ed0ad11c5d3a18890b079ed54ea3d9e1d757f78a0973b74120d61 java-1.6.0-openjdk-src-1.6.0.0-1.41.1.11.11.90.el5_9.i386.rpm

x86_64:
ca6b524ba111aaf9481e070a1185376dbde8d3c6a9a166b0a4af4a6f36619224 java-1.6.0-openjdk-1.6.0.0-1.41.1.11.11.90.el5_9.x86_64.rpm
26e9b66d4794be564bfe4e3c17d85cc244b22b2b1a923cb35e6ba245ff72c022 java-1.6.0-openjdk-demo-1.6.0.0-1.41.1.11.11.90.el5_9.x86_64.rpm
1b4c55b1209f5f3af0bb58b249e0a96eba4ae2fb576c38d8fc5f086915d4af2e java-1.6.0-openjdk-devel-1.6.0.0-1.41.1.11.11.90.el5_9.x86_64.rpm
ef1484a976bd9f8f7db3a5943d19e12da3ea2497ca3e449c2a501ed640fdd5bc java-1.6.0-openjdk-javadoc-1.6.0.0-1.41.1.11.11.90.el5_9.x86_64.rpm
0b3a822767944484bd5393e0ac6899f04efcfabf0aed83ae971141c4b547cf56 java-1.6.0-openjdk-src-1.6.0.0-1.41.1.11.11.90.el5_9.x86_64.rpm

Source:
f9c9ff547c9c99d5bd3a7971d42c71c980392a4102274da09d02f8af2197fe62 java-1.6.0-openjdk-1.6.0.0-1.41.1.11.11.90.el5_9.src.rpm

--
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net

___
CentOS-announce mailing list
CentOS-announce@centos.org
http://lists.centos.org/mailman/listinfo/centos-announce
[CentOS-announce] CESA-2013:1014 Important CentOS 6 java-1.6.0-openjdk Update
CentOS Errata and Security Advisory 2013:1014 Important

Upstream details at: https://rhn.redhat.com/errata/RHSA-2013-1014.html

The following updated files have been uploaded and are currently syncing to the mirrors: ( sha256sum Filename )

i386:
ec3443d3637c5e1ee3d8ae68cb38ed1083611664bd5156410c8cba63f168c0dd java-1.6.0-openjdk-1.6.0.0-1.62.1.11.11.90.el6_4.i686.rpm
cfa262c2b479163919f273ecc9036b79a2d2ff4c45d47ea27c0f22084cc7c944 java-1.6.0-openjdk-demo-1.6.0.0-1.62.1.11.11.90.el6_4.i686.rpm
605680de20b8c9f3950cf0352f225bcd9f48bfde19b04610776da09564891712 java-1.6.0-openjdk-devel-1.6.0.0-1.62.1.11.11.90.el6_4.i686.rpm
f16c9b407fc942fa3df87583c295d1ec9f04e2e866f818f3cdae73408c829643 java-1.6.0-openjdk-javadoc-1.6.0.0-1.62.1.11.11.90.el6_4.i686.rpm
5970ed15745aec0470ebc34577526d484851fc3cf7b7366003a82dd87d98d203 java-1.6.0-openjdk-src-1.6.0.0-1.62.1.11.11.90.el6_4.i686.rpm

x86_64:
ee81d5fd1b9e094bdc130a04d1afdc190a90dc0420b6fa7121007f7057d69025 java-1.6.0-openjdk-1.6.0.0-1.62.1.11.11.90.el6_4.x86_64.rpm
88a92e6407c23d92db49d56f4741c1107aa96823378df109d1aacee058ffcd6f java-1.6.0-openjdk-demo-1.6.0.0-1.62.1.11.11.90.el6_4.x86_64.rpm
ccb6b261f4505ad5c4dd72a9f28aea83b50bc3faabe0dd2211c14f9ace725d00 java-1.6.0-openjdk-devel-1.6.0.0-1.62.1.11.11.90.el6_4.x86_64.rpm
f10a277d98c43b59b82a86733d0eae4604fd42205363280c0e872e6c4a95f0b1 java-1.6.0-openjdk-javadoc-1.6.0.0-1.62.1.11.11.90.el6_4.x86_64.rpm
9ade89dc1961212936ee97ce89d545030b8f1dfcf06ced1aa7a14a3259606c97 java-1.6.0-openjdk-src-1.6.0.0-1.62.1.11.11.90.el6_4.x86_64.rpm

Source:
0398219e27505b5510182c5fdb33b24b23c7dd7e25f3073dfd7c4cafa63add6d java-1.6.0-openjdk-1.6.0.0-1.62.1.11.11.90.el6_4.src.rpm

--
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
[CentOS-announce] CEBA-2013:1015 CentOS 6 tog-pegasus Update
CentOS Errata and Bugfix Advisory 2013:1015

Upstream details at: https://rhn.redhat.com/errata/RHBA-2013-1015.html

The following updated files have been uploaded and are currently syncing to the mirrors: ( sha256sum Filename )

i386:
49567c7e1d870681ef6cbfccce96d77c577d19703969bc97c147e8f4f2613f96 tog-pegasus-2.12.0-3.el6_4.i686.rpm
13b4c11cce537e7f15b02d0f712ae64c476ff16a98aa12dc38c9481c9118348d tog-pegasus-devel-2.12.0-3.el6_4.i686.rpm
91d9ad1ff129c9da17b902db9580653aafc5f61fe07b313caf5bb5a43837925a tog-pegasus-libs-2.12.0-3.el6_4.i686.rpm

x86_64:
65f919f7cc19be15db5a993774c966d3592b770ed4eba18c7fc1bcff1d6afde0 tog-pegasus-2.12.0-3.el6_4.x86_64.rpm
538ee40984f86a8ab5aad5820cd52914b8986e2cccb21b8ed98688da33a39240 tog-pegasus-devel-2.12.0-3.el6_4.x86_64.rpm
91d9ad1ff129c9da17b902db9580653aafc5f61fe07b313caf5bb5a43837925a tog-pegasus-libs-2.12.0-3.el6_4.i686.rpm
389082fe0c2c16ac4878cc23481b3eba1df734cabecda1d093ef643679bb4116 tog-pegasus-libs-2.12.0-3.el6_4.x86_64.rpm

Source:
3a437ee55df1f9e485f0b6065d4bfa40f403e3fa071b98ef32b0501c5b1df9c0 tog-pegasus-2.12.0-3.el6_4.src.rpm

--
Johnny Hughes
CentOS Project { http://www.centos.org/ }
irc: hughesjr, #cen...@irc.freenode.net
Re: [CentOS-virt] KVM virtual machine and SAN storage with FC
On Thu, Jul 4, 2013 at 12:44 AM, denis bahati djbah...@yahoo.co.uk wrote:

> Hi Brett, my plan is as follows: I have two machines (servers) that will
> each host two VMs, one for the database and one for the application. The
> two machines will then provide load balancing and high availability. My
> intention is that all application files and data files for the database
> should reside on the SAN storage for easy access and update.

Don't... do this. Two database clients writing to the same database filesystem back end, simultaneously, is an enormous source of excited-sounding flow charts and proposals which simply do not work and are very, very likely to corrupt your database beyond recovery. These problems have been examined for *decades*, with shared home directories and saved email, and for high-performance or clustered databases that need to avoid split-brain skew: It Does Not Work. Set up a proper database *cluster* with distinct back ends.

> Therefore the storage should be accessible to both VMs through mounting
> the SAN storage to the VMs. The connection between the SAN storage and
> the servers is through Fibre Channel.

Survey says *bzzzt*. See above for databases. For shared storage, you should really be using some sort of network-based access to a filesystem back end. NetApp and EMC spend *billions* in research building high-availability shared storage, and even they don't pull stunts like this, the last I looked. I can vaguely imagine one of the hosts doing write access and the other having read-only access. But really, most databases today support good clustering configurations that avoid precisely these issues.

> I have seen somewhere talk about DM-Multipath but I don't know if it can
> help, or whether the use of VT-d can help. I would also appreciate it if
> you could provide some links to give me insight into how to do this.

Multipath does not mean multiple clients of the same hardware storage. That's effectively like letting two kernels write to the same actual disk at the same time, and it's quite dangerous. Now, if you want each client to access its own Fibre Channel disk resource, that should be workable. Even if you have to mount the Fibre Channel resources on the KVM host and make disk images for the KVM guests, that should at least get you a testable resource. But the normal approach is to have a Fibre Channel storage server that makes disk images available via NFS, so that the guest VMs can be migrated from one server to another with the shared storage more safely.

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt
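[Editorially, the NFS-backed layout described above maps directly onto libvirt's storage pools. A minimal sketch, assuming a hypothetical NFS server nfs.example.com exporting /exports/vmimages (names and paths are examples, not from this thread; run as root on each KVM host):

```
# Define an NFS-backed ("netfs") storage pool for guest disk images:
virsh pool-define-as vmimages netfs \
    --source-host nfs.example.com \
    --source-path /exports/vmimages \
    --target /var/lib/libvirt/images/vmimages
virsh pool-start vmimages
virsh pool-autostart vmimages
```

With every host mounting the same pool, a guest can be live-migrated between hosts without copying its disk image.]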
[CentOS-virt] KVM virtual machine and SAN storage with FC
Hi Team,

Thanks for the good explanation. If that is not workable for the database, can anyone recommend a setup of the database clients and data files that achieves HA and load balancing? How should I set up my VMs and machines (two machines with two VMs each)? I would appreciate a workable, practical approach for HA/load balancing.

Regards

From: Nico Kadel-Garcia nka...@gmail.com
To: denis bahati djbah...@yahoo.co.uk; Discussion about the virtualization on CentOS centos-virt@centos.org
Cc: br...@worth.id.au
Sent: Thursday, 4 July 2013, 18:32
Subject: Re: [CentOS-virt] KVM virtual machine and SAN storage with FC
Re: [CentOS] odd inconsistency with nfs
> If anyone has ideas and/or needs more info, please let me know.

Step 1 in debugging and troubleshooting: use the KISS principle. Right now you have NIS, NFS, a CentOS server and a Solaris client (version? Given the Red Hat 7.3 instance you had, would a safe assumption be not 11, or even 10?). Cut this down to work out which cog in the wheel is broken, as there are just too many variables.

Can you mount the NFS share locally to another directory on the NFS server? If that works, can you mount it with a CentOS 6.4 client on another system?

It's been a while since I had to deal with NFS, but I see you have symlinks in exports... I thought, at least under NFSv3, that was definitely not supported and you should use bind mounts: http://mail-index.netbsd.org/tech-kern/1995/05/28/.html

Start there and see how you go...

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
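[A sketch of the isolation steps suggested above, using the paths from the original post; illustrative rather than copy-paste, since it must run as root on the NFS server:

```
# Export a real directory instead of a symlink: replace the symlinked
# export with a bind mount of the underlying directory.
mkdir -p /exports/scrs1_bolt
mount --bind /aux/scrs1_bolt /exports/scrs1_bolt

# Loop-mount the share on the server itself to take the network and the
# Solaris client out of the picture:
mkdir -p /mnt/selftest
mount -t nfs -o nfsvers=3 localhost:/exports/scrs1_bolt /mnt/selftest
ls -l /mnt/selftest/*
```

If the loopback mount misbehaves the same way, the problem is on the server side; if not, test again from a CentOS 6.4 client before blaming the Solaris boxes.]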
Re: [CentOS] odd inconsistency with nfs
- Original Message -
| I'm having an interesting/odd problem with nfs (I think). We recently
| (Monday/Tuesday) upgraded our file server from an ancient redhat 7.3
| system to a shiny new centos 6.4 system. We don't see any issues between
| the other centos boxes, but things get a bit weird when we start
| mounting on the old solaris clients.
|
| The initial symptom was that the 'tab complete' wasn't working, and then
| we noticed that typing 'ls *' in the mounted directory was bombing. I
| tried forcing the mounting back to nfs3 but it's not consistent. I've
| set up two boxes as servers and one of the solaris boxes is my client.
| Each server has two shares that are mounted on the client. Of those
| four, one of them works properly and the other three do not. I've spent
| most of the day trying to debug this and I cannot for the life of me
| tell why one share works and the rest don't. Nothing seems to be special
| about that share versus the rest.
|
| Here are notes on how things are set up.
|
| on duke: (nis server)
|   vi /etc/ypfiles/automap
|     scrs1_bolt -soft,intr,retrans=1 boltzmann:/scrs1_bolt
|     summit_bolt -soft,intr,retrans=1 boltzmann:/summit_bolt
|     scrs1.mirror -soft,intr,retrans=1 goblin:/scrs1.mirror
|     summit.mirror -soft,intr,retrans=1 goblin:/summit.mirror
|   ( cd /var/yp ; make )
|
| on boltzmann: (nfs server)
|   df -h
|     Filesystem   Size  Used  Avail  Use%  Mounted on
|     /dev/sdb2    50G   13G   37G    26%   /
|     tmpfs        3.9G  1.2M  3.9G   1%    /dev/shm
|     /dev/sdb3    177G  188M  175G   1%    /aux
|     /dev/sda3    208G  44G   164G   21%   /aux2
|   mkdir /aux/scrs1_bolt
|   mkdir /aux2/summit_bolt
|   ln -s /aux/scrs1_bolt /scrs1_bolt
|   ln -s /aux2/summit_bolt /summit_bolt
|   chmod 777 /aux/scrs1_bolt /aux2/summit_bolt
|   service nfs restart
|   vi /etc/exports
|     /scrs1_bolt xxx.xxx.xxx.0/24(rw,no_root_squash,sync,insecure)
|     /summit_bolt xxx.xxx.xxx.0/24(rw,no_root_squash,sync,insecure)
|   exportfs -rv
|
| on bigdog: (client)
|   mkdir /tmp/test/b1 /tmp/test/b2 /tmp/test/g1 /tmp/test/g2
|   touch /tmp/test/b1/nothing_is_mounted /tmp/test/b2/nothing_is_mounted /tmp/test/g1/nothing_is_mounted /tmp/test/g2/nothing_is_mounted
|   mount -F nfs -o nfsvers=3 boltzmann:/summit_bolt /tmp/test/b1
|   mount -F nfs -o nfsvers=3 boltzmann:/scrs1_bolt /tmp/test/b2
|   mount -F nfs -o nfsvers=3 goblin:/summit.mirror /tmp/test/g1
|   mount -F nfs -o nfsvers=3 goblin:/scrs1.mirror /tmp/test/g2
|   ls -l /tmp/test/*
|     -rw-r--r-- 1 root other 0 Jul 3 14:39 /tmp/test/nothing_is_mounted
|
|     /tmp/test/b1:
|     total 24
|     -rw-r--r-- 1 root other 0 Jul 3 12:32 SUMMIT_BOLT
|     -rw-r--r-- 1 root other 0 Jul 3 09:26 boltzmann_test_summit
|
|     /tmp/test/b2:
|     total 32
|     -rw-r--r-- 1 root other 0 Jul 3 12:31 SCRS1_BOLT
|     -rw-r--r-- 1 root other 0 Jul 3 09:26 boltzmann_test_scrs1
|
|     /tmp/test/g1:
|     total 280
|     -rw-r--r-- 1 root other 0 Jul 3 15:40 .00_summit_nas_volume
|     -rw-rw-r-- 1 root other 0 Jul 3 15:03 SUMMIT_MIRROR
|
|     /tmp/test/g2:
|     total 120
|     -rw-r--r-- 1 root other 0 Jul 3 15:40 .00_scrs1_nas_volume
|     -rw-rw-r-- 1 root other 0 Jul 3 15:02 SCRS1_MIRROR
|
|   ls -la /tmp/test/b1/*
|     -rw-r--r-- 1 root other 0 Jul 3 12:32 /tmp/test/b1/SUMMIT_BOLT
|     -rw-r--r-- 1 root other 0 Jul 3 09:26 /tmp/test/b1/boltzmann_test_summit
|
|   ls -l /tmp/test/b2/*
|     ls: No match.
|   ls -l /tmp/test/g1/*
|     ls: No match.
|   ls -l /tmp/test/g2/*
|     ls: No match.
|
|   mount
|     /tmp/test/b1 on boltzmann:/summit_bolt read/write/remote on Wed Jul 3 15:41:11 2013
|     /tmp/test/b2 on boltzmann:/scrs1_bolt read/write/remote on Wed Jul 3 15:41:11 2013
|     /tmp/test/g1 on goblin:/summit.mirror read/write/remote on Wed Jul 3 15:41:11 2013
|     /tmp/test/g2 on goblin:/scrs1.mirror read/write/remote on Wed Jul 3 15:41:11 2013
|
|   umount -a /tmp/test/b1 /tmp/test/b2 /tmp/test/g1 /tmp/test/g2
|
| If I use automount to access the shares on the client, the mounts are
| made by default with nfs4 and we see this same 'no match' behaviour.
| It's probably something really stupid but I'm just not seeing it...
|
| If anyone has ideas and/or needs more info, please let me know.
|
| --
[CentOS] Java/Solr - Could not reserve enough space for object heap.
Hi All.

# cat /etc/redhat-release
CentOS release 6.2 (Final)
# uname -r
2.6.32-220.17.1.el6.centos.plus.x86_64
# rpm -qa | grep solr
apache-solr-3.5.0-1.5...

I have a solr installation which is invoked as:

/usr/bin/java -Xms25g -Xmx25g -DSTOP.PORT=8079 -DSTOP.KEY=mustard -Dsolr.solr.home=multicore -jar start.jar

After start, when the java process is running:

# free -m
             total       used       free     shared    buffers     cached
Mem:         32093      23975       8118          0        189       5736
-/+ buffers/cache:      18049      14043
Swap:         4095         22       4073

So the machine has 32GB of RAM, and the java process needs 25GB to start. When I do a restart, the java process dies, and in the log:

Jul 4 08:17:27 test.local solr: Error occurred during initialization of VM
Jul 4 08:17:27 test.local solr: Could not reserve enough space for object heap
Jul 4 08:17:27 test.local solr: [FAILED]

Then a second restart is OK; the process starts and solr is responding. Have you had such problems? As I understand it, during stop the JVM gives back the memory to the operating system and then during start it requests 25GB again (can there be a lag in this process?). No other services are running on this machine.

Best regards,
Rafal.
Re: [CentOS] Java/Solr - Could not reserve enough space for object heap.
On Thu, Jul 04, 2013 at 09:54:49AM +0200, Rafał Radecki wrote:
> # cat /etc/redhat-release
> CentOS release 6.2 (Final)

you should upgrade to 6.4...

> I have a solr installation which is invoked:
> /usr/bin/java -Xms25g -Xmx25g -DSTOP.PORT=8079 -DSTOP.KEY=mustard
> -Dsolr.solr.home=multicore -jar start.jar
> So the machine has 32GB of RAM, and java process needs 25GB to start.
> When I make a restart the java process dies and in log:

how do you restart? are you sure that your java is stopped before starting it again? 25G x 2 > 32 (RAM) + 4 (swap) until the 1st java instance is actually stopped.

Tru
--
Tru Huynh (mirrors, CentOS i386/x86_64 Package Maintenance)
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xBEFA581B
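[Tru's arithmetic, two overlapping 25 GB heaps exceeding 32 GB RAM + 4 GB swap, can be guarded against before launching the JVM. A minimal sketch: the ~20% overhead factor and the 25600 MB figure are illustrative assumptions, not from the thread:

```shell
#!/bin/sh
# Refuse to launch the JVM unless the requested heap (plus ~20% overhead
# for the JVM itself) fits in currently reclaimable memory.
heap_fits() {
    # $1 = requested heap in MB, $2 = available memory in MB
    needed=$(( $1 + $1 / 5 ))
    [ "$2" -ge "$needed" ]
}

# free + buffers + page cache, in MB, from /proc/meminfo
avail_mb=$(awk '/^MemFree|^Buffers|^Cached/ {sum += $2} END {print int(sum/1024)}' /proc/meminfo)

if heap_fits 25600 "$avail_mb"; then
    echo "ok to start: ${avail_mb}MB available"
else
    echo "refusing to start: only ${avail_mb}MB available"
fi
```

Run before the daemon line in the init script, this turns the "second restart works" mystery into an explicit "old JVM has not released its memory yet" message.]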
[CentOS] This isn't supposed to be difficult (how to nntp post to the Gmane Pan user group)
I realize this is (mostly) off topic, but I'm befuddled as to *how* one can post to the Gmane Pan Users' group (gmane.comp.gnome.apps.pan.user) using any nntp USENET client (e.g., Pan, on CentOS).

I'm already subscribed (by having sent an email to pan-us...@nongnu.org); but I just want that group to work like *this* one, where I can post using a server:port login:password combination such as we use here:

Server: news.gmane.org
Port: 119
Login: blank
Password: blank
User: Rock

I've looked here (http://dir.gmane.org/gmane.comp.gnome.apps.pan.user) and if the answer is there, I don't see it (maybe I missed it?). My basic question is so simple I'm shocked I'm having to ask it (of the wrong group, even), which is the following:

Q: How on earth is one supposed to post to the Gmane Pan users' group using an nntp client (which requires a server name, port, and login/password)?
Re: [CentOS] Java/Solr - Could not reserve enough space for object heap.
Why 25G x 2? -Xms is the minimum, -Xmx the maximum?

2013/7/4 Tru Huynh t...@centos.org:
> how do you restart? are you sure that your java is stopped before
> starting it again? 25G x 2 > 32 (RAM) + 4 (swap) until the 1st java
> instance is actually stopped.
Re: [CentOS] Java/Solr - Could not reserve enough space for object heap.
stop/start; I use restart, which is stop and then start:

start () {
    echo -n $"Starting $prog: "
    if [ -e /var/lock/subsys/solr ]; then
        echo -n $"cannot start solr: solr is already running."
        failure $"cannot start solr: solr already running."
        echo
        return 1
    fi
    cd $SOLR_DIR
    daemon "$JAVA $JAVA_OPTIONS 2>&1 | /usr/bin/logger -t 'solr' -p info --"
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/solr
    return $RETVAL
}

stop () {
    echo -n $"Stopping $prog: "
    if [ ! -e /var/lock/subsys/solr ]; then
        echo -n $"cannot stop solr: solr is not running."
        failure $"cannot stop solr: solr is not running."
        echo
        return 1
    fi
    cd $SOLR_DIR
    $JAVA $JAVA_OPTIONS_STOP --stop
    RETVAL=$?
    sleep 2
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/solr
    return $RETVAL
}

2013/7/4 Rafał Radecki radecki.ra...@gmail.com:
> Why 25G x2 - -Xms minimal, -Xmx maximal?
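[One likely culprit in the script above: stop returns after a fixed sleep 2, while a 25 GB JVM can take much longer to exit, so restart briefly runs two JVMs at once. A minimal sketch of waiting for the old process to actually exit; it assumes the JVM's pid can be obtained, e.g. from a pidfile, and the function name is my own:

```shell
#!/bin/sh
# Block until a process is gone, with a timeout, so restart never
# overlaps the old and new JVM heaps.
wait_for_exit() {
    # $1 = pid, $2 = timeout in seconds; returns 1 on timeout
    i=0
    while kill -0 "$1" 2>/dev/null; do
        i=$((i + 1))
        [ "$i" -ge "$2" ] && return 1
        sleep 1
    done
    return 0
}
```

Calling something like wait_for_exit "$SOLR_PID" 60 at the end of stop, instead of the fixed sleep 2, would make the second restart attempt unnecessary.]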
Re: [CentOS] odd inconsistency with nfs
Am 04.07.2013 um 04:22 schrieb Miranda Hawarden-Ogata hawar...@ifa.hawaii.edu:
> I'm having an interesting/odd problem with nfs (I think). We recently
> (Monday/Tuesday) upgraded our file server from an ancient Red Hat 7.3
> system to a shiny new CentOS 6.4 system. We don't see any issues between
> the other CentOS boxes, but things get a bit weird when we start
> mounting on the old Solaris clients.

Just some hints (even in case you know them all :-)):

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-nfs.html

cat /etc/sysconfig/nfs
tcp-wrappers (/etc/hosts.{allow,deny})?
iptables (iptables -L -n)?
rpcinfo -p
rpcinfo -p nfs-server

--
LF
Re: [CentOS] This isn't supposed to be difficult (how to nntp post to the Gmane Pan user group)
Am 04.07.2013 um 10:34 schrieb Rock rocksock...@gmail.com:
> Q: How on earth is one supposed to post to the Gmane Pan users using an
> nntp client (which requires a server name and port login/password)?

Why not ask them? http://gmane.org/faq.php :-)

--
LF
[CentOS] CentOS-announce Digest, Vol 101, Issue 3
Send CentOS-announce mailing list submissions to centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit http://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-requ...@centos.org

You can reach the person managing the list at centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of CentOS-announce digest..."

Today's Topics:

   1. CESA-2013:1014 Important CentOS 5 java-1.6.0-openjdk Update (Johnny Hughes)
   2. CESA-2013:1014 Important CentOS 6 java-1.6.0-openjdk Update (Johnny Hughes)

--
Message: 1
Date: Thu, 4 Jul 2013 10:07:44 +0000
From: Johnny Hughes joh...@centos.org
Subject: [CentOS-announce] CESA-2013:1014 Important CentOS 5 java-1.6.0-openjdk Update
To: centos-annou...@centos.org
Message-ID: 20130704100744.ga21...@chakra.karan.org
Content-Type: text/plain; charset=us-ascii

--
Message: 2
Date: Thu, 4 Jul 2013 10:17:25 +0000
From: Johnny Hughes joh...@centos.org
Subject: [CentOS-announce] CESA-2013:1014 Important CentOS 6 java-1.6.0-openjdk Update
To: centos-annou...@centos.org
Message-ID: 20130704101725.ga30...@n04.lon1.karan.org
Content-Type: text/plain; charset=us-ascii
[CentOS] Server dies after kernel upgrade
I am running a server with CentOS release 6.4 (Final) and kernel version 2.6.32-279.19.1.el6.x86_64, and everything looks OK. But when I do a yum update of the kernel to the newer version 2.6.32-358.11.1.el6, it will not restart after the required reboot. It starts to load until the progress bar at the bottom gets to the end, then it stops loading. I have been patient with it in case it requires more time to start, but after 10 minutes it was still just sitting at the progress bar. This is a VM, so I just reverted to a previous snapshot, but what should I be looking for that would cause this, and how should I fix it?

Thanks,
Chris
Re: [CentOS] Server dies after kernel upgrade
I am running a server with CentOS release 6.4 (Final) and kernel version 2.6.32-279.19.1.el6.x86_64, and everything looks OK, but when I do a yum update of the kernel to the newer version 2.6.32-358.11.1.el6, it will not restart after the required reboot. It starts to load until the progress bar at the bottom reaches the end, then it stops loading. I have been patient with it in case it needed more time to start, but after 10 minutes it was still just sitting at the progress bar. This is a VM, so I just reverted to a previous snapshot, but what should I be looking for that would cause this, and how should I fix it? Thanks, Chris

I recall this happening to me in the past. Hit Esc when the loading bar starts to run across the screen and see what it stops on. Go into single-user mode and see if you can look at the logs. Can you select the older kernel when the GRUB menu pops up? You may have to use chkconfig to disable services from starting once you see what is in the way.
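A rough sketch of those recovery steps, assuming a CentOS 6 SysV-init system. The service name `NetworkManager` below is only a placeholder for whatever service turns out to be hanging the boot:

```shell
# From the GRUB menu, boot the older kernel, or append the word "single"
# to the kernel line to get single-user mode, then inspect the boot logs:
less /var/log/messages
dmesg | less

# Once a culprit service is identified, keep it from starting at boot:
chkconfig --list NetworkManager   # show its per-runlevel settings
chkconfig NetworkManager off      # disable it for the next boot
```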
[CentOS] sda and sdb reverse order with an external USB drive
Hello I am using 64-bit CentOS 6.4 on an i7 laptop with one SATA drive and a CD drive. I installed CentOS by manually partitioning sda as: sda1 as /boot, sda2 as swap, sda3 as /. The booted system works great. When I insert an external USB drive, formatted as ext3, the hard drive on the laptop and the USB drive come up as either sda or sdb, depending upon the order in which I insert the USB drive and boot the system. Please see the two mount commands below for each of these situations. This works in either order, except that I don't want my USB drive to automount. What I want is that after I insert the USB drive in a running system and wait 15 seconds, I can mount it with the command # mount /mnt. To accomplish this I added a line to /etc/fstab, but it didn't work. When I uncomment the last line in fstab (see below) the computer hangs and doesn't boot. I was successful with this strategy on a similar laptop with Fedora 18, but not on my current one. Thank you, Joe Hesse

The following mount command was issued by first completely booting CentOS and then inserting the external USB drive. Note that sda3 is / and sda1 is /boot.

[root@XoticPC ~]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sdb1 on /media/GoFlex type ext3 (rw,nosuid,nodev,uhelper=udisks)

The following mount command was issued by inserting the external USB drive in a powered-down computer and then booting. Note that sdb3 is / and sdb1 is /boot.
[root@XoticPC ~]# mount
/dev/sdb3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sdb1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sda1 on /media/GoFlex type ext3 (rw,nosuid,nodev,uhelper=udisks)

The /etc/fstab file was generated by the install process. The commented line at the bottom was added by me in an unsuccessful attempt to be able to insert the USB drive in a booted computer, not have it mount, and then control the mounting with # mount /mnt. The last UUID is the UUID of sdb1, determined with the command # blkid /dev/sdb1.

# /etc/fstab
UUID=1d7606b7-46b8-4b29-9a4e-a50a1f6a1759 /        ext4   defaults       1 1
UUID=e0fdfeb1-e7a7-4a06-b5fa-7730c3c2e60d /boot    ext4   defaults       1 2
UUID=d0e3c2ee-7c66-4d13-b387-1da958020b1a swap     swap   defaults       0 0
tmpfs                                     /dev/shm tmpfs  defaults       0 0
devpts                                    /dev/pts devpts gid=5,mode=620 0 0
sysfs                                     /sys     sysfs  defaults       0 0
proc                                      /proc    proc   defaults       0 0
#UUID=3b550884-8d05-41a5-a205-17b6d7269dd1 /mnt ext3 rw,suid,dev,exec,noauto,nouser,async 0 2
Re: [CentOS] Server dies after kernel upgrade
On 2013-07-04, Chris Taylor chris.tay...@corp.eastlink.ca wrote: I am running a server with CentOS release 6.4 (Final) and kernel version 2.6.32-279.19.1.el6.x86_64, and everything looks OK, but when I do a yum update of the kernel to the newer version 2.6.32-358.11.1.el6. As Lubomir pointed out yesterday, the latest kernel is vulnerable to a DoS attack, so you should probably use a different one. See Johnny's message for details, and an untested kernel that you could use instead: http://lists.centos.org/pipermail/centos/2013-July/135671.html It will not restart after the required reboot. It starts to load until the progress bar at the bottom reaches the end, then it stops loading. I have been patient with it in case it needed more time to start, but after 10 minutes it was still just sitting at the progress bar. You can either try Ctrl-D or Esc during the splash-screen process (I don't remember exactly which one), or you can edit the GRUB command line when it comes up at boot and remove the rhgb and quiet options from the kernel options. --keith -- kkel...@wombat.san-francisco.ca.us
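For reference, removing rhgb and quiet persistently means editing the kernel line in /boot/grub/grub.conf; a sketch of the change (the kernel version and root device shown here are illustrative examples, not taken from any particular system):

```
# /boot/grub/grub.conf -- before (graphical, silent boot):
kernel /vmlinuz-2.6.32-358.11.1.el6.x86_64 ro root=/dev/sda3 rhgb quiet

# after (verbose boot, kernel and init messages shown on the console):
kernel /vmlinuz-2.6.32-358.11.1.el6.x86_64 ro root=/dev/sda3
```

The same edit can be made one-off at the GRUB menu by pressing 'e' on the boot entry, so nothing on disk changes.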
Re: [CentOS] sda and sdb reverse order with an external USB drive
On 07/04/2013 10:46 AM, Joseph Hesse wrote: Hello I am using 64-bit CentOS 6.4 on an i7 laptop with one SATA drive and a CD drive. I installed CentOS by manually partitioning sda as: sda1 as /boot, sda2 as swap, sda3 as /. The booted system works great. When I insert an external USB drive, formatted as ext3, the hard drive on the laptop and the USB drive come up as either sda or sdb, depending upon the order in which I insert the USB drive and boot the system. Please see the two mount commands below for each of these situations. This works in either order, except that I don't want my USB drive to automount. What I want is that after I insert the USB drive in a running system and wait 15 seconds, I can mount it with the command # mount /mnt. To accomplish this I added a line to /etc/fstab, but it didn't work. When I uncomment the last line in fstab (see below) the computer hangs and doesn't boot. I was successful with this strategy on a similar laptop with Fedora 18, but not on my current one. Thank you, Joe Hesse

I have had similar issues in the past. The take-away is that you cannot depend on device names being stable; they depend on the order in which devices are enumerated at boot time. In my case, an eSATA drive shows up as the first device if it is turned on when the system boots. It apparently enumerates as sda, and the rest of the drives are bumped up one drive letter. The system boots OK, but the drive letters are different. When I want to mount the external drive I use LABEL=. When I formatted the external drive I specified a filesystem label, and rather than specifying /dev/sdb1 in my fstab I used LABEL=fslabel. That way it doesn't matter what device name comes up; the filesystem is mounted by its label. The label can be added after the fact using tune2fs or the appropriate tool for the on-disk format. You can also use UUID=uuid if you prefer UUIDs. See the mount manpage for more information.
Of course, I could be wrong about what you are trying to accomplish, but I think it might be applicable. YMMV! -- Jay Leafey - jay.lea...@mindless.com Memphis, TN
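A concrete sketch of the label approach above, using the device (/dev/sdb1), label (GoFlex), and mount point (/mnt) from the original post as assumptions:

```shell
# Label the ext3 filesystem on the USB drive after the fact
# (tune2fs works on ext2/3/4; do this while the filesystem is unmounted):
tune2fs -L GoFlex /dev/sdb1

# Verify that the label (and UUID) are now visible:
blkid /dev/sdb1

# /etc/fstab entry -- "noauto" keeps it from being mounted at boot,
# so an absent drive cannot hang the boot:
#   LABEL=GoFlex  /mnt  ext3  noauto,rw  0 0

# After plugging the drive in on a running system, mount it manually:
mount /mnt
```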
Re: [CentOS] This isn't supposed to be difficult (how to nntp post to the Gmane Pan user group)
On Thu, 04 Jul 2013 11:16:55 +0200, Leon Fauster wrote: why not asking them http://gmane.org/faq.php ? It's not in that FAQ, nor in the web page for the Pan users group. I did ask Lars, but he controls Gmane, not the Pan users group. Amazingly, the Pan users group just (apparently) assumes you omnipotently already know what to set the NNTP client server:port, login:password, and user:email to in order to post successfully. For example, this is what you need to post to *this* group:

Group name = gmane.linux.centos.general
Server = news.gmane.org
Port = 119
Login = blank
Password = blank
Username = Foo
Email = f...@bar.com

That is all that needs to be pre-registered in order to post to gmane.linux.centos.general. (I forget how I had pre-registered, but, IIRC, I had sent an email to someone somewhere and they wrote back with the instructions above - which allows me to post as long as I put that email address in the posting profile.)
Re: [CentOS] This isn't supposed to be difficult (how to nntp post to the Gmane Pan user group)
On 04.Jul.2013, at 10:34, Rock wrote: I realize this is (mostly) off topic, but I'm befuddled as to *how* one can post to the Gmane Pan Users' group (gmane.comp.gnome.apps.pan.user) using any NNTP USENET client (e.g., Pan, on CentOS). It is (fully) off topic. That said, when you post to a mailing list through Gmane for the first time, Gmane will send you an email that you must answer: you post via NNTP for the first time, Gmane sends you a confirmation email via SMTP, you reply to that email, and then you wait some time. Something like that - but, as said, when you have problems with Gmane, ask Gmane. -- Markus
Re: [CentOS] Server dies after kernel upgrade
On 07/04/2013 05:54 PM, Keith Keller wrote: On 2013-07-04, Chris Taylor chris.tay...@corp.eastlink.ca wrote: I am running a server with CentOS release 6.4 (Final) and kernel version 2.6.32-279.19.1.el6.x86_64, and everything looks OK, but when I do a yum update of the kernel to the newer version 2.6.32-358.11.1.el6. As Lubomir pointed out yesterday, the latest kernel is vulnerable to a DoS attack, so you should probably use a different one. See Johnny's message for details, and an untested kernel that you could use instead: http://lists.centos.org/pipermail/centos/2013-July/135671.html It will not restart after the required reboot. It starts to load until the progress bar at the bottom reaches the end, then it stops loading. I have been patient with it in case it needed more time to start, but after 10 minutes it was still just sitting at the progress bar. You can either try Ctrl-D or Esc during the splash-screen process (I don't remember exactly which one), or you can edit the GRUB command line when it comes up at boot and remove the rhgb and quiet options from the kernel options. --keith

There are always 3 kernels available for boot; only one gets updated. So first check whether the other two you have boot properly. Then you can list all available kernels with yum list kernel --showduplicates, select one, and install it with something like yum install kernel-2.6.32-358.6.2.el6.centos.plus, and see if that one works. If it does, you can use it. One thing to check is whether you need to recompile VM drivers for new kernels. -- Ljubomir Ljubojevic (Love is in the Air) PL Computers Serbia, Europe StarOS, Mikrotik and CentOS/RHEL/Linux consultant
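The kernel-selection steps above, as a command sketch. The specific version string is the example from the post; note that a .centos.plus kernel comes from the centosplus repository, which must be enabled for that install command to succeed:

```shell
# Show every kernel build the enabled repositories still provide:
yum list kernel --showduplicates

# Install one specific older build alongside the kernels already present
# (yum installs kernel packages side by side rather than replacing them):
yum install kernel-2.6.32-358.6.2.el6.centos.plus

# Check which kernel builds are installed and selectable in GRUB:
rpm -q kernel
```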