Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread JohnS

On Tue, 2010-01-26 at 13:41 +0800, Christopher Chan wrote:
> JohnS wrote:
> > On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> >> Are complicated relationships being stored in postgresql and not in 
> >> mysql? I do not know how things are now but mysql has a history of only 
> >> being good for simple selects.
> > 
> > Selects can get very uppity for mysql as in "VIEWS".  They can do Concat,
> > Inner Join and Outer among many more things.  VIEW myview as SELECT can
> > do some very very logical calcs and predictions.  I promise it is not
> > just for simple selects.
> > 
> 
> By 'being good only for simple selects' I meant performance-wise, which 
> is what this thread is all about - performance. Sure you can make 
> complicated queries on mysql but compared to postgresql they would take 
> quite some time. Again, this is based on stuff in the past. Maybe mysql 
> has improved now.

Sure, I knew what you meant, but we are going to bang heads on your
definition of simple selects.  I can't compare performance to postgresql
but I am willing to bet that mysql can do a lot more.  For something like a
"Breadth First" or "Depth First" logical operation, it is sad for me to
even say MySQL is faster in that area with predictions than MSSQL.
Having said that, I really love mssql and sqlce. Now we're getting OT.

Great things started to happen with mysql at version 5 and later.  Now it's
just probably going to wither away.  Who really knows?

> I am just happy that more stuff started supporting postgresql before the 
> Sun buyout. They would have had some time to mature instead of a frantic 
> 'we need to add/convert to postgresql just in case'. But I will still go 
> for mysql with connection caching if it is just a simple table lookup 
> that needs to be remotely queryable.




Re: [CentOS] RAR from Console

2010-01-25 Thread Christoph Maser
On Tuesday, 26.01.2010, at 07:04 +0100, Alberto García Gómez wrote:
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?
>
> Saludos Fraternales

Alberto, please do not use "reply to" and then change the topic. Instead
use "new mail" in your mail client to start a new topic.







Re: [CentOS] RAR from Console

2010-01-25 Thread Anthony Kamau
On Tue, 2010-01-26 at 01:04 -0500, Alberto García Gómez wrote:
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?
> 

yum search unrar
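
For example, assuming a third-party repository such as RPMForge is already
configured (the archive name below is made up):

yum --enablerepo=rpmforge install unrar   # unrar is not in the base repos
unrar l archive.rar                       # list the contents
unrar x archive.rar                       # extract, keeping full paths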



Re: [CentOS] RAR from Console

2010-01-25 Thread Neil Aggarwal
> Hi fellows, how can I unrar (.rar of course) from my console?

RPMForge has an unrar package.
 
Neil

--
Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net/cpanel
cPanel/WHM preinstalled on a virtual server for only $40/month!
No overage charges, 7 day free trial, PayPal, Google Checkout 



Re: [CentOS] RAR from Console

2010-01-25 Thread Frank Cox

On Tue, 2010-01-26 at 01:04 -0500, Alberto García Gómez wrote:
> 
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?

unrar is in the rpmfusion-nonfree repository.
-- 
MELVILLE THEATRE ~ Melville Sask ~ http://www.melvilletheatre.com



[CentOS] RAR from Console

2010-01-25 Thread Alberto García Gómez
Hi fellows, how can I unrar (.rar of course) from my console? What package do I 
need?

Fraternal greetings,
_
Sincerely,
Alberto García Gómez M:.M:.
Administrador de Redes/Webmaster
IPI "Carlos Marx", Matanzas. Cuba.
0145-2887(30-33) ext 124





Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Christopher Chan
JohnS wrote:
> On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
>> Are complicated relationships being stored in postgresql and not in 
>> mysql? I do not know how things are now but mysql has a history of only 
>> being good for simple selects.
> 
> Selects can get very uppity for mysql as in "VIEWS".  They can do Concat,
> Inner Join and Outer among many more things.  VIEW myview as SELECT can
> do some very very logical calcs and predictions.  I promise it is not
> just for simple selects.
> 

By 'being good only for simple selects' I meant performance-wise, which 
is what this thread is all about - performance. Sure you can make 
complicated queries on mysql but compared to postgresql they would take 
quite some time. Again, this is based on stuff in the past. Maybe mysql 
has improved now.

I am just happy that more stuff started supporting postgresql before the 
Sun buyout. They would have had some time to mature instead of a frantic 
'we need to add/convert to postgresql just in case'. But I will still go 
for mysql with connection caching if it is just a simple table lookup 
that needs to be remotely queryable.


Re: [CentOS] DNS issue.. help ?!

2010-01-25 Thread Rajagopal Swaminathan
Greetings,

On Mon, Jan 25, 2010 at 8:05 PM, Roland Roland wrote:

> Hi All,
>
> But when I do nslookup example.com on the client's machine, the website
> resolves to another IP (the one set in the initial public DNS records).
>
>
Could it be because of the DNS cache on the client side?
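
If so, flushing the client's resolver cache should make it pick up the new
record. A couple of hedged examples (assuming the Linux client caches
through nscd):

service nscd restart     # Linux client running nscd
ipconfig /flushdns       # Windows client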

Regards

Rajagopal


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread JohnS

On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> Are complicated relationships being stored in postgresql and not in 
> mysql? I do not know how things are now but mysql has a history of only 
> being good for simple selects.

Selects can get very uppity for mysql as in "VIEWS".  They can do Concat,
Inner Join and Outer among many more things.  VIEW myview as SELECT can
do some very very logical calcs and predictions.  I promise it is not
just for simple selects.
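
For instance, a made-up sketch (every table and column name here is
invented) of a view doing CONCAT plus inner and outer joins:

mysql mydb <<'SQL'
CREATE VIEW order_summary AS
SELECT CONCAT(c.first_name, ' ', c.last_name) AS customer,
       o.id AS order_id,
       SUM(i.qty * i.price) AS total
FROM customers c
INNER JOIN orders o ON o.customer_id = c.id
LEFT OUTER JOIN order_items i ON i.order_id = o.id
GROUP BY c.id, o.id;
SQL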

John



Re: [CentOS] movie software

2010-01-25 Thread Robert Heller
At Mon, 25 Jan 2010 18:49:19 -0800 (PST), CentOS mailing list wrote:

> 
> Hi
> 
> Is there any open source software that can open QuickTime files?
> 
> And can it convert from QuickTime to other movie formats as well?

mplayer / mencoder (from the rpmforge repo)
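
A typical mencoder invocation (filenames made up) to convert a QuickTime
.mov into an MPEG-4 AVI would be something like:

mencoder input.mov -ovc lavc -lavcopts vcodec=mpeg4 -oac mp3lame -o output.avi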

> 
> Thank you
> 

-- 
Robert Heller -- 978-544-6933
Deepwoods Software-- Download the Model Railroad System
http://www.deepsoft.com/  -- Binaries for Linux and MS-Windows
hel...@deepsoft.com   -- http://www.deepsoft.com/ModelRailroadSystem/
  


Re: [CentOS] movie software

2010-01-25 Thread Michael A. Peters
adrian kok wrote:
> Hi
> 
> Is there any open source software that can open QuickTime files?
> 
> And can it convert from QuickTime to other movie formats as well?
> 
> Thank you

ffmpeg2theora does a good job at converting the H.264 that modern 
QuickTime uses into Ogg Theora.

VLC does a good job at playing just about any format.

You will need to make sure you have the right libraries if you build 
them from source.

For ffmpeg2theora (and probably VLC too) there is a static binary for 
Linux at the project website.
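
A minimal example (filenames made up):

ffmpeg2theora input.mov -o output.ogv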


[CentOS] movie software

2010-01-25 Thread adrian kok
Hi

Is there any open source software that can open QuickTime files?

And can it convert from QuickTime to other movie formats as well?

Thank you



Re: [CentOS] autofs with nfs plus local directories

2010-01-25 Thread Agile Aspect
On Mon, Jan 25, 2010 at 3:35 PM, Carlos Santana  wrote:
> Hi,
>
> I have autofs configured to mount home dirs from NFS. All user
> account lookups are done using LDAP. All is working fine with this
> setup. Now I need to create a local user account and have its home dir
> also on local system. So I added a new user account and changed
> auto.home as follows:
>
> test1         -rw,hard,intr   /home/test1
> *             -rw,hard,intr   nfs1:/export/users/&
>
> But this stuff is not working. If I change test1 user's home dir to
> '/opt/test1', it works fine. Log messages indicate:
> 'umount_autofs_indirect: ask umount returned busy /home'. I have some
> LDAP users logged on to the system. Do I need to tell them to log out to
> successfully reload autofs? Any clues on this would be really helpful.

You can't use the path /home because autofs uses it to mount the
home directories.

cd /home
df .

Your entry for test1 is trying to mount /home/test1 on /home/test1
which won't work.

The local user cannot use the path /home as long as autofs is using /home.
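
One hedged workaround, if test1 really must appear under /home: keep the
data elsewhere and serve it through the same indirect map with a bind
mount (the /local path is an invented example):

test1    -fstype=bind    :/local/home/test1
*        -rw,hard,intr   nfs1:/export/users/&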

-- 
  Enjoy global warming while it lasts.


Re: [CentOS] autofs with nfs plus local directories

2010-01-25 Thread Jorge Fábregas
On Monday 25 January 2010 19:35:07 Carlos Santana wrote:
> Now I need to create a local user account and have its home dir
> also on local system

If it's a local user you want (with its files on the local system), why are you 
using the autofs facility? Isn't it just a matter of creating the user locally 
and making sure it resides in the local system's /etc/passwd file?  Did you check 
/etc/nsswitch.conf to find out the order in which the databases are searched?
What do you get when you do:  getent passwd | grep test1

HTH,
Jorge


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread Kevin Krieser


-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of James B. Byrne
Sent: Monday, January 25, 2010 10:06 AM
To: Robert Nichols
Cc: centos@centos.org
Subject: Re: [CentOS] The directory that I am trying to clean up is huge

On Mon, January 25, 2010 10:31, Robert Nichols wrote:
\
>
> Now if the "{}" string appears more than once then the command line
> contains that path more than once, but it is essentially impossible
> to exceed the kernel's MAX_ARG_PAGES this way.
>
> The only issue with using "-exec command {} ;" for a huge number of
> files is one of performance.  If there are 100,000 matched files,
> the command will be invoked 100,000 times.
>
> --
> Bob Nichols rnichol...@comcast.net
>

Since the OP reported that the command he used:

  find -name "*.access*" -mtime +2 -exec rm {} \;

in fact failed, one may infer that more than performance is at issue.

The OP's problem lies not with the -exec construction but with the
unstated, but nonetheless present, './' of his find invocation.
Therefore he begins a recursive descent into that directory tree.
Since the depth of that tree is not given us, nor its contents, we
may only infer that there must be some number of files therein which
are causing the MAXPAGES limit to be exceeded before the recursion
returns.

I deduce that he could provide the -prune option or the -maxdepth 0
option to avoid this recursion instead. I have not tried either but
I understand that one, or both, should work.




I still suspect that the OP had an unquoted wildcard someplace on his
original command.  Either a find * -name ..., or find . -name *.access*...

I see people forget to quote the argument to -name all the time, which would
normally work if the wildcard doesn't match more than 1 file in the current
directory.  But if there is more than 1 file, then find will return an error
since the second file would likely not match an option to find.  

If there are too many matches in the current directory, the unquoted example
would fail even before the find command could execute.
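
In other words, the pattern has to reach find unexpanded (the pattern here
is the OP's):

find . -name '*.access*' -mtime +2 -exec rm {} \;   # quoted: find expands it
find . -name *.access* -mtime +2 -exec rm {} \;     # unquoted: the shell may
                                                    # expand it first and break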



Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread nate
Noob Centos Admin wrote:

> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.

This is what I use for MySQL (among other things)

log-queries-not-using-indexes
long_query_time=3
key_buffer = 50M
bulk_insert_buffer_size = 8M
table_cache = 1000
sort_buffer_size = 8M
read_buffer_size = 4M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 8M
thread_cache = 40
query_cache_size = 256M
query_cache_type=1
query_cache_limit=20M

default-storage-engine=innodb
innodb_file_per_table
innodb_buffer_pool_size=20G
# ^ assumes you have a decent amount of ram; this is the max I can set
#   the buffers to with 32G of RAM w/o swapping
innodb_additional_mem_pool_size=20M
innodb_log_file_size=1999M
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT
# ^ this turns on Direct I/O
innodb_lock_wait_timeout=120
innodb_log_buffer_size=13M
innodb_open_files=1024
innodb_thread_concurrency=16
sync_binlog=1
set-variable = tmpdir=/var/lib/mysql/tmp
# ^ force tmp to be on the SAN rather than local disk

Running MySQL 5.0.51a (built from SRPMS)

nate





Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Christopher Chan
Noob Centos Admin wrote:
> Hi,
> 
>> If you want a fast database forget about file system caching,
>> use Direct I/O and put your memory to better use - application
>> level caching.
> 
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.

Which is it? mysql or postgresql or both? Have you actually determined 
that i/o is in fact the bottleneck?

Is the webapp maintaining persistent connections to mysql or is it 
continually connecting and disconnecting? mysql sucks big time if you 
have a good few hundred connections being set up and torn down all the time.

Are complicated relationships being stored in postgresql and not in 
mysql? I do not know how things are now but mysql has a history of only 
being good for simple selects.


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Ross Walker
On Jan 25, 2010, at 7:02 PM, JohnS  wrote:

>
> On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:
>
>>> Instead look at the way your PHP code is
>>> encoding the BLOB data, and if you really need the speed, since now it's
>>> a MySQL DB, make your own custom C API for mysql to encode the BLOB.
>>> The DB can do this much faster than your PHP app ever thought of.
>>
>> I believe the OP said he was running postgresql.
>>
>
> Quoted from the OP's previous mail, he's not sure lol
>
> """The web application is written in PHP and runs off MySQL and/or
> Postgresql."""

Ah, well #1 on his list then is to figure out what he is running!

If there are BLOB/TEXT fields, those should really be queried
separately and put in some cache shared by the PHP app (or discarded
depending on how frequently they are used). Not knowing what they are -
probably avatars, which should be cached, as opposed to uploads or such,
which shouldn't. Who knows without more info.
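
As a hypothetical sketch (every name here is invented), giving the
BLOB/TEXT data its own 1:1 table keeps it out of the hot queries entirely:

mysql mydb <<'SQL'
CREATE TABLE post_body (
    post_id INT UNSIGNED NOT NULL PRIMARY KEY,   -- same key as the primary table
    body    MEDIUMBLOB                           -- fetched only when needed
) ENGINE=InnoDB;
SQL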

-Ross



Re: [CentOS] Firewire issues with CentOS/RH?

2010-01-25 Thread Michael A. Peters
MHR wrote:
> I read in another forum that CentOS has problems with Firewire drives,
> something along the lines of whenever a new kernel is booted, the
> drives are gone.
> 
> Can anyone elaborate on that?  I don't use Firewire drives (at all,
> yet), but information about this would be nice to have

I have not personally experienced that issue, but when using kino to import 
from my DV camera, it almost always crashed in CentOS whenever doing 
anything that required talking to the camera, yet it works *almost* 
flawlessly in Ubuntu. Same version of kino; some of the libs it was built 
against may differ, but the biggest difference was the firewire subsystem, 
and firewire is where the crashes happened.

My firewire iPod, however, worked well in CentOS (until the iPod broke), 
so the issue may have been kino just working better with a more modern 
firewire subsystem than the one CentOS has.


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread JohnS

On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:

> >  Instead look at the way your PHP code is
> > encoding the BLOB data, and if you really need the speed, since now it's
> > a MySQL DB, make your own custom C API for mysql to encode the BLOB.  The
> > DB can do this much faster than your PHP app ever thought of.
> 
> I believe the OP said he was running postgresql.
> 

Quoted from the OP's previous mail, he's not sure lol

"""The web application is written in PHP and runs off MySQL and/or
Postgresql."""

John



Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Ross Walker

On Jan 25, 2010, at 6:22 PM, JohnS  wrote:

>
> On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
>> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:
>>
>>> Hi,
>>>
>>>> 20 fields or columns is really nothing. BUT that's dependent on the
>>>> type of data being inserted.
>>>
>>> 20 was an arbitary number :)
>>>
>>>> Ok so break the one table down, create 2 or more, then you will have
>>>> "Joins" & clustered indexes, thus slowing you down more possibly.  That
>>>> is greatly dependent on your select, delete, and update scripts.
>>>
>>> That was the reason the original developer gave for having these  
>>> massive
>>> rows! Admittedly it is easier to read but when each row also  
>>> contains
>>> text/blob fields, they tend to grow rather big. Some users have been
>>> complaining the server seems to be getting sluggish so I'm trying to
>>> plan ahead and make changes before it becomes a real problem.
>>
>> Split the TEXT/BLOB data out of the primary table into tables of their
>> own, indexed to the primary table by its key column.
>
> Would seem like a good idea in theory only.  You're back to all but
> basic joins on the table now.

I'm sure not all transactions will require the BLOB/TEXT fields.

>  Instead look at the way your PHP code is
> encoding the BLOB data, and if you really need the speed, since now it's
> a MySQL DB, make your own custom C API for mysql to encode the BLOB.  The
> DB can do this much faster than your PHP app ever thought of.

I believe the OP said he was running postgresql.

>
>>>> Possibly very correct, but Nate is very correct on how you are
>>>> accessing the DB, ie direct i/o, also.  Your fastest access comes in
>>>> optimized SPROCS and Triggers and TSQL.  Slam enough memory into the
>>>> server and load it in memory.
>>>
>>> It's an old server with all slots populated so adding memory is  
>>> not an
>>> option. I thought of doing an image and porting it into a VM on a
>>> newer/faster machine. But then at the rate this client's usage
>>> growing, I foresee that as simply delaying the inevitable.
>>
>> Think about distributing the parts to different boxes as necessary.
>> You can start with the DBMS which is the logical candidate.
>>
>>>> If speed is what you're after, why are you worried about VFS?
>>>> CentOS does support Raw Disk Access (no filesystem).
>>>
>>> To be honest, I don't really care about VFS since I didn't know it
>>> existed until I started looking up Linux file/disk caching :D
>>>
>>> So I assumed that was what PHP and DBMS like MySQL/Postgresql  
>>> would be
>>> working through. It made sense since they wouldn't need to worry  
>>> about
>>> what filesystem was really used.
>>
>> On the DBMS backend, give it plenty of memory, good storage for the
>> workload and good networking.
>>
>> On the Apache/PHP side, look for a good DBMS inter-connect and some
>> PHP caching module and of course enough CPU for the PHP code and
>> network for Apache+DBMS inter-connect.
>
> Make sure PHP is creating and tearing down connections on insert, and
> calling flush() and connection.close()
>
>> If you wanted to split it up even more you could look into some sort
>> of PHP distributed cache/processing system and have PHP processed
>> behind Apache.
>
> You really need a good SQL Book to sit down and read like the one by:
> Peter Brawley and Arthur Fuller @ artfullsoftware.com, coauthored by  
> the
> original mysql owners.  It is the best one you will get.

Is it restricted to purely MySQL?

-Ross



Re: [CentOS] [Fwd: Re: The directory that I am trying to clean up is huge]

2010-01-25 Thread Ross Walker
On Jan 25, 2010, at 10:48 AM, Corey Chandler wrote:

> On 1/25/10 9:33 AM, Robert Nichols wrote:
>>
>> When using the -exec action with the ";" terminator, the constructed
>> command line always contains the path for exactly one matched file.
>> Try it.  Run "find /usr -exec echo {} ;" and see that you get one
>> path per line and output begins almost instantly.
>
> Don't forget to backslash-escape the semicolon; the proper way to get
> data out of this example would be:
>
> find /usr -exec echo {} \;

And don't forget to escape the curly braces which are expanded in some  
shells:

# find /usr -exec echo \{\} \;

-Ross



[CentOS] autofs with nfs plus local directories

2010-01-25 Thread Carlos Santana
Hi,

I have autofs configured to mount home dirs from NFS. All user
account lookups are done using LDAP. All is working fine with this
setup. Now I need to create a local user account and have its home dir
also on local system. So I added a new user account and changed
auto.home as follows:

test1 -rw,hard,intr   /home/test1
* -rw,hard,intr   nfs1:/export/users/&

But this stuff is not working. If I change test1 user's home dir to
'/opt/test1', it works fine. Log messages indicate:
'umount_autofs_indirect: ask umount returned busy /home'. I have some
LDAP users logged on to the system. Do I need to tell them to log out to
successfully reload autofs? Any clues on this would be really helpful.

Thanks,
CS.


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread JohnS

On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:
> 
> > Hi,
> >
> >> 20 fields or columns is really nothing. BUT that's dependent on the
> >> type of data being inserted.
> >
> > 20 was an arbitary number :)
> >
> >> Ok so break the one table down, create 2 or more, then you will have
> >> "Joins" & clustered indexes, thus slowing you down more possibly.  That
> >> is greatly dependent on your select, delete, and update scripts.
> >
> > That was the reason the original developer gave for having these massive
> > rows! Admittedly it is easier to read but when each row also contains
> > text/blob fields, they tend to grow rather big. Some users have been
> > complaining the server seems to be getting sluggish so I'm trying to
> > plan ahead and make changes before it becomes a real problem.
> 
> Split the TEXT/BLOB data out of the primary table into tables of their
> own, indexed to the primary table by its key column.

Would seem like a good idea in theory only.  You're back to all but basic
joins on the table now.  Instead look at the way your PHP code is
encoding the BLOB data, and if you really need the speed, since now it's
a MySQL DB, make your own custom C API for mysql to encode the BLOB.  The
DB can do this much faster than your PHP app ever thought of.

> >> Possibly very correct, but Nate is very correct on how you are
> >> accessing the DB, ie direct i/o, also.  Your fastest access comes in
> >> optimized SPROCS and Triggers and TSQL.  Slam enough memory into the
> >> server and load it in memory.
> >
> > It's an old server with all slots populated so adding memory is not an
> > option. I thought of doing an image and porting it into a VM on a
> > newer/faster machine. But then at the rate this client's usage
> > growing, I foresee that as simply delaying the inevitable.
> 
> Think about distributing the parts to different boxes as necessary.  
> You can start with the DBMS which is the logical candidate.
> 
> >> If speed is what you're after, why are you worried about VFS?
> >> CentOS does support Raw Disk Access (no filesystem).
> >
> > To be honest, I don't really care about VFS since I didn't know it
> > existed until I started looking up Linux file/disk caching :D
> >
> > So I assumed that was what PHP and DBMS like MySQL/Postgresql would be
> > working through. It made sense since they wouldn't need to worry about
> > what filesystem was really used.
> 
> On the DBMS backend, give it plenty of memory, good storage for the  
> workload and good networking.
> 
> On the Apache/PHP side, look for a good DBMS inter-connect and some  
> PHP caching module and of course enough CPU for the PHP code and  
> network for Apache+DBMS inter-connect.

Make sure PHP is creating and tearing down connections on insert, and
calling flush() and connection.close()

> If you wanted to split it up even more you could look into some sort  
> of PHP distributed cache/processing system and have PHP processed  
> behind Apache.

You really need a good SQL book to sit down and read, like the one by
Peter Brawley and Arthur Fuller @ artfullsoftware.com, coauthored by the
original mysql owners.  It is the best one you will get.



Re: [CentOS] Yum update failure (was a digest reference)

2010-01-25 Thread Tobias Weisserth
Hi,

I think I have found the problem. It's rather annoying. My host is a VPS
hosted at alfahosting.de. They have migrated my guest to another host system
recently as there seemed to be problems with the old host. By migrating they
seem to have messed up the RPM package database and some other things that
seem to be filesystem related. That really sucks. I have opened a support
ticket with them and told them what I think about such an amateur service.

I really hope they can restore the state of my host to the way it has been.
I have backups of the most critical configuration files and data but I lack
the time to set up another host from scratch right now. It's a hobby, not a
job... And I was depending on them to do THEIR JOB right...

regards and thanks for the ideas,

Tobias

On Mon, Jan 25, 2010 at 10:32 PM, Eero Volotinen wrote:

> 2010/1/25 Tobias Weisserth :
> > Thanks for the hint, but still no luck:
> > [... full yum output snipped ...]

Re: [CentOS] OT: reliable secondary dns provider

2010-01-25 Thread Eero Volotinen
2010/1/25 Matt :
>> Sorry about a bit offtopic, but I am looking reliable (not free)
>> secondary dns provider.
>
> Why not just rent a VPS, install CentOS, and use it as your
> secondary?  You would have total control then, and it would be cheap
> and reliable.

Well, my work still costs a lot of money. Adding more servers to
maintain is not wise, and not even cost-effective.

Cheap DNS hosting providers are available, and they take care of
security issues also.

--
Eero


Re: [CentOS] OT: reliable secondary dns provider

2010-01-25 Thread Steve Huff


On Jan 25, 2010, at 4:21 PM, Eero Volotinen wrote:


> Sorry, this is a bit offtopic, but I am looking for a reliable (not free)
> secondary DNS provider.

I've had consistently good experiences with RollerNet (http://rollernet.us).


-steve

--
If this were played upon a stage now, I could condemn it as an  
improbable fiction. - Fabian, Twelfth Night, III,v

http://five.sentenc.es





Re: [CentOS] OT: reliable secondary dns provider

2010-01-25 Thread Matt
> Sorry, this is a bit offtopic, but I am looking for a reliable (not free)
> secondary DNS provider.

Why not just rent a VPS, install CentOS, and use it as your
secondary?  You would have total control then, and it would be cheap
and reliable.

Matt


Re: [CentOS] Yum update failure (was a digest reference)

2010-01-25 Thread Eero Volotinen
2010/1/25 Tobias Weisserth :
> Thanks for the hint, but still no luck:
> [... full yum output snipped ...]

Re: [CentOS] OT: reliable secondary dns provider

2010-01-25 Thread nate
Eero Volotinen wrote:
> Sorry, this is a bit offtopic, but I am looking for a reliable (not free)
> secondary DNS provider.


My company uses Dynect as primary and secondary, though they can
do secondary-only as well.

http://dyn.com/dynect

We also use their DNS-based global load balancing.

So far 100% uptime (about 7-8 months of usage). Thousands of queries
per second.

nate




Re: [CentOS] Yum update failure (was a digest reference)

2010-01-25 Thread Tobias Weisserth
Thanks for the hint, but still no luck:

[r...@hostname ~]# yum clean all
Loaded plugins: fastestmirror, priorities
Cleaning up Everything
Cleaning up list of fastest mirrors
[r...@hostname ~]# yum update
Loaded plugins: fastestmirror, priorities
Determining fastest mirrors
 * addons: mirror.netcologne.de
 * atomic: www4.atomicorp.com
 * base: mirror.netcologne.de
 * centosplus: mirror.netcologne.de
 * extras: mirror.netcologne.de
 * rpmforge: ftp-stud.fht-esslingen.de
 * updates: mirror.netcologne.de
addons                                                   |  951 B     00:00
addons/primary                                           |  201 B     00:00
atomic                                                   | 1.9 kB     00:00
atomic/primary_db                                        | 304 kB     00:00
base                                                     | 2.1 kB     00:00
base/primary_db                                          | 1.6 MB     00:00
centosplus                                               |  951 B     00:00
centosplus/primary                                       | 157 kB     00:00
centosplus                                                          183/183
extras                                                   | 1.1 kB     00:00
extras/primary                                           | 107 kB     00:00
extras                                                              325/325
rpmforge                                                 | 1.1 kB     00:00
rpmforge/primary                                         | 3.6 MB     00:00
rpmforge                                                        10064/10064
updates                                                  | 1.9 kB     00:00
updates/primary_db                                       | 459 kB     00:00
utterramblings                                           |  951 B     00:00
utterramblings/primary                                   |  23 kB     00:00
utterramblings                                                        69/69
Excluding Packages from CentOS-5 - Base
Finished
Excluding Packages from CentOS-5 - Plus
Finished
Reducing CentOS-5 - Plus to included packages only
Finished
Excluding Packages from CentOS-5 - Updates
Finished
556 packages excluded due to repository priority protections
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package util-linux.i386 0:2.13-0.52.el5_4.1 set to be updated
--> Processing Dependency: libc.so.6(GLIBC_2.4) for package: util-linux
--> Processing Dependency: libc.so.6(GLIBC_2.0) for package: util-linux
--> Processing Dependency: rtld(GNU_HASH) for package: util-linux
--> Processing Dependency: libutil.so.1(GLIBC_2.0) for package: util-linux
--> Processing Dependency: libc.so.6(GLIBC_2.2) for package: util-linux
--> Processing Dependency: libc.so.6 for package: util-linux
--> Processing Dependency: libc.so.6(GLIBC_2.3) for package: util-linux
--> Processing Dependency: libcrypt.so.1 for package: util-linux
--> Processing Dependency: libc.so.6(GLIBC_2.3.4) for package: util-linux
--> Processing Dependency: libc.so.6(GLIBC_2.1) for package: util-linux
--> Processing Dependency: libutil.so.1 for package: util-linux
--> Running transaction check
---> Package glibc.i686 0:2.5-42.el5_4.3 set to be updated
--> Processing Dependency: glibc-common = 2.5-42.el5_4.3 for package: glibc
--> Running transaction check
---> Package glibc-common.i386 0:2.5-42.el5_4.3 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch      Version                   Repository        Size
================================================================================
Updating:
 util-linux         i386      2.13-0.52.el5_4.1         updates           1.8 M
Installing for dependencies:
 glibc              i686      2.5-42.el5_4.3            updates           5.2 M
 glibc-common       i386      2.5-42.el5_4.3            updates            16 M

Transaction Summary
================================================================================
Install      2 Package(s)
Update       1 Package(s)
Remove       0 Package(s)

Total download size: 24 M
Is this ok [y/N]: y
Downloading Packages:
(1/3): util-linux-2.13-0.52.el5_4.1.i386.rpm             | 1.8 MB     00:00
(2/3): glibc-2.5-42.el5_4.3.i686.rpm                     | 5.2 MB     00:00
(3/3): glibc-common-2.5-42.el5_4.3.i386.rpm              |  16 MB     00:02
--------------------------------------------------------------------------------
Total                                           8.2 MB/s |  24 MB     00:02
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : glibc-common                                           1/4
Error unpacking rpm package glibc-common-2.5-42.el5_4.3.i386
error: unpacking of archive failed on file /usr/share/i18n: cpio: mkdir
  Installing     : glibc                                                  2/4
Error unpacking rpm package glibc-2.5-42.el5_4.3.i686
warning: /etc/localtime created as /etc/loc

[CentOS] OT: reliable secondary dns provider

2010-01-25 Thread Eero Volotinen
Sorry, this is a bit offtopic, but I am looking for a reliable (not free)
secondary DNS provider.

--
Eero


Re: [CentOS] Bash script for backup

2010-01-25 Thread Christoph Maser
On Monday, 25.01.2010, at 19:48 +0100, Alan Hoffmeister wrote:
> Hello guyz!
>
> I'm new here, and this is my very first trouble...
>
> I need a script that will backup & compress the folder /media/system in
> the folder /media/backups
>
> But that's not the problem, I need that only the last 7 backups (last 7
> days, yeah I know, cronjob...) will stay in that folder...
>
> The script need:
> 1 - Compress folder /media/system
> 2 - Store in /media/backups
> 3 - Name the compressed backup like day_month_year.tar.gzip
> 4 - Check the other backups and delete backups older than 7 days..
>
> Can someone help me?
>
> Thanks!

Since it is 7 days you want, putting `date +%A` (the weekday name) in the
output filename would be an easy solution: each day's backup simply
overwrites the one from a week earlier. Another really simple thing is to
use the --backup switch of mv.
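
A minimal sketch of that approach, using the paths from the original post
(untested):

#!/bin/bash
# The weekday name in the filename means each run overwrites the backup
# taken seven days earlier, so only the last 7 are ever kept.
tar czf /media/backups/system_$(date +%A).tar.gz /media/system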

Chris




Re: [CentOS] Firewire issues with CentOS/RH?

2010-01-25 Thread Rick Philbrick
I haven't seen this problem with my drive.  I am using CentOS 5.4, and the
configuration change that I found necessary was to uncomment the entries in
the blacklist file
/etc/modprobe.d/blacklist-firewire

regards-
Rick

On Mon, Jan 25, 2010 at 11:52 AM, MHR  wrote:

> I read in another forum that CentOS has problems with Firewire drives,
> something along the lines of whenever a new kernel is booted, the
> drives are gone.
>
> Can anyone elaborate on that?  I don't use Firewire drives (at all,
> yet), but information about this would be nice to have
>
> Thanks.
>
> mhr


[CentOS] Firewire issues with CentOS/RH?

2010-01-25 Thread MHR
I read in another forum that CentOS has problems with Firewire drives,
something along the lines of whenever a new kernel is booted, the
drives are gone.

Can anyone elaborate on that?  I don't use Firewire drives (at all,
yet), but information about this would be nice to have

Thanks.

mhr


Re: [CentOS] Bash script for backup

2010-01-25 Thread Karanbir Singh
Hi Guys,

On 25/01/10 18:56, Lennart Andersen wrote:
> How about something like this..
> 

Don't top-post! And trim the replies. Take a look at
http://wiki.centos.org/GettingHelp/ListInfo for some basic guidelines
that we ask everyone to follow here on the lists.

-- 
Karanbir Singh
kbsi...@karan.org | http://www.karan.org/ | twitter.com/kbsingh
ICQ: 2522219  | Yahoo IM: z00dax  | Gtalk: z00dax
GnuPG Key : http://www.karan.org/publickey.asc


Re: [CentOS] Bash script for backup

2010-01-25 Thread Lennart Andersen
How about something like this..

#!/bin/bash

# This script makes a backup of the files on the primary server directory.

# Change the values of the variables to make the script work:
BACKUPDIR=/data/
BACKUPFILES=*.cdf
GZTARFILE=/var/tmp/data_$(date +%F).tar.gz
SERVER=mcastasp1
REMOTEDIR=/home/admin/DATA_BKP
LOGFILE=/home/admin/DATA_BKP/backup.log

cd $BACKUPDIR

# This creates the archive
tar zcf $GZTARFILE $BACKUPFILES > /dev/null 2>&1

# Create the remote backup directory
ssh $SERVER "mkdir -p $REMOTEDIR"

# Copy the file to another host - we have ssh keys for making this work
# without intervention.
scp $GZTARFILE $SERVER:$REMOTEDIR > /dev/null 2>&1

# Redirect errors because this generates some if the archive
# does not exist.
rm $GZTARFILE 2> /dev/null

# Create a timestamp in a logfile.
date >> $LOGFILE
echo backup succeeded >> $LOGFILE

# Clean up the remote server and leave 7 days of backup files
ssh $SERVER "find $REMOTEDIR -follow -name 'data_*' -ctime +7 -exec rm {} \;"





From: Alan Hoffmeister 
To: centos@centos.org
Sent: Mon, January 25, 2010 1:48:19 PM
Subject: [CentOS] Bash script for backup

Hello guyz!

I'm new here, and this is my very first trouble...

I need a script that will backup & compress the folder /media/system in 
the folder /media/backups

But that's not the problem, I need that only the last 7 backups (last 7 
days, yeah I know, cronjob...) will stay in that folder...

The script need:
1 - Compress folder /media/system
2 - Store in /media/backups
3 - Name the compressed backup like day_month_year.tar.gzip
4 - Check the other backups and delete backups older than 7 days..

Can someone help me?

Thanks!


[CentOS] Bash script for backup

2010-01-25 Thread Alan Hoffmeister
Hello guyz!

I'm new here, and this is my very first trouble...

I need a script that will back up & compress the folder /media/system into 
the folder /media/backups

But that's not the problem, I need that only the last 7 backups (last 7 
days, yeah I know, cronjob...) will stay in that folder...

The script need:
1 - Compress folder /media/system
2 - Store in /media/backups
3 - Name the compressed backup like day_month_year.tar.gzip
4 - Check the other backups and delete backups older than 7 days..

Can someone help me?

Thanks!


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread Les Mikesell
James B. Byrne wrote:
> On Mon, January 25, 2010 10:31, Robert Nichols wrote:
> \
>> Now if the "{}" string appears more than once then the command line
>> contains that path more than once, but it is essentially impossible
>> to exceed the kernel's MAX_ARG_PAGES this way.
>>
>> The only issue with using "-exec command {} ;" for a huge number of
>> files is one of performance.  If there are 100,000 matched files,
>> the command will be invoked 100,000 times.
>>
>> --
>> Bob Nichols rnichol...@comcast.net
>>
> 
> Since the OP reported that the command he used:
> 
>   find -name "*.access*" -mtime +2 -exec rm {} \;
> 
> in fact failed, one may infer that more than performance is at issue.
> 
> The OP's problem lies not with the -exec construction but with the
> unstated, but nonetheless present, './' of his find invocation.
> Therefore he begins a recursive descent into that directory tree.
> Since the depth of that tree is not given us, nor its contents, we
> may only infer that there must be some number of files therein which
> are causing the MAXPAGES limit to be exceeded before the recursion
> returns.

Find just emits the filenames as encountered, so _no_ number of files should be 
able to cause an error.  An infinitely deep directory tree might, or recursively 
linked directories, but only after a considerable amount of time and churning to 
exhaust the machine's real and virtual memory.

> I deduce that he could provide the -prune option or the -maxdepth= 0
> option to avoid this recursion instead. I have not tried either but
> I understand that one, or both, should work.

I'd say it is more likely that the command that resulted in an error wasn't 
exactly what was posted or there is a filesystem problem.

-- 
   Les Mikesell
lesmikes...@gmail.com




Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread James B. Byrne
On Mon, January 25, 2010 10:31, Robert Nichols wrote:
\
>
> Now if the "{}" string appears more than once then the command line
> contains that path more than once, but it is essentially impossible
> to exceed the kernel's MAX_ARG_PAGES this way.
>
> The only issue with using "-exec command {} ;" for a huge number of
> files is one of performance.  If there are 100,000 matched files,
> the command will be invoked 100,000 times.
>
> --
> Bob Nichols rnichol...@comcast.net
>

Since the OP reported that the command he used:

  find -name "*.access*" -mtime +2 -exec rm {} \;

in fact failed, one may infer that more than performance is at issue.

The OP's problem lies not with the -exec construction but with the
unstated, but nonetheless present, './' of his find invocation.
Therefore he begins a recursive descent into that directory tree.
Since the depth of that tree is not given us, nor its contents, we
may only infer that there must be some number of files therein which
are causing the MAXPAGES limit to be exceeded before the recursion
returns.

I deduce that he could provide the -prune option or the -maxdepth 0
option to avoid this recursion instead. I have not tried either but
I understand that one, or both, should work.



-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3




Re: [CentOS] [Fwd: Re: The directory that I am trying to clean up is huge]

2010-01-25 Thread Corey Chandler
On 1/25/10 9:33 AM, Robert Nichols wrote:
>
> When using the -exec action with the ";" terminator, the constructed
> command line always contains the path for exactly one matched file.
> Try it.  Run "find /usr -exec echo {} ;" and see that you get one
> path per line and output begins almost instantly.

Don't forget to backslash-escape the semicolon; the proper way to get 
data out of this example would be:

find /usr -exec echo {} \;


-- Corey / KB1JWQ



Re: [CentOS] [Fwd: Re: The directory that I am trying to clean up is huge]

2010-01-25 Thread Robert Nichols
James B. Byrne wrote:
 > On Sat, January 23, 2010 20:21, Robert Nichols wrote:
 >> Robert Heller wrote:
 >
 >> Gosh, then I guess the manpage for 'find' must be totally wrong
 >> where it
 >> says:
 >>
 >> -exec command ;
 >>...
 >>The specified command is run once for each matched
 >> file.
 >>
 >
 > Not wrong.  The man page on find simply does not speak to the limits
 > of the kernal configuration (MAX_ARG_PAGES) implicitly used by cp,
 > find, ls, etc.  It just lives within its means and fails when these
 > do not suffice.
 >
 > The problem you have is that the selection of all qualified files is
 > completed before any are acted on by find.  So, in the case of
 > overlarge collections, the page limit is exceeded before any are
 > deleted.  Taking each file as it is found and piping it to an
 > external rm command avoids hitting the page limit.

When using the -exec action with the ";" terminator, the constructed
command line always contains the path for exactly one matched file.
Try it.  Run "find /usr -exec echo {} ;" and see that you get one
path per line and output begins almost instantly.  Do you really
believe that 'find' searched the entire /usr tree in that time?

Now if the "{}" string appears more than once then the command line
contains that path more than once, but it is essentially impossible
to exceed the kernel's MAX_ARG_PAGES this way.

The only issue with using "-exec command {} ;" for a huge number of
files is one of performance.  If there are 100,000 matched files,
the command will be invoked 100,000 times.

-- 
Bob Nichols "NOSPAM" is really part of my email address.
 Do NOT delete it.



Re: [CentOS] Automatically check into SVN?

2010-01-25 Thread Mr Gabriel
On 25/01/2010 13:04, Greg Bailey wrote:
> Mr Gabriel wrote:
>
>> I would like to automatically check in a directory to SVN, to maintain
>> the changes that are made to those txt files over time. These files are
>> configuration files, and it would be good to be able to revert back by
>> simply checking out the older versions. But I would like to check in the
>> files every few minutes, and I'm sure SVN will not use up any disc
>> space, unless there are any changes.
>>
>> I already have an svn server up and running in production -- Any help
>>  
> You could take a look at FSVS, its description appears similar to your
> situation:
>
> http://fsvs.tigris.org/
>
> -Greg
>
Thank you Greg, this seems to be exactly what I want. Perfect!
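
For the archives: a plain svn + cron sketch could also do the basic job.
The path, schedule and commit message here are all assumptions:

# /etc/cron.d/conf-autocommit -- commit any changes every 5 minutes
*/5 * * * * root cd /etc/myconf && svn add -q --force . && svn commit -q -m autocommit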


Re: [CentOS] DNS issue.. help ?!

2010-01-25 Thread fabien faye
Hi,

One other thing:

What is the result you expect? Could you please run a dig query on your
DNS server and on your client side?

dig example.com @dnsserver
dig example.com

Best regards

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat

- Original Message -
From: "fabien faye" 
To: "CentOS mailing list" 
Sent: Monday, January 25, 2010 4:48:33 PM
Subject: Re: [CentOS] DNS issue.. help ?!

Hi,

If you are working on a real TLD, you have to change your DNS server on the
DNS provider side.

If you do a whois of your domain name:

Example :  http://generationip.com/whois?Whois=generationip.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

Domain Name: GENERATIONIP.COM
Registrar: EURODNS S.A
Whois Server: whois.eurodns.com
Referral URL: http://www.eurodns.com
Name Server: NS1.EURODNS.COM
Name Server: NS2.EURODNS.COM
Status: clientTransferProhibited
Updated Date: 05-mar-2009
Creation Date: 17-mar-2005
Expiration Date: 17-mar-2011

The name servers defined in this case are:
Name Server: NS1.EURODNS.COM
Name Server: NS2.EURODNS.COM

Those name servers are declared at the AUTHORITY servers; if you must define
new DNS servers, you have to change the name servers of this zone to your new
authority servers.

Best regards

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat

- Original Message -
From: "Roland Roland" 
To: "CentOS mailing list" 
Sent: Monday, January 25, 2010 3:35:26 PM
Subject: [CentOS] DNS issue.. help ?!

Hi All,

I have DNS configured on my CentOS 5.2 server.

It's all working fine. Right now I want to change the main public dns
from one IP to another to do some testing (the new public dns IP has
records which the old one doesn't have, and it's done as such for testing).

So I went into /etc/resolv.conf and changed the first nameserver to the
NEW public DNS, then ran:
/etc/init.d/network restart
/etc/init.d/named restart

When I issue an nslookup example.com ON the dns server, I get the exact
IP I want to do testing on.
But when I do nslookup example.com on the client's machine, the website
resolves to another IP (the one set in the initial public dns records).


Are there any other changes I need to make so that the DNS server
redirects its requests to the new public dns?



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread Chan Chung Hang Christopher
Anas Alnaffar wrote:
> I tried to run this command
> 
> find -name "*.access*" -mtime +2 -exec rm {} \;
> 

Should have been: find ./ -name \*.access\* -mtime +2 -exec rm -f {} \;
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] [Fwd: Re: The directory that I am trying to clean up is huge]

2010-01-25 Thread James B. Byrne


On Sat, January 23, 2010 20:21, Robert Nichols wrote:
> Robert Heller wrote:

>
> Gosh, then I guess the manpage for 'find' must be totally wrong
> where it says:
>
>   -exec command ;
>      ...
>      The specified command is run once for each matched
>      file.
>

Not wrong, just not very explicit regarding process.  The man page
does not say that find acts upon each file as it is found, only that
it acts upon each file that is found.  Neither does the man page
speak to the limits of the kernel configuration (MAX_ARG_PAGES)
implicitly used by cp, find, ls, etc.

The problem you have is that the selection of all qualified files is
completed before any are acted on by find.  So, in your case of an
overlarge collection, the page limit is exceeded before any are
deleted.  Taking each file as it is found and piping it to an
external rm command explicitly defines the process as find, delete
and find again, and thereby avoids hitting the page limit.

CentOS-5.3 was supposed to address this issue:

> Previously, the MAX_ARG_PAGES limit that is set in the kernel
> was too low, and may have resulted in the following error:
>
> execve: Argument list too long
>
> In this update, this limit has been increased to 25 percent of
> the stack size, which resolves this issue.

So, perhaps if you update to 5.3+ the problem might go away?
Although, in my opinion, piping find results through xargs is far
more reliable and portable.
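
Something along these lines (a sketch; the path is only a placeholder)
avoids both the per-file exec overhead and any argument-length limit:

find /var/log/myapp -name '*.access*' -mtime +2 -print0 | xargs -0 -r rm -f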

Regards,

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne        mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] DNS issue.. help ?!

2010-01-25 Thread fabien faye
Hi,

If you are working on a real TLD, you have to change your dns server on the dns
provider side.

If you do a whois lookup of your domain name:

Example :  http://generationip.com/whois?Whois=generationip.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

Domain Name: GENERATIONIP.COM
Registrar: EURODNS S.A
Whois Server: whois.eurodns.com
Referral URL: http://www.eurodns.com
Name Server: NS1.EURODNS.COM
Name Server: NS2.EURODNS.COM
Status: clientTransferProhibited
Updated Date: 05-mar-2009
Creation Date: 17-mar-2005
Expiration Date: 17-mar-2011

The name servers defined in this case are:
Name Server: NS1.EURODNS.COM
Name Server: NS2.EURODNS.COM

Those name servers are declared at the AUTHORITY servers; if you must define new
dns servers, you have to change the name servers of this zone to your new
authority servers.

Best regards

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat

- Original Message -
From: "Roland Roland" 
To: "CentOS mailing list" 
Sent: Monday, January 25, 2010 3:35:26 PM
Subject: [CentOS] DNS issue.. help ?!

Hi All,

I have DNS configured on my CentOS 5.2 server.

It's all working fine. Right now I want to change the main public dns
from one IP to another to do some testing (the new public dns IP has
records which the old one doesn't have, and it's done as such for testing).

So I went into /etc/resolv.conf and changed the first nameserver to the
NEW public DNS, then ran:
/etc/init.d/network restart
/etc/init.d/named restart

When I issue an nslookup example.com ON the dns server, I get the exact
IP I want to do testing on.
But when I do nslookup example.com on the client's machine, the website
resolves to another IP (the one set in the initial public dns records).


Are there any other changes I need to make so that the DNS server
redirects its requests to the new public dns?



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Ross Walker
On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:

> Hi,
>
>> 20 fields or columns is really nothing. BUT that's dependent on the
>> type of data being inserted.
>
> 20 was an arbitrary number :)
>
>> Ok, so break the one table down and create 2 or more; then you will
>> have "Joins" & clustered indexes, thus possibly slowing you down more.
>> That is greatly dependent on your select, delete, and update scripts.
>
> That was the reason the original developer gave for having these massive
> rows! Admittedly it is easier to read but when each row also contains
> text/blob fields, they tend to grow rather big. Some users have been
> complaining the server seems to be getting sluggish so I'm trying to
> plan ahead and make changes before it becomes a real problem.

Split the TEXT/BLOB data out of the primary table into tables of their
own, indexed to the primary table by its key column.
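
For instance, roughly (a sketch; the database, table and column names
here are made up, so adapt them to the real schema):

mysql -u root -p appdb <<'EOF'
CREATE TABLE article_body (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- same key as the primary table
  body       TEXT,
  FOREIGN KEY (article_id) REFERENCES article (id)
) ENGINE=InnoDB;
EOF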

>> Possibly very correct, but Nate is very correct on how you are
>> accessing the DB, i.e. direct i/o, also.  Your fastest access comes in
>> optimized SPROCS and Triggers and TSQL.  Slam enough memory into the
>> server and load it in memory.
>
> It's an old server with all slots populated so adding memory is not an
> option. I thought of doing an image and porting it into a VM on a
> newer/faster machine. But then at the rate this client's usage is
> growing, I foresee that as simply delaying the inevitable.

Think about distributing the parts to different boxes as necessary.
You can start with the DBMS, which is the logical candidate.

>> If speed is what you're after, why are you worried about VFS?
>> CentOS does support Raw Disk Access (no filesystem).
>
> To be honest, I don't really care about VFS since I didn't know it
> existed until I started looking up Linux file/disk caching :D
>
> So I assumed that was what PHP and DBMS like MySQL/Postgresql would be
> working through. It made sense since they wouldn't need to worry about
> what filesystem was really used.

On the DBMS backend, give it plenty of memory, good storage for the  
workload and good networking.

On the Apache/PHP side, look for a good DBMS inter-connect and a PHP
caching module, and of course enough CPU for the PHP code and enough
network for the Apache+DBMS inter-connect.

If you wanted to split it up even more you could look into some sort  
of PHP distributed cache/processing system and have PHP processed  
behind Apache.

-Ross

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread m . roth
> fred smith wrote:
>> On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
>>> From: Anas Alnaffar 
>>>> I tried to run this command
>>>> find -name "*.access*" -mtime +2 -exec rm {} \;
>>>> and I have the same error message
>>> How many "*.access*" are there...?
>>>
>> if there are so many that you're finding the previously suggested
>> techniques difficult to use, you can try the brute-force approach I
>> sometimes use...
>
> It actually shouldn't matter.  As long as the wildcards are quoted on the
> command line, you shouldn't get an error from too many files.  I suspect
> the command that was typed wasn't exactly what is shown above.

First, I don't see the path there, which *must* come after the command.

Also, I don't believe that "" will work - the shell will interpret that.
I think you need '', or, what I always use, \.  So if I were typing it,
I'd have:
$ find . -name \*.access\* -mtime +2 -exec rm {} \;

  mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] DNS issue.. help ?!

2010-01-25 Thread Roland Roland
Hi All,

I have DNS configured on my CentOS 5.2 server.

It's all working fine. Right now I want to change the main public dns
from one IP to another to do some testing (the new public dns IP has
records which the old one doesn't have, and it's done as such for testing).

So I went into /etc/resolv.conf and changed the first nameserver to the
NEW public DNS, then ran:
/etc/init.d/network restart
/etc/init.d/named restart

When I issue an nslookup example.com ON the dns server, I get the exact
IP I want to do testing on.
But when I do nslookup example.com on the client's machine, the website
resolves to another IP (the one set in the initial public dns records).


Are there any other changes I need to make so that the DNS server
redirects its requests to the new public dns?



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread Les Mikesell
fred smith wrote:
> On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
>> From: Anas Alnaffar 
>>> I tried to run this command
>>> find -name "*.access*" -mtime +2 -exec rm {} \;
>>> and I have the same error message
>> How many "*.access*" are there...?
>>
>> JD
> 
> if there are so many that you're finding the previously suggested
> techniques difficult to use, you can try the brute-force approach I
> sometimes use...

It actually shouldn't matter.  As long as the wildcards are quoted on the 
command line, you shouldn't get an error from too many files.  I suspect the 
command that was typed wasn't exactly what is shown above.

-- 
   Les Mikesell
lesmikes...@gmail.com


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Too much cpu wait on nfs server when we need to read data on it

2010-01-25 Thread fabien faye
Hi,

You can find below the spec of my server:

cat /proc/cpuinfo 
processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 37
model name  : AMD Opteron(tm) Processor 248
stepping: 1
cpu MHz : 2200.034
cache size  : 1024 KB
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 
3dnow pni lahf_lm
bogomips: 4400.06
TLB size: 1024 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor   : 1
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 37
model name  : AMD Opteron(tm) Processor 248
stepping: 1
cpu MHz : 2200.034
cache size  : 1024 KB
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 
3dnow pni lahf_lm
bogomips: 4399.42
TLB size: 1024 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

free
             total       used       free     shared    buffers     cached
Mem:       2059236    2049896       9340          0       9800    1874828
-/+ buffers/cache:     165268    1893968
Swap:      4095992        112    4095880


Linux 1-NFS.domain.com 2.6.18-164.9.1.el5 #1 SMP Tue Dec 15 20:57:57 EST 2009 
x86_64 x86_64 x86_64 GNU/Linux

  --- Physical volume ---
  PV Name   /dev/sdc1
  VG Name   vgspace
  PV Size   4.55 TB / not usable 1.97 MB
  Allocatable   yes 
  PE Size (KByte)   4096
  Total PE  1192067
  Free PE   795267
  Allocated PE  396800
   
  --- Physical volume ---
  PV Name   /dev/sdd1
  VG Name   vgspace
  PV Size   4.55 TB / not usable 1.97 MB
  Allocatable   yes 
  PE Size (KByte)   4096
  Total PE  1192067
  Free PE   808067
  Allocated PE  384000

I need ext4 for files bigger than 2 TB.

/dev/mapper/vgspace-vol1 on /vol/vol1 type ext4 (rw,noatime)
/dev/mapper/vgspace-vol2 on /vol/vol2 type ext4 (rw,noatime)

# tune2fs -l /dev/mapper/vgspace-vol1
tune2fs 1.39 (29-May-2006)
tune2fs: Filesystem has unsupported feature(s) while trying to open 
/dev/mapper/vgspace-vol1
Couldn't find valid filesystem superblock.
# tune2fs -l /dev/mapper/vgspace-vol2
tune2fs 1.39 (29-May-2006)
tune2fs: Filesystem has unsupported feature(s) while trying to open 
/dev/mapper/vgspace-vol2
Couldn't find valid filesystem superblock.

nfs exports:

/vol/vol1 192.168.*.0/255.255.255.0(async,no_subtree_check,rw,no_root_squash)
/vol/vol2 192.168.*.0/255.255.255.0(async,no_subtree_check,rw,no_root_squash)

Best regards

Fabien FAYE






- Original Message -
From: "Jim Perrin" 
To: "CentOS mailing list" 
Sent: Monday, January 25, 2010 2:27:41 PM
Subject: Re: [CentOS] Too much cpu wait on nfs server when we need to read  
data on it

On Mon, Jan 25, 2010 at 8:39 AM, fabien faye  wrote:
> Hi,
>
> I have a big server with 24 disks on 2 3ware cards.

Define 'big'. You're not giving much in terms of memory or cpu specs.

> When I write data on my nfs server everything is fine, but when I want to read
> data I have a lot of cpu wait.

Okay. Have you looked at any performance tuning or elevator/scheduler
adjustments? File system tuning? What have you tried on your own so
far to debug this?

> About the file system, I use ext4 on an LVM partition.

Why? Is there a performance reason for you to use ext4 vs ext3?

> Do you have any ideas about that?

Not based on the little information you've provided. It would help for
you to supply more detail, and the things you've already looked at.

-- 
During times of universal deceit, telling the truth becomes a revolutionary act.
George Orwell
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Too much cpu wait on nfs server when we need to read data on it

2010-01-25 Thread Jim Perrin
On Mon, Jan 25, 2010 at 8:39 AM, fabien faye  wrote:
> Hi,
>
> I have a big server with 24 disks on 2 3ware cards.

Define 'big'. You're not giving much in terms of memory or cpu specs.

> When I write data on my nfs server everything is fine, but when I want to read
> data I have a lot of cpu wait.

Okay. Have you looked at any performance tuning or elevator/scheduler
adjustments? File system tuning? What have you tried on your own so
far to debug this?

> About the file system, I use ext4 on an LVM partition.

Why? Is there a performance reason for you to use ext4 vs ext3?

> Do you have any ideas about that?

Not based on the little information you've provided. It would help for
you to supply more detail, and the things you've already looked at.

-- 
During times of universal deceit, telling the truth becomes a revolutionary act.
George Orwell
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Automatically check into SVN?

2010-01-25 Thread Greg Bailey
Mr Gabriel wrote:
> I would like to automatically check in a directory to SVN, to maintain 
> the changes that area made to those txt files over time. These files are 
> configuration files, and it would be good to be able to revert back by 
> simply checking out the older versions. But I would to check in the 
> files every files minutes, and I'm sure SVN will not use up any disc 
> space, unless there are any changes.
>
> I already have an svn server up and running in production -- Any help

You could take a look at FSVS, its description appears similar to your 
situation:

http://fsvs.tigris.org/

-Greg

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Too much cpu wait on nfs server when we need to read data on it

2010-01-25 Thread fabien faye
Hi,

I have a big server with 24 disks on 2 3ware cards.

When I write data on my nfs server everything is fine, but when I want to read
data I have a lot of cpu wait.

[r...@nfs /]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  1112   9592   7140 187949600   621  118062  0  2 86 11  0
 0  9112  10668   7148 18777600084 21600 1926  454  0  3 49 49  0
 1  3112   9308   7156 187909200  1400  1580 2123 2009  0  2 50 48  0
 0  0112   9432   7156 187947600  3664 0 2275 2862  0  3 81 17  0
 0  1112   9212   7172 187958800  219632 2132 2293  0  2 66 32  0
 1  0112   9848   7180 187886400  1268  4168 1852 1934  0  2 84 14  0
 0  7112  11208   7180 18769360092 13124 1339  355  0  2 50 48  0
 0  0112  10708   7188 187751200   62816 1552 1014  0  1 84 16  0
 0  1112  10344   7188 187804400   472 0 1334  721  0  1 89 10  0
 0  1112  10372   7196 187813600  193632 2268 2850  0  2 50 48  0
 0  0112  10336   7192 187831200  2220 0 2308 2966  0  2 51 47  0
 0  0112   9716   7200 187901200   69628 1484  885  0  1 89 11  0
 0  2112   9400   7200 187928000   236 0 2112 5819  0  8 75 18  0
 0  0112   9980   7196 187881600  136020 1840 1743  0  1 83 15  0
 0 15112   9168   7196 187958000   768 0 1423  728  0  1 91  9  0
 0  0112  10756   7200 187775200  406832 2788 4225  0  4 70 26  0

I have tuned the nfs server with these options:

RPCNFSDARGS="-N 2"
RPCNFSDARGS="-N 4"
MOUNTD_NFS_V3="yes"
RPCNFSDCOUNT=160

About the file system, I use ext4 on an LVM partition.

Do you have any ideas about that?

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread fred smith
On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
> From: Anas Alnaffar 
> > I tried to run this command
> > find -name "*.access*" -mtime +2 -exec rm {} \;
> > and I have the same error message
> 
> How many "*.access*" are there...?
> 
> JD

if there are so many that you're finding the previously suggested
techniques difficult to use, you can try the brute-force approach I
sometimes use...

run:
ls > list

then edit the file (list) with a decent text editor, one in which
you can use one command to place text at the beginning of every line
such that every line then turns out to read:

rm file1
rm file2

etc, as well as removing any lines for files you do NOT want to remove.

if you have 'vi', this command will do the edits for you:
":1,$s/^/rm /"

then make the file executable:

chmod a+x list

then run it:

./list
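
If you don't need to review the list first, the same idea works as a
one-liner (same caveat as above: filenames containing spaces will break it):

ls | sed 's/^/rm /' | sh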
-- 
 Fred Smith -- fre...@fcshome.stoneham.ma.us -
   But God demonstrates his own love for us in this: 
 While we were still sinners, 
  Christ died for us.
--- Romans 5:8 (niv) --
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Noob Centos Admin
Hi,

> 20 fields or columns is really nothing. BUT that's dependent on the type
> of data being inserted.

20 was an arbitrary number :)

> Ok, so break the one table down and create 2 or more; then you will have
> "Joins" & clustered indexes, thus possibly slowing you down more.  That
> is greatly dependent on your select, delete, and update scripts.

That was the reason the original developer gave for having these massive
rows! Admittedly it is easier to read but when each row also contains
text/blob fields, they tend to grow rather big. Some users have been
complaining the server seems to be getting sluggish so I'm trying to
plan ahead and make changes before it becomes a real problem.

> Possibly very correct, but Nate is very correct on how you are accessing
> the DB, i.e. direct i/o, also.  Your fastest access comes in optimized SPROCS
> and Triggers and TSQL.  Slam enough memory into the server and load it
> in memory.

It's an old server with all slots populated so adding memory is not an
option. I thought of doing an image and porting it into a VM on a
newer/faster machine. But then at the rate this client's usage is
growing, I foresee that as simply delaying the inevitable.


> If speed is what you're after, why are you worried about VFS?
> CentOS does support Raw Disk Access (no filesystem).

To be honest, I don't really care about VFS since I didn't know it
existed until I started looking up Linux file/disk caching :D

So I assumed that was what PHP and DBMS like MySQL/Postgresql would be
working through. It made sense since they wouldn't need to worry about
what filesystem was really used.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways

2010-01-25 Thread Noob Centos Admin
Hi,

> If you want a fast database forget about file system caching,
> use Direct I/O and put your memory to better use - application
> level caching.

The web application is written in PHP and runs off MySQL and/or
Postgresql. So I don't think I can access the raw disk data directly,
nor do I think it would be safe since that bypasses the DBMS's checks.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] authentication failure

2010-01-25 Thread fabien faye
Hi,

Does no one know how to automatically send the fail2ban report to the email
address present in the whois?
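
Something along these lines might work as a custom action (a rough,
untested sketch; the action file name is made up, and it assumes whois
and a working mail command on the box):

# action.d/mail-abuse.local
[Definition]
actionstart =
actionstop =
actioncheck =
actionunban =
# mail the full whois output to the first abuse-looking address found in it
actionban = whois <ip> | grep -i abuse | grep -oE '[[:alnum:]._%+-]+@[[:alnum:].-]+' | head -n1 | xargs -r -I ADDR sh -c 'whois <ip> | mail -s "fail2ban: banned <ip>" ADDR'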

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat

- Original Message -
From: "fabien faye" 
To: "CentOS mailing list" 
Sent: Saturday, January 23, 2010 8:30:36 PM
Subject: Re: [CentOS] authentication failure

Hi,

I am a fail2ban user and I am very interested in having a mail sent
automatically to the ip provider of the brute-force ip address.
Do you know if it is possible with fail2ban, or if we have to rewrite an
action in fail2ban?

Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat


- Original Message -
From: "Athmane Madjoudj" 
To: "CentOS mailing list" 
Sent: Saturday, January 23, 2010 18:20:01
Subject: Re: [CentOS] authentication failure

On Sat, Jan 23, 2010 at 6:14 PM, madunix  wrote:
> I noticed that my server has had a lot (ca. 1000x) of auth failures per
> day, from different IPs allocated in China / Romania and the Netherlands,
> for the last 3 days.
> It looks to me like somebody was trying to get into the server by guessing
> my password by brute force.
> What would be the best way to stop this attack, and how? The server is
> running apache, mysql and ftp.
> PORT     STATE SERVICE
> 21/tcp   open  ftp
> 80/tcp   open  http
> 443/tcp  open  https
> 3306/tcp open  mysql
> ...
> Jan 22 16:07:14 user vsftpd(pam_unix)[17462]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=195.95.228.150
> Jan 22 16:07:16 user vsftpd(pam_unix)[16737]: check pass; user unknown
> Jan 22 16:07:16 user vsftpd(pam_unix)[16737]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=195.95.228.150
> Jan 22 16:07:17 user vsftpd(pam_unix)[17462]: check pass; user unknown
> Jan 23 17:23:52 user vsftpd(pam_unix)[20524]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=221.7.40.47
> Jan 23 17:23:55 user vsftpd(pam_unix)[20524]: check pass; user unknown
> Jan 23 17:23:55 user vsftpd(pam_unix)[20524]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=221.7.40.47
> Jan 23 17:23:59 user vsftpd(pam_unix)[20524]: check pass; user unknown
> Jan 23 17:24:58 user vsftpd(pam_unix)[20524]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=221.7.40.47
> Jan 23 00:37:47 user vsftpd(pam_unix)[1791]: check pass; user unknown
> Jan 23 00:37:47 user vsftpd(pam_unix)[1791]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=217.23.14.168
> Jan 23 00:38:06 user vsftpd(pam_unix)[1791]: check pass; user unknown
> Jan 23 00:38:06 user vsftpd(pam_unix)[1791]: authentication failure;
> logname= uid=0 euid=0 tty= ruser= rhost=217.23.14.168
> ...
>
> Thanks
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>

Maybe a brute force attack; try installing a HIDS like:

APF/BFD: http://www.rfxn.com/projects/advanced-policy-firewall/
http://www.rfxn.com/projects/brute-force-detection/

Fail2ban: http://www.fail2ban.org/

Fail2ban is available in EPEL repos.
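
Once EPEL is set up, installing and starting it is just:

yum install fail2ban
service fail2ban start
chkconfig fail2ban on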

HTH
-- 
Athmane Madjoudj
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread John Doe
From: Anas Alnaffar 
> I tried to run this command
> find -name "*.access*" -mtime +2 -exec rm {} \;
> and I have same error message

How many "*.access*" are there...?

JD


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] The directory that I am trying to clean up is huge

2010-01-25 Thread Tony Mountifield
In article ,
Kevin Krieser  wrote:
> 
> On Jan 23, 2010, at 6:45 AM, Robert P. J. Day wrote:
> 
> > On Sat, 23 Jan 2010, Marcelo M. Garcia wrote:
> >  the find ... -exec variation will invoke a new "rm" command for
> > every single file it finds, which will simply take more time to run.
> > beyond that, the effect should be the same.
> 
> 
> Unless there are files or directories with spaces in them, in which case the
> xargs variant can fail.

That's what -print0 is for, together with the -0 option to xargs:

find dir1 dir2 -name '*.foo' -print0 | xargs -0 rm

Cheers
Tony
-- 
Tony Mountifield
Work: t...@softins.co.uk - http://www.softins.co.uk
Play: t...@mountifield.org - http://tony.mountifield.org
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Automatically check into SVN?

2010-01-25 Thread Giulio Troccoli

-Original Message-


> From: centos-boun...@centos.org
> [mailto:centos-boun...@centos.org] On Behalf Of Mr Gabriel
> Sent: 25 January 2010 10:47
> To: centos@centos.org
> Subject: [CentOS] Automatically check into SVN?
>
> I would like to automatically check in a directory to SVN, to
> maintain the changes that are made to those txt files over
> time. These files are configuration files, and it would be
> good to be able to revert back by simply checking out the
> older versions. But I would like to check in the files every
> few minutes, and I'm sure SVN will not use up any disc space
> unless there are any changes.
>
> I already have an svn server up and running in production -- Any help?

You might have more luck on the us...@subversion.apache.org ML.

Maybe you can have a cron job that checks the status of the WC and, if there
are any changes, commits them. But you run the risk of committing changes
while you're still working on the files.
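
An untested sketch of such a cron entry, for /etc/crontab (/etc/myconfigs
stands in for your directory, which is assumed to already be a checked-out
working copy):

*/5 * * * * root cd /etc/myconfigs && svn add -q --force . && svn commit -q -m "periodic auto-commit"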

Why can't you manually commit the changes?

Giulio
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Automatically check into SVN?

2010-01-25 Thread Mr Gabriel
I would like to automatically check in a directory to SVN, to maintain
the changes that are made to those txt files over time. These files are
configuration files, and it would be good to be able to revert back by
simply checking out the older versions. But I would like to check in the
files every few minutes, and I'm sure SVN will not use up any disc
space unless there are any changes.

I already have an svn server up and running in production -- Any help?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Username and Password

2010-01-25 Thread Arvind P R
Is it a normal user whose passwd you forgot?
If yes, login as the root user, type "passwd" followed by the username,
and change the passwd.
If you don't have root user a/c access:

Restart the PC, and at the grub prompt select the *second* option and type *e*.
Append the kernel line with the word *single*,
hit *enter* and then hit *b*.
The machine will boot into single user mode.
Here you can change the passwd of the root user:
passwd
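
The kernel line you end up booting looks something like this (the kernel
version and root device here are only examples):

kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 single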


Cheers,
Arvind

On Mon, Jan 25, 2010 at 1:31 PM, Michael Makotore wrote:

>  Could you please help me? I forgot my username and password.
>
>
>
> Regards
>
>
>
> Michael
>
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
>
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Username and Password

2010-01-25 Thread Michael Makotore
Could you please help me? I forgot my username and password.

 

Regards

 

Michael

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos