On Tue, 2010-01-26 at 13:41 +0800, Christopher Chan wrote:
> JohnS wrote:
> > On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> >> Are complicated relationships being stored in postgresql and not in
> >> mysql? I do not know how things are now but mysql has a history of only
> >> being good for simple selects.
On Tuesday, 26.01.2010 at 07:04 +0100, Alberto García Gómez wrote:
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?
>
> Fraternal greetings
Alberto, please do not use "reply to" and then change the topic. Instead
use "new mail" in your mail client to start a new thread.
On Tue, 2010-01-26 at 01:04 -0500, Alberto García Gómez wrote:
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?
>
yum search unrar
> Hi fellows, how can I unrar (.rar of course) from my console?
RPMForge has an unrar package.
Neil
--
Neil Aggarwal, (281)846-8957, http://UnmeteredVPS.net/cpanel
cPanel/WHM preinstalled on a virtual server for only $40/month!
No overage charges, 7 day free trial, PayPal, Google Checkout
On Tue, 2010-01-26 at 01:04 -0500, Alberto García Gómez wrote:
>
> Hi fellows, how can I unrar (.rar of course) from my console? What
> package do I need?
unrar is in the rpmfusion-nonfree repository.
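For anyone searching the archives later, a minimal sketch of installing and
using it (this assumes the relevant repo is already enabled; the archive name
is just an example):

# install the package, then extract an archive preserving paths
yum install unrar
unrar x archive.rar
# list the contents without extracting
unrar l archive.rar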
--
MELVILLE THEATRE ~ Melville Sask ~ http://www.melvilletheatre.com
JohnS wrote:
> On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
>> Are complicated relationships being stored in postgresql and not in
>> mysql? I do not know how things are now but mysql has a history of only
>> being good for simple selects.
>
> Selects can get very uppity for MySQL, as in "VIEWS".
Greetings,
On Mon, Jan 25, 2010 at 8:05 PM, Roland Roland wrote:
> Hi All,
>
> but when I do nslookup example.com on the client's machine, the website
> resolves to another IP (the one set in the initial public DNS records)
>
>
Could it be because of the DNS cache on the client side?
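If client-side caching is the culprit, flushing it is a quick test. A hedged
sketch (nscd may not even be installed on the client; the second command
applies only to Windows clients):

# on a Linux client running nscd
service nscd restart
# on a Windows client
ipconfig /flushdns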
Regards
Raj
On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> Are complicated relationships being stored in postgresql and not in
> mysql? I do not know how things are now but mysql has a history of only
> being good for simple selects.
Selects can get very uppity for MySQL, as in "VIEWS". They c
At Mon, 25 Jan 2010 18:49:19 -0800 (PST) CentOS mailing list
wrote:
>
> Hi
>
> Is there any open source software that can open QuickTime?
>
> and can it convert QuickTime to other movie formats too?
mplayer / mencoder (from the rpmforge repo)
>
> Thank you
>
adrian kok wrote:
> Hi
>
> Is there any open source software that can open QuickTime?
>
> and can it convert QuickTime to other movie formats too?
>
> Thank you
ffmpeg2theora does a good job at converting the H.264 that modern
QuickTime uses into Ogg Theora.
VLC does a good job at playing just about any format.
Yo
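For the archives, hedged example invocations of both tools (the input and
output file names here are just placeholders):

# convert an H.264 QuickTime file to Ogg Theora
ffmpeg2theora input.mov -o output.ogv
# or transcode it with mencoder from the rpmforge repo
mencoder input.mov -ovc lavc -oac mp3lame -o output.avi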
Hi
Is there any open source software that can open QuickTime?
And can it convert QuickTime to other movie formats too?
Thank you
On Mon, Jan 25, 2010 at 3:35 PM, Carlos Santana wrote:
> Hi,
>
> I have autofs configured to mount home dirs from NFS. All user
> account lookups are done using LDAP. All is working fine with this
> setup. Now I need to create a local user account and have its home dir
> also on the local system. So I added a new user account and changed
> auto.home as follows:
On Monday 25 January 2010 19:35:07 Carlos Santana wrote:
> Now I need to create a local user account and have its home dir
> also on local system
If it's a local user you want (with its files on the local system), why are you
using the autofs facility? Isn't it just a matter of creating the user locally?
-----Original Message-----
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of James B. Byrne
Sent: Monday, January 25, 2010 10:06 AM
To: Robert Nichols
Cc: centos@centos.org
Subject: Re: [CentOS] The directory that I am trying to clean up is huge
On Mon, January 25,
Noob Centos Admin wrote:
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.
This is what I use for MySQL (among other things)
log-querie
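The directive above is cut off; it was presumably one of the slow-query
logging options. A hedged my.cnf sketch along those lines (option names are
from 5.0-era MySQL and may differ on other versions):

[mysqld]
# log statements slower than 2 seconds
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2
# also log queries that do full table scans
log-queries-not-using-indexes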
Noob Centos Admin wrote:
> Hi,
>
>> If you want a fast database forget about file system caching,
>> use Direct I/O and put your memory to better use - application
>> level caching.
>
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.
On Jan 25, 2010, at 7:02 PM, JohnS wrote:
>
> On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:
>
>>> Instead look at the way your PHP code is encoding the BLOB data, and
>>> if you really need the speed, since now it's a MySQL DB, make your own
>>> custom C API for mysql to encode the BLOB.
MHR wrote:
> I read in another forum that CentOS has problems with Firewire drives,
> something along the lines of whenever a new kernel is booted, the
> drives are gone.
>
> Can anyone elaborate on that? I don't use Firewire drives (at all,
> yet), but information about this would be nice to have.
On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:
> > Instead look at the way your PHP code is encoding the BLOB data, and
> > if you really need the speed, since now it's a MySQL DB, make your own
> > custom C API for mysql to encode the BLOB. The DB can do this much
> > faster that way
On Jan 25, 2010, at 6:22 PM, JohnS wrote:
>
> On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
>> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin
>> wrote:
>>
>>> Hi,
>>>
> 20 fields or columns is really nothing. BUT that's dependent on the type
> of data being inserted.
>>>
>>>
On Jan 25, 2010, at 10:48 AM, Corey Chandler
wrote:
> On 1/25/10 9:33 AM, Robert Nichols wrote:
>>
>> When using the -exec action with the ";" terminator, the constructed
>> command line always contains the path for exactly one matched file.
>> Try it. Run "find /usr -exec echo {} ;" and see that you get one
>> path per line and output begins almost instantly.
Hi,
I have a autofs configured to mount home dir from NFS. All user
accounts lookup is done using LDAP. All is working fine with this
setup. Now I need to create a local user account and have its home dir
also on local system. So I added a new user account and changed
auto.home as follows:
test1
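The map entry above got cut off. For context, a hedged sketch of what a
local-home entry in auto.home can look like (the paths, server name, and the
bind-style entry are assumptions for illustration, not the poster's actual map):

# /etc/auto.home
# wildcard entry: everyone else comes from the NFS server
*      nfsserver:/export/home/&
# local user: serve the home dir from the local disk via a bind mount
test1  -fstype=bind  :/local/home/test1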
On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin
> wrote:
>
> > Hi,
> >
> >> 20 fields or columns is really nothing. BUT that's dependent on the
> >> type of data being inserted.
> >
> > 20 was an arbitrary number :)
> >
> >> Ok, so break the one table down and create 2 or more; then you will
> >> have "Joins" & clustered indexes, thus slowing you down more, possibly.
Hi,
I think I have found the problem. It's rather annoying. My host is a VPS
hosted at alfahosting.de. They have migrated my guest to another host system
recently, as there seemed to be problems with the old host. By migrating they
seem to have messed up the RPM package database and some other things.
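If the RPM database itself is corrupt, rebuilding it is the usual first step.
A hedged sketch (generic advice rather than anything confirmed for this VPS;
back up /var/lib/rpm before touching it):

# back up the rpm database, clear stale locks, rebuild, then retry yum
cp -a /var/lib/rpm /var/lib/rpm.backup
rm -f /var/lib/rpm/__db.*
rpm --rebuilddb
yum clean all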
2010/1/25 Matt :
>> Sorry about being a bit off-topic, but I am looking for a reliable (not
>> free) secondary DNS provider.
>
> Why not just rent a VPS and install CentOS and use it as your
> secondary. You would have total control then and it would be cheap
> and reliable.
Well, my work still costs a lot of
On Jan 25, 2010, at 4:21 PM, Eero Volotinen wrote:
Sorry about being a bit off-topic, but I am looking for a reliable (not free)
secondary DNS provider.
I've had consistently good experiences with RollerNet (http://rollernet.us).
-steve
--
If this were played upon a stage now, I could condemn it as an improbable fiction.
> Sorry about being a bit off-topic, but I am looking for a reliable (not
> free) secondary DNS provider.
Why not just rent a VPS and install CentOS and use it as your
secondary. You would have total control then and it would be cheap
and reliable.
Matt
2010/1/25 Tobias Weisserth :
> Thanks for the hint, but still no luck:
> [r...@hostname ~]# yum clean all
> Loaded plugins: fastestmirror, priorities
> Cleaning up Everything
> Cleaning up list of fastest mirrors
> [r...@hostname ~]# yum update
> Loaded plugins: fastestmirror, priorities
> Determining fastest mirrors
Eero Volotinen wrote:
> Sorry about being a bit off-topic, but I am looking for a reliable (not
> free) secondary DNS provider.
My company uses Dynect for primary and secondary, though they can
do secondary-only as well:
http://dyn.com/dynect
We also use their DNS-based global load balancing.
So far 100% uptime.
Thanks for the hint, but still no luck:
[r...@hostname ~]# yum clean all
Loaded plugins: fastestmirror, priorities
Cleaning up Everything
Cleaning up list of fastest mirrors
[r...@hostname ~]# yum update
Loaded plugins: fastestmirror, priorities
Determining fastest mirrors
* addons: mirror.netcol
Sorry about being a bit off-topic, but I am looking for a reliable (not free)
secondary DNS provider.
--
Eero
On Monday, 25.01.2010 at 19:48 +0100, Alan Hoffmeister wrote:
> Hello guyz!
>
> I'm new here, and this is my very first trouble...
>
> I need a script that will backup & compress the folder /media/system into
> the folder /media/backups
>
> But that's not the problem, I need that only the last 7 backups (last 7
> days, yeah I know, cronjob...) will stay in that folder...
I haven't seen this problem with my drive. I am using CentOS 5.4, and the
configuration change that I found necessary was to uncomment the entries in
the blacklist file
/etc/modprobe.d/blacklist-firewire
regards-
Rick
On Mon, Jan 25, 2010 at 11:52 AM, MHR wrote:
> I read in another forum that CentOS has probl
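For reference, a hedged sketch of what the entries in that file typically
look like (the exact module names in the CentOS 5.4 file are an assumption
here, not copied from it):

# /etc/modprobe.d/blacklist-firewire -- uncomment to keep these
# firewire modules from being loaded automatically
blacklist firewire-ohci
blacklist firewire-sbp2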
I read in another forum that CentOS has problems with Firewire drives,
something along the lines of whenever a new kernel is booted, the
drives are gone.
Can anyone elaborate on that? I don't use Firewire drives (at all,
yet), but information about this would be nice to have.
Thanks.
mhr
Hi Guys,
On 25/01/10 18:56, Lennart Andersen wrote:
> How about something like this..
>
Don't top-post! And trim the replies. Take a look at
http://wiki.centos.org/GettingHelp/ListInfo for some basic guidelines
that we ask everyone to follow here on the lists.
--
Karanbir Singh
kbsi.
How about something like this..
#!/bin/bash
# This script makes a backup of the files on the primary server directory.
# Change the values of the variables to make the script work:
BACKUPDIR=/data/
BACKUPFILES=*.cdf
GZTARFILE=/var/tmp/data_$(date +%F).tar.gz
SERVER=mcastasp1
REMOTEDIR=/home/
# create the archive and copy it to the remote server (assumed continuation)
cd "$BACKUPDIR" || exit 1
tar czf "$GZTARFILE" $BACKUPFILES
scp "$GZTARFILE" "$SERVER:$REMOTEDIR"
Hello guyz!
I'm new here, and this is my very first trouble...
I need a script that will backup & compress the folder /media/system into
the folder /media/backups
But that's not the problem, I need that only the last 7 backups (last 7
days, yeah I know, cronjob...) will stay in that folder...
Thanks!
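A minimal sketch of the whole job, assuming a daily cron entry is acceptable
and that pruning by age is fine (the paths match the request above; the
archive name pattern is an assumption):

#!/bin/bash
# daily backup of /media/system into /media/backups, keeping 7 days
tar czf /media/backups/system_$(date +%F).tar.gz /media/system
# delete archives older than 7 days
find /media/backups -name 'system_*.tar.gz' -mtime +7 -delete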
James B. Byrne wrote:
> On Mon, January 25, 2010 10:31, Robert Nichols wrote:
>> Now if the "{}" string appears more than once then the command line
>> contains that path more than once, but it is essentially impossible
>> to exceed the kernel's MAX_ARG_PAGES this way.
>>
>> The only issue with
On Mon, January 25, 2010 10:31, Robert Nichols wrote:
>
> Now if the "{}" string appears more than once then the command line
> contains that path more than once, but it is essentially impossible
> to exceed the kernel's MAX_ARG_PAGES this way.
>
> The only issue with using "-exec command {} ;" f
On 1/25/10 9:33 AM, Robert Nichols wrote:
>
> When using the -exec action with the ";" terminator, the constructed
> command line always contains the path for exactly one matched file.
> Try it. Run "find /usr -exec echo {} ;" and see that you get one
> path per line and output begins almost instantly.
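A quick way to see the contrast being described, hedged in that the actual
timings depend on the size of the tree (';' runs one command per file,
'+' batches many paths per command):

# one echo process per matched file: output starts almost immediately
find /usr -exec echo {} \;
# paths batched xargs-style: far fewer processes, output comes in chunks
find /usr -exec echo {} +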
James B. Byrne wrote:
> On Sat, January 23, 2010 20:21, Robert Nichols wrote:
>> Robert Heller wrote:
>
>> Gosh, then I guess the manpage for 'find' must be totally wrong where
>> it says:
>>
>>     -exec command ;
>>         ...
>>         The specified command is run once for each matched file.
On 25/01/2010 13:04, Greg Bailey wrote:
> Mr Gabriel wrote:
>
>> I would like to automatically check in a directory to SVN, to maintain
>> the changes that are made to those txt files over time. These files are
>> configuration files, and it would be good to be able to revert back by
>> simply checking out the older versions.
Hi
Another thing:
What result do you expect? Could you please run a dig query against your DNS
server and on the client side:
dig example.com @dnsserver
dig example.com
Best regards
Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat
- Original Message
Anas Alnaffar wrote:
> I tried to run this command
>
> find -name "*.access*" -mtime +2 -exec rm {} \;
>
Should have been: find ./ -name \*.access\* -mtime +2 -exec rm -f {} \;
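If the error in question was "Argument list too long", another common
workaround, sketched with hedged flags (GNU find and xargs assumed):

# NUL-separated so odd filenames survive; rm runs once per large batch
find . -name '*.access*' -mtime +2 -print0 | xargs -0 rm -f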
On Sat, January 23, 2010 20:21, Robert Nichols wrote:
> Robert Heller wrote:
>
> Gosh, then I guess the manpage for 'find' must be totally wrong where
> it says:
>
>     -exec command ;
>         ...
>         The specified command is run once for each matched file.
>
Not
Hi,
If you are working on a real TLD, you have to change your DNS server on the
DNS provider side.
If you do a whois on your domain name:
Example : http://generationip.com/whois?Whois=generationip.com
Whois Server Version 2.0
Domain names in the .com and .net domains can now be registered
wi
On Jan 25, 2010, at 6:41 AM, Noob Centos Admin
wrote:
> Hi,
>
>> 20 fields or columns is really nothing. BUT that's dependent on the
>> type of data being inserted.
>
> 20 was an arbitrary number :)
>
>> Ok, so break the one table down and create 2 or more; then you will have
>> "Joins" & clustered indexes, thus slowing you down more, possibly.
> fred smith wrote:
>> On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
>>> From: Anas Alnaffar
>>>> I tried to run this command
>>>> find -name "*.access*" -mtime +2 -exec rm {} \;
>>>> and I have same error message
>>> How many "*.access*" are there...?
>>>
>> if there are so many that
Hi All,
I have DNS configured on my CentOS 5.2 server.
It's all working fine; right now I want to change the main public DNS
from one IP to another to do some testing (the new public DNS IP has
records which the old one doesn't have, and it's done as such for testing),
so I got into /etc/resolv.c
fred smith wrote:
> On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
>> From: Anas Alnaffar
>>> I tried to run this command
>>> find -name "*.access*" -mtime +2 -exec rm {} \;
>>> and I have same error message
>> How many "*.access*" are there...?
>>
>> JD
>
> if there are so many that y
Hi,
You can find below the spec of my server:
cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 37
model name      : AMD Opteron(tm) Processor 248
stepping        : 1
cpu MHz         : 2200.034
cache size      : 1024 KB
fpu             :
On Mon, Jan 25, 2010 at 8:39 AM, fabien faye wrote:
> Hi,
>
> I have a big server with 24 disks on 2 3ware cards.
Define 'big'. You're not giving much in terms of memory or cpu specs.
> When I write data on my NFS server everything is fine, but when I want
> to read data I have a lot of CPU wait.
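Some hedged first steps for chasing read-side I/O wait (the device name and
readahead value below are placeholders to experiment with, not measurements
from this server):

# watch per-disk utilization and service times while reproducing the load
iostat -x 1
# check the current readahead (in 512-byte sectors) on one of the arrays
blockdev --getra /dev/sda
# try a larger readahead for streaming NFS reads
blockdev --setra 16384 /dev/sda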
Mr Gabriel wrote:
> I would like to automatically check in a directory to SVN, to maintain
> the changes that are made to those txt files over time. These files are
> configuration files, and it would be good to be able to revert back by
> simply checking out the older versions. But I would like to check in the
> files every
Hi,
I have a big server with 24 disks on 2 3ware cards.
When I write data on my NFS server everything is fine, but when I want to
read data I have a lot of CPU wait.
[r...@nfs /]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff
On Mon, Jan 25, 2010 at 03:14:54AM -0800, John Doe wrote:
> From: Anas Alnaffar
> > I tried to run this command
> > find -name "*.access*" -mtime +2 -exec rm {} \;
> > and I have same error message
>
> How many "*.access*" are there...?
>
> JD
if there are so many that you're finding the previo
Hi,
> 20 fields or columns is really nothing. BUT that's dependent on the type
> of data being inserted.
20 was an arbitrary number :)
> Ok, so break the one table down and create 2 or more; then you will have
> "Joins" & clustered indexes, thus slowing you down more, possibly. That
> is greatly dependent
Hi,
> If you want a fast database forget about file system caching,
> use Direct I/O and put your memory to better use - application
> level caching.
The web application is written in PHP and runs off MySQL and/or
Postgresql. So I don't think I can access the raw disk data directly,
nor do I think it would be safe since that bypasses the DBMS's checks.
Hi,
Does no one know how to automatically send a fail2ban report to the abuse
email address present in the whois?
Fabien FAYE
RHCE
www.generationip.com
Free network tools & HOWTO for centos and Redhat
- Original Message -
From: "fabien faye"
To: "CentOS mailing list"
Sent: Saturday, January 23, 2010 8:30
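In case it helps, a hedged sketch of the idea; whois output formats vary by
registry, so the grep pattern is an assumption, and fail2ban would have to
call this from a custom action with the banned IP as $1:

#!/bin/bash
# look up an abuse contact for a banned IP and mail a short report
IP="$1"
ABUSE=$(whois "$IP" | grep -iE 'abuse-mailbox|e-mail' | head -1 | awk '{print $NF}')
[ -n "$ABUSE" ] && echo "fail2ban banned $IP on $(hostname)" | \
    mail -s "Abuse report for $IP" "$ABUSE"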
From: Anas Alnaffar
> I tried to run this command
> find -name "*.access*" -mtime +2 -exec rm {} \;
> and I have same error message
How many "*.access*" are there...?
JD
In article ,
Kevin Krieser wrote:
>
> On Jan 23, 2010, at 6:45 AM, Robert P. J. Day wrote:
>
> > On Sat, 23 Jan 2010, Marcelo M. Garcia wrote:
> > the find ... -exec variation will invoke a new "rm" command for
> > every single file it finds, which will simply take more time to run.
> > beyond
>
-----Original Message-----
> From: centos-boun...@centos.org
> [mailto:centos-boun...@centos.org] On Behalf Of Mr Gabriel
>
I would like to automatically check in a directory to SVN, to maintain
the changes that are made to those txt files over time. These files are
configuration files, and it would be good to be able to revert back by
simply checking out the older versions. But I would like to check in the
files every
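One common pattern, sketched with hedged specifics (this assumes the
directory is already an SVN working copy; the path and schedule are
placeholders):

#!/bin/bash
# e.g. dropped into /etc/cron.daily to snapshot config files into SVN
cd /etc/myconfigs || exit 1
# pick up any new files, then commit whatever changed
svn add --force . -q
svn commit -q -m "automatic check-in $(date +%F)"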
Is it a normal user whose password you forgot?
If yes, log in as the root user and type
passwd <username>
to change the password.
If you don't have root account access,
restart the PC and at the grub prompt select the *second* option and type *e*,
append the kernel line with the word *single*,
hit *enter* and then hit *b*.
The machine will boot into single-user mode.
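Roughly what the recovery session looks like, with a hypothetical username
(grub menu labels vary by install):

# at the grub menu: highlight the kernel line, press 'e', append 'single',
# then press 'b' to boot; at the resulting root shell:
passwd michael    # hypothetical username; prompts for a new password
reboot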
Could you please help me? I forgot my username and password.
Regards
Michael