Re: [CentOS] OT: Why VM?

2011-05-27 Thread Rob Lines
On Fri, May 27, 2011 at 2:33 PM, James B. Byrne  wrote:
> I have been working off and on with Xen and KVM on a couple of test
> hosts for the past year or so, and while now everything seems to
> function as expected, more or less, I find myself asking the
> question: Why?
>
> We run our own servers at our own sites for our own purposes.  We do
> not, with a few small exceptions, host alien domains.  So, while
> rapidly provisioning or dynamically expanding a client's VM might be
> very attractive to a public hosting provider, that is not our
> business model at all.
>
> Why would a small company not in the public hosting business choose
> to employ VM technology?  What are the benefits over operating
> several individual small form factor servers or blades instead?  I
> am curious because what I can find on the net respecting VM use
> cases, outside of for public providers or application testing, seems
> to me mostly puff and smoke.
>
> This might be considered OT but since CentOS is what we use it seems
> to me best that I ask here to start.
>

I would have to say that the use case really depends on what you are
doing with your servers.

For us, when our facility was built ~6 years ago, they bought racks
and racks of servers from Dell: racks of 1U dual-CPU boxes, racks of
2U dual-CPU boxes, and racks of 4U quad-CPU boxes with varying amounts
of RAM and drive space.  That was the way to have resources available
to spin up a new box for someone in a timely manner rather than
waiting a few weeks for one to be delivered... but by year 2 it became
obvious that many of the boxes were idle or really underutilized: the
software didn't need that much power, but it was a Windows app that
wanted the box to itself.  So virtualization came into play.  Now,
instead of 10 racks of random servers of varying specs, we have 1 rack
with some really beefy servers and a couple of racks with individual
boxes for specific applications that were too resource intensive to
make sense to virtualize.

The advantage we have been able to realize is the ability to spin up a
2-processor, 1 GB VM for someone to start their project; if it gets
off the ground and they need more, we can bump it up to what they need
by editing a configuration rather than having to add memory to a box
or, worse yet, migrate it from one box to another.  This has resulted
in much better utilization of the hardware that we have.  Many servers
don't need lots of resources; they just need to have the resources
they do have all to themselves, or at least think that they do.  We
have many servers that only see a spike in workload periodically
during the week or month, when a researcher has collected the data
they need to work on.  The rest of the time they sit there waiting.

The other real advantage for us is the DR solution that virtualizing
gives us.  Previously, DR meant having a stack of machines at the DR
site waiting to be recovered to, and deciding which servers were
important enough to need a box with their name on it and which could
wait until we could call Dell and get a new one shipped out.  Now we
just have a smaller VMware cluster at the other end running some hot
DR servers, ready to bring up the replicated LUNs from our primary
site in case of a failure.  We can bring those VMs back up in a short
time instead of having to do bare-metal installs and then restore from
backups.  Going forward, we will be scaling so that we can share our
DR at the remote site with the group that is there as their primary,
and they will be able to use our cluster here as their DR, allowing us
to consolidate the number of hosts needed.  That is something that
just isn't possible with physical boxes.

I can't say that you don't take a monetary hit to buy the shared
storage, networking, and software required to support a virtual world,
but I personally see it as worth it.

On the other hand, there are systems that just make no sense to
virtualize.  We run an HPC cluster with hundreds of nodes.  There is
no reason to virtualize it: we are already trying to squeeze every bit
of performance out of the boxes, adding a hypervisor would defeat
that, and we don't need or want more than one OS install running on a
physical box.

So really look at your workload and your business case.
Virtualization isn't the be-all and end-all silver bullet that will
make everything in IT perfect.  It is just another tool in your
toolbox that can help in some cases; but, much as when you show up
with a hammer and find a screw, you might be better off going back to
get a screwdriver than trying to just pound it in.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] how to debug hardware lockups?

2008-11-18 Thread Rob Lines
On Tue, Nov 18, 2008 at 9:47 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
>> Did you leave memtest86+ running for 2 days? I thought 1 or 2 cycles
>> would be good enough?
>>
>> I'm hoping to pick-up the server in the next 2 hours then I can see
>> what happens when I run memtest86+ or other tests
>
> Yes, apparently RAM errors can be subtle and only appear when certain
> adjacent bit patterns are stored - or when the moon is in a certain phase or
> something.
>
> --
>  Les Mikesell
>   [EMAIL PROTECTED]

When we burn in machines to try to find errors, we go with the
day-or-two run as well.  One fun thing we found was that many times it
was temperature related: a machine would crash in the rack, but when
it was removed to a test bench it would not exhibit the issue.  This
was especially true when the machine under load had both the CPU and
the memory taxed, while during testing we could only really tax one or
the other with the existing tools.  So blocking a bit of the airflow
in the lab to heat up the case, or being able to test in the same data
center environment, helped a lot.

We have most errors show up either in the first 2 minutes of running a
memory test or one of the prime number calculations, or else it takes
a day or a few to show up.
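
A minimal sketch of such a combined burn-in pass, assuming the
`memtester` utility is available (packaged in third-party repos such
as EPEL, not installed by default); the size is a placeholder:

```shell
# Tax CPU and memory at the same time, since faults often appear only
# when the whole box is hot.  memtester is probed for rather than assumed.
burnin_pass() {
    mem_mb=${1:-8}                       # MB of RAM to pattern-test
    sha1sum /dev/zero >/dev/null 2>&1 &  # background CPU load
    cpu_pid=$!
    if command -v memtester >/dev/null 2>&1; then
        memtester "${mem_mb}M" 1         # one pass of memory test patterns
    else
        echo "memtester not installed; CPU-only pass"
        sleep 2                          # stand-in for the memory pass
    fi
    kill "$cpu_pid" 2>/dev/null
    wait "$cpu_pid" 2>/dev/null
}
burnin_pass 8
```

Looping this for a day or two while the box is racked, not on a bench,
matches the temperature observation above.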

Rob


[CentOS] Re: pam max locked memory issue after updating to 5.2 and rebooting

2008-08-08 Thread Rob Lines
It has been a few days, so I am sending this again in case someone has
seen this issue and has a suggestion of where to look, and why it
might not be taking these settings with 5.2 when it did with 5.1.

On Mon, Aug 4, 2008 at 2:00 PM, Rob Lines <[EMAIL PROTECTED]> wrote:
> We were previously running 5.1 x86_64 and recently updated to 5.2
> using yum.  Under 5.1 we were having problems when running jobs using
> torque and the solution had been to add the following items to the
> files noted
>
> "*  softmemlock unlimited" in /etc/security/limits.conf
> "sessionrequired pam_limits.so" in /etc/pam.d/{rsh,sshd}
>
> This changed the max locked memory setting in ulimit as follows:
>
> Before the change
> rsh nodeX ulimit -a
> still gives us
> max locked memory   (kbytes, -l) 32
>
> After the change
> rsh nodeX ulimit -a
> max locked memory   (kbytes, -l) 16505400
>
> The nodes have 16gb of memory.
>
> Now after the 5.2 update those files are all the same.  On most of
> the nodes we haven't rebooted yet due to long-running processes, but
> a few nodes have been restarted, and now that jobs are starting to
> be put on them we are back to a max locked memory of 32k rather than
> 16gb.
>
> The error we are receiving on those jobs is :
>
> libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
>This will severely limit memory registrations.
> libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
>This will severely limit memory registrations.
> Fatal error in MPI_Init:
> Other MPI error, error stack:
> MPIR_Init_thread(306)...: Initialization failed
> MPID_Init(113)..: channel initialization failed
> MPIDI_CH3_Init(167).:
> MPIDI_CH3I_RDMA_init(138)...:
> rdma_setup_startup_ring(333): cannot create cq
> Fatal error in MPI_Init:
> Other MPI error, error stack:
> MPIR_Init_thread(306)...: Initialization failed
> MPID_Init(113)..: channel initialization failed
> MPIDI_CH3_Init(167).:
> MPIDI_CH3I_RDMA_init(138)...:
> rdma_setup_startup_ring(333): cannot create cq
> rank 45 in job 1  nodeX_35175   caused collective abort of all ranks
>  exit status of rank 45: return code 1
> rank 44 in job 1  nodeX_35175   caused collective abort of all ranks
>  exit status of rank 44: return code 1
>
>
> The full output of :
>
> rsh nodeX ulimit -a
>
> connect to address x.x.x.x port 544: Connection refused
> Trying krb4 rsh...
> connect to address x.x.x.x port 544: Connection refused
> trying normal rsh (/usr/bin/rsh)
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 135168
> max locked memory   (kbytes, -l) 32
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 10240
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 135168
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>
>
> Any ideas, suggestions or items I could roll back would be
> appreciated.  I looked through the list of packages that were updated
> and the only one that I could see that was related was pam.  ssh and
> rsh were not updated.
>
> Thank you,
> Rob
>


[CentOS] pam max locked memory issue after updating to 5.2 and rebooting

2008-08-04 Thread Rob Lines
We were previously running 5.1 x86_64 and recently updated to 5.2
using yum.  Under 5.1 we were having problems when running jobs using
torque and the solution had been to add the following items to the
files noted

"*  softmemlock unlimited" in /etc/security/limits.conf
"sessionrequired pam_limits.so" in /etc/pam.d/{rsh,sshd}

This changed the max locked memory setting in ulimit as follows:

Before the change
rsh nodeX ulimit -a
still gives us
max locked memory   (kbytes, -l) 32

After the change
rsh nodeX ulimit -a
max locked memory   (kbytes, -l) 16505400

The nodes have 16gb of memory.
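
A quick sweep of that limit across nodes can be scripted (a sketch:
node names are placeholders, ssh is used in place of rsh, and note
that this non-interactive remote shell is exactly the path that has to
pass through pam_limits):

```shell
# Print each node's max-locked-memory limit, or 'unreachable' if the
# remote shell fails.  Node names below are placeholders.
check_memlock() {
    for node in "$@"; do
        limit=$(ssh -o BatchMode=yes -o ConnectTimeout=5 \
                    "$node" 'ulimit -l' 2>/dev/null) || limit=unreachable
        echo "$node: max locked memory = ${limit:-unreachable}"
    done
}
check_memlock node01 node02
```

A healthy node after the limits.conf change should report "unlimited"
(or the large kbyte value) rather than 32.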

Now after the 5.2 update those files are all the same.  On most of the
nodes we haven't rebooted yet due to long-running processes, but a few
nodes have been restarted, and now that jobs are starting to be put on
them we are back to a max locked memory of 32k rather than 16gb.

The error we are receiving on those jobs is :

libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
This will severely limit memory registrations.
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
This will severely limit memory registrations.
Fatal error in MPI_Init:
Other MPI error, error stack:
MPIR_Init_thread(306)...: Initialization failed
MPID_Init(113)..: channel initialization failed
MPIDI_CH3_Init(167).:
MPIDI_CH3I_RDMA_init(138)...:
rdma_setup_startup_ring(333): cannot create cq
Fatal error in MPI_Init:
Other MPI error, error stack:
MPIR_Init_thread(306)...: Initialization failed
MPID_Init(113)..: channel initialization failed
MPIDI_CH3_Init(167).:
MPIDI_CH3I_RDMA_init(138)...:
rdma_setup_startup_ring(333): cannot create cq
rank 45 in job 1  nodeX_35175   caused collective abort of all ranks
  exit status of rank 45: return code 1
rank 44 in job 1  nodeX_35175   caused collective abort of all ranks
  exit status of rank 44: return code 1


The full output of :

rsh nodeX ulimit -a

connect to address x.x.x.x port 544: Connection refused
Trying krb4 rsh...
connect to address x.x.x.x port 544: Connection refused
trying normal rsh (/usr/bin/rsh)
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 135168
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 135168
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


Any ideas, suggestions or items I could roll back would be
appreciated.  I looked through the list of packages that were updated
and the only one that I could see that was related was pam.  ssh and
rsh were not updated.

Thank you,
Rob


Re: [CentOS] Command line logging program suggestions?

2008-06-20 Thread Rob Lines
On Fri, Jun 20, 2008 at 9:18 AM, Walid <[EMAIL PROTECTED]> wrote:

> 
> Hi Rob,
>
> screen as already mentioned, also script, and if you are doing it to
> monitor other activities other than yours ttysnoop
> http://freshmeat.net/projects/ttysnoop/, for putty there is tabbed putty
> btw @ http://puttycm.free.fr/
>
> regards
>
> Walid
>

I will take a look at screen, though it will probably take some
getting used to.  The main goal was to help keep better track of
changes made while troubleshooting issues without having to write down
each command and change by hand.

Thanks for the suggestions.

Rob


[CentOS] Autofs and mount/umount entries in the nfs server logs

2008-06-20 Thread Rob Lines
I am using autofs on a number of machines to mount NFS shares.  All
the mount and unmount log entries on the server are currently saved to
/var/log/messages.  This is becoming a bit of a pain: as the number of
hosts increases, and various monitoring scripts check to be sure that
they have access to the NFS shares, it is filling up messages on the
NFS server.  I would like to continue to see these messages, but put
them elsewhere; looking through the documentation for syslog, I
couldn't find any way to separate just those messages out.  I assume
(hope) that someone else has run into this problem and found a way
around it.  My google-fu has failed me on this one, though I might
just not be looking for the right thing.
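
For reference, the stock sysklogd shipped with CentOS can only route
by facility and priority, which is why the syslog docs come up empty;
rsyslog (installable as a drop-in replacement) can filter on the
program name.  A sketch, assuming `rpc.mountd` is what emits the
mount/unmount lines (verify against your own /var/log/messages):

```
# /etc/rsyslog.conf fragment: divert rpc.mountd's messages to their own
# file, then discard them so they stay out of /var/log/messages.
:programname, isequal, "rpc.mountd"     /var/log/nfs-mounts.log
& ~
```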

Thanks,
Rob


[CentOS] Command line logging program suggestions?

2008-06-20 Thread Rob Lines
I am looking for an app that would run from the terminal and emulate a
bash shell (or pass everything through to the shell), allowing me to
set a log file and record all my input and the output to the screen
from the commands.  As an added bonus, if it would let me run it from
two or more terminals on the same machine and log all the input and
output to the same file, while still displaying it on the screen, that
would be great.  The goal is that when making changes or diagnosing a
problem it can sometimes become hard to tell what command came when,
especially with more than one terminal session open.  While using
putty with a really large buffer helps, it doesn't deal well with the
two-terminal issue or with disconnected sessions.

Anyone know of an app like this or any suggestions that could be added to my
bashrc to provide the functionality?
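
For what it's worth, util-linux's script(1), which ships with CentOS,
covers most of this; a sketch (interleaving from truly simultaneous
sessions can still be messy):

```shell
# -a appends, so several terminals can share one log file; -f flushes
# after each write so the shared file stays current; -c runs a single
# command (omit it to record a whole interactive session).
LOG=/tmp/session.log
script -a -f -c 'echo "demo command output"' "$LOG" \
    || echo "demo command output" >> "$LOG"  # fallback if script(1) is absent
grep -c 'demo command output' "$LOG"
```

Run `script -a -f "$LOG"` in each open terminal and the transcripts
all land in the one file.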

Thanks,
Rob


Re: [CentOS] connect to novell 6.5 server

2008-02-22 Thread Rob Lines
I had a thread on this a while back; here is the set of instructions
that I ended up using:

http://lists.centos.org/pipermail/centos/2007-January/074771.html

Here is my follow up to it with additions as we were not using IPX:

http://lists.centos.org/pipermail/centos/2007-April/078492.html

The same RPMs that I built for CentOS 4 worked under 5, and the
commands did also.


Rob

On Fri, Feb 22, 2008 at 2:06 PM, Hiep Nguyen <[EMAIL PROTECTED]> wrote:

> Hi all, we have a centos box that run web server, but we also have novell
> 6.5 server that i want to read/write files to.  is there anyway i can map
> novell's hard drive to centos box?
>
> thanks,
> t. hiep
>
>


Re: [CentOS] Cluster fun

2008-02-05 Thread Rob Lines
On Feb 4, 2008 10:38 PM, Scott Ehrlich <[EMAIL PROTECTED]> wrote:

> I've priced some 1 and 2U Dell servers.  Now, I'd like to perform a price
> comparison of COTS hardware for 1 and 2U servers.  What VAR companies do
> people
> recommend I check out for putting machines together?   I'm perfectly
> capable of
> installing and swapping hardware components when/where needed, so on-sight
> support is not needed.
>

I am not sure what type of cluster you are going to be doing, HA or
HPC, but I just wanted to make sure that if you are looking at HPC you
saw the Dell SC1435.  When we were pricing 1U cluster nodes recently,
they gave the best bang for the buck.  We were looking for raw CPU
power and lots of RAM, and were not worried about server-grade
features like RAID, redundant power supplies, or remote administration
like iLO or DRAC.  When we compared prices with the other major
companies, as well as a number of Linux cluster providers and a few
whitebox builders, they came in cheaper.  In addition, the
next-business-day replacement parts are pretty nice.

So far we have been happy with them.

Rob


Re: [CentOS] Large RAID volume issues

2008-02-04 Thread Rob Lines
On Feb 4, 2008 4:49 PM, Ross S. W. Walker <[EMAIL PROTECTED]> wrote:

> To move an external array to a new server is as easy as plugging
> it in and importing the volume group (vgimport).
>
> Typically I name my OS volume groups "CentOS" and give
> semi-descriptive names to my external array volume groups, such
> as "Exch-SQL" or "VM_Guests".
>
> You could also have a hot server activate the volume group via
> heartbeat if the first server goes down if your storage
> allows multiple initiators to attach to it.
>
> > We are still checking with the vendor for a solution to move
> > back to the 512 sectors rather than the 2k ones. Hopefully
> > they come up with something.
>
> I wish you luck here, but in my experience once an array is
> created with a set sector size or chunk size, changing these
> usually involves re-creating the array.
>
> LVM might be able to handle the sector size though, no need to
> create any partition on the disk, but future migration
> compatibility could be questionable.
>
> To create a VG out of it:
>
> pvcreate /dev/sdb
>
> then,
>
> vgcreate "VG_Name" /dev/sdb
>
> then,
>
> lvcreate -L 4T -n "LV_Name" "VG_Name"
>
> If you get a new external array say it's /dev/sdc and want to
> move all data from the old one to the new one online and then
> remove the old one.
>
> pvcreate /dev/sdc
>
> vgextend "VG_Name" /dev/sdc
>
> pvmove /dev/sdb /dev/sdc
>
> vgreduce "VG_Name" /dev/sdb
>
> pvremove /dev/sdb
>
> Then take /dev/sdb offline.
>
> -Ross
>
> PS You might want to remove any existing MBR/GPT stuff off of
> /dev/sdb before you pvcreate it, with:
>
> dd if=/dev/zero of=/dev/sdb bs=512 count=63
>
> That will wipe the first track which should do it.
>
>
Luckily the array is empty at the moment, as we are only at the
building/mounting phase.  As it turns out, there was a typo in the
documentation: it is supposed to be created using the 16-byte CDB
option, which will allow it to use 512-byte sectors.  (Apparently a
rogue "not" in the docs changed "If you have an LSI card you should
use CDB" to "If you have an LSI card you should not use CDB.")  So
this will be our next attempt, and we'll go from there.

Thank you very much for the help and the quick lesson on LVM.  Neat stuff we
will have to look at.

Rob


Re: [CentOS] Large RAID volume issues

2008-02-04 Thread Rob Lines
On Feb 4, 2008 3:34 PM, Ross S. W. Walker <[EMAIL PROTECTED]> wrote:

> Rob Lines wrote:
> > On Feb 4, 2008 3:16 PM, John R Pierce <[EMAIL PROTECTED]> wrote:
> >
> >   with LVM, you could join several smaller logical
> > drives, maybe 1TB each,
> >   into a single volume set, which could then contain
> > various file systems.
> >
> >
> > That looks like it may be the result.  The main reason was to
> > keep the amount of overhead and 'stuff' required to revive it
> > in the event of a server issue to a minimum.  That was one of
> > the reasons for going with an enclosure that handles all the
> > RAID internally and just presents to the server as a single
> > drive.  We had been trying to avoid LVM as we had run into
> > problems using knoppix recovering it in the past.
> >
> > It looks like we will probably just end up breaking it up
> > into smaller chunks unless I can find a way for the enclosure
> > to use 512 sectors and still have greater than 2 tb volumes.
>
> LVM is very well supported these days.
>
> In fact I default on LVM for all my OS and external storage
> configurations here as it provides for greater flexibility and
> manageability then raw disks/partitions.
>


How easy is it to migrate to a new OS install?  Given the situation as
I described, with a single 6tb 'drive' using LVM, if the server goes
down and we have to rebuild it from scratch or move the storage to
another machine (all using CentOS 5), how easy is that?

We are still checking with the vendor for a solution to move back to
512-byte sectors rather than the 2k ones.  Hopefully they come up with
something.

Thanks,
Rob


Re: [CentOS] Large RAID volume issues

2008-02-04 Thread Rob Lines
On Feb 4, 2008 3:16 PM, John R Pierce <[EMAIL PROTECTED]> wrote:

>
>
>
> with LVM, you could join several smaller logical drives, maybe 1TB each,
> into a single volume set, which could then contain various file systems.
>

That looks like it may be the result.  The main reason was to keep the
amount of overhead and 'stuff' required to revive it in the event of a
server issue to a minimum.  That was one of the reasons for going with
an enclosure that handles all the RAID internally and just presents to
the server as a single drive.  We had been trying to avoid LVM, as we
had run into problems recovering it with Knoppix in the past.

It looks like we will probably just end up breaking it up into smaller
chunks, unless I can find a way for the enclosure to use 512-byte
sectors and still have volumes greater than 2 TB.


Re: [CentOS] Large RAID volume issues

2008-02-04 Thread Rob Lines
>
>
> This would appear to be your problem.  Unless you have strong reasons to
> use 2K sectors, I'd change them to the much more standard 512.
>
> After that, parted should have no issues whatsoever.
>

Looking back through the configuration, the 2kb sectors were set in
the array's Variable Sector Size option, as that was what it suggested
for the larger-than-2tb slices.  The other option on the array was
using 16-byte CDB (Command Descriptor Block), but that option is not
compatible with the LSI cards according to the manufacturer.

I would guess that scaling it back to 1kb sectors would not really
help either.

As an aside, for the LVM option, is there anything to be aware of when
using it on such a large drive?

Thank you everyone for the information and help.

Rob


[CentOS] Large RAID volume issues

2008-02-04 Thread Rob Lines
I have just finished creating an array on our new enclosure, and our
CentOS 5 server has recognized it.  It shows as the full 6tb in the
LSI configuration utility as well as when I ran fdisk:

[EMAIL PROTECTED] sbin]# fdisk /dev/sdb
Note: sector size is 2048 (not 512)

The number of cylinders for this disk is set to 182292.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 5997.6 GB, 5997628227584 bytes
255 heads, 63 sectors/track, 182292 cylinders
Units = cylinders of 16065 * 2048 = 32901120 bytes

   Device Boot  Start End  Blocks   Id  System

I then created a partition in fdisk, and it appeared to work until I
formatted it (here is the output of the formatting):

[EMAIL PROTECTED] ~]# mkfs -t ext2 -j /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
195264512 inodes, 390518634 blocks
19525931 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11918 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

When I did a df I got the following (only the array entry is included):

[EMAIL PROTECTED] etc]# df -h
FilesystemSize  Used Avail Use% Mounted on
...
/dev/sdb1 1.5T  198M  1.5T   1% /home1

I then tried removing it and working with parted.


[EMAIL PROTECTED] etc]# parted /dev/sdb
Warning: Device /dev/sdb has a logical sector size of 2048.  Not all parts
of GNU Parted support this at
the moment, and the working code is HIGHLY EXPERIMENTAL.

GNU Parted 1.8.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Error: Unable to open /dev/sdb - unrecognised disk label.
(parted) mklabel gpt
*** glibc detected *** : double free or corruption (!prev):
0x16760800 ***
=== Backtrace: =
/lib64/libc.so.6[0x3435c6f4f4]
/lib64/libc.so.6(cfree+0x8c)[0x3435c72b1c]
/usr/lib64/libparted-1.8.so.0[0x3436c1a5c5]
/usr/lib64/libparted-1.8.so.0[0x3436c48a54]
=== Memory map: 
0040-0041 r-xp  08:05 130761
/sbin/parted
0061-00611000 rw-p 0001 08:05 130761
/sbin/parted
00611000-00612000 rw-p 00611000 00:00 0
0081-00812000 rw-p 0001 08:05 130761
/sbin/parted
1673d000-1677f000 rw-p 1673d000 00:00 0
343580-343581a000 r-xp  08:05 4765445
/lib64/ld-2.5.so
3435a19000-3435a1a000 r--p 00019000 08:05 4765445
/lib64/ld-2.5.so
3435a1a000-3435a1b000 rw-p 0001a000 08:05 4765445
/lib64/ld-2.5.so
3435c0-3435d46000 r-xp  08:05 4765452
/lib64/libc-2.5.so
3435d46000-3435f46000 ---p 00146000 08:05 4765452
/lib64/libc-2.5.so
3435f46000-3435f4a000 r--p 00146000 08:05 4765452
/lib64/libc-2.5.so
3435f4a000-3435f4b000 rw-p 0014a000 08:05 4765452
/lib64/libc-2.5.so
3435f4b000-3435f5 rw-p 3435f4b000 00:00 0
343600-3436013000 r-xp  08:05 4765602
/lib64/libdevmapper.so.1.02
3436013000-3436213000 ---p 00013000 08:05 4765602
/lib64/libdevmapper.so.1.02
3436213000-3436215000 rw-p 00013000 08:05 4765602
/lib64/libdevmapper.so.1.02
343640-3436402000 r-xp  08:05 4765458
/lib64/libdl-2.5.so
3436402000-3436602000 ---p 2000 08:05 4765458
/lib64/libdl-2.5.so
3436602000-3436603000 r--p 2000 08:05 4765458
/lib64/libdl-2.5.so
3436603000-3436604000 rw-p 3000 08:05 4765458
/lib64/libdl-2.5.so
343680-3436802000 r-xp  08:05 4765478
/lib64/libuuid.so.1.2
3436802000-3436a02000 ---p 2000 08:05 4765478
/lib64/libuuid.so.1.2
3436a02000-3436a03000 rw-p 2000 08:05 4765478
/lib64/libuuid.so.1.2
3436c0-3436c5c000 r-xp  08:05 464724
/usr/lib64/libparted-1.8.so.0.0.
1
3436c5c000-3436e5b000 ---p 0005c000 08:05 464724
/usr/lib64/libparted-1.8.so.0.0.
1
3436e5b000-3436e5f000 rw-p 0005b000 08:05 464724
/usr/lib64/libparted-1.8.so.0.0.
1
3436e5f000-3436e6 rw-p 3436e5f000 00:00 0
343700-3437035000 r-xp  08:05 463236
/usr/lib64/libreadline.so.5.1
3437035000-3437234000 ---p 00035000 08:05 463236
/usr/lib64/libreadline.so.5.1
3437234000-343723c000 rw-p 00034000 08:05 463236
/usr/lib64/libreadline.so.5.1
343723c000-343723d000 rw-p 343723c000 00:00 0
343b00-343b00d000 r-xp  08:05 4765466
/lib64/libgcc_s-4.1.2-20070626.s
o.1
343b00d000-343b20d000 ---p d000 08:05 4765466
/lib64/libgcc_s-4.1.2-20070626.s
o.1
343b20d000-343b20e000 rw-p d000 08:05 4765466
/lib64/libgcc_s-4.1.2-20070626.s
o.1
343c

[CentOS] Interesting PXE server setup question

2008-01-24 Thread Rob Lines
While this is not a problem with CentOS, I am hoping to solve it using
a CentOS machine.  For anyone not interested, I am sorry to clutter
your mailbox.  For everyone else, any ideas or suggestions are welcome.

A bit of background:

We have an application that runs only in DOS 6.22 at the moment that we
would like to run on all of our desktop computers each time they boot up.
Our workstations are mostly Windows XP with some Linux.

Our goals:

We would like to have the machines boot into DOS, run the application,
and then reboot to the normal hard drive.  We would like it to require
no user intervention, or as little as possible.  We would also like it
to run the app only during the first boot-up of the day.

Thoughts at the moment:

One idea we have at the moment is to create a PXE server with the DOS
boot image on it.  (I have done that before using Windows RIS, but we
are trying to avoid a Windows server, as RIS is a bit of a pain and it
prefers user interaction.  It also would not fit well with having the
app run only once a day.)  Inside the DOS image we could run the
application and then have it reboot the machine, with the client
machines set to boot PXE as their first boot option.  The next thought
was to somehow watch the connections to the tftp server where the boot
image is kept, watch for the client IP, and have the PXE server create
a new firewall rule blocking that client's access to tftp.  The idea
is that once a client has downloaded the boot image, it will run it;
on reboot it will not be able to find the boot image and, I think,
will fail the PXE boot and move on to the next item in the boot list.
Then every midnight the list of blocked IPs would be cleared and the
process starts over.

So, any suggestions on the best way to take a bootable DOS disk and
turn it into an image that a Linux-based PXE server can serve, ways to
monitor the tftp connections and add clients to the firewall after
they have finished downloading the boot image, or any better ideas,
would be appreciated.
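
The log-watching half of that plan could start from a parser like this
(a sketch: the in.tftpd "RRQ from" log format is an assumption to
check against your own /var/log/messages, and the firewall step is
left in a comment since it needs root):

```shell
# Pull the client IP out of a tftpd request line.  Real use would feed
# 'tail -F /var/log/messages' through this, run something like
#   iptables -A INPUT -s "$ip" -p udp --dport 69 -j DROP
# for each new IP, and flush the rules from cron at midnight.
extract_tftp_client() {
    line=$1
    case "$line" in
        *"RRQ from "*) ip=${line#*"RRQ from "}; echo "${ip%% *}" ;;
        *) return 1 ;;
    esac
}
extract_tftp_client \
    "Jan 24 09:00:01 pxe in.tftpd[123]: RRQ from 10.0.0.5 filename dosboot.img"
```

which prints the client address (10.0.0.5 here).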

Thanks for taking the time to read this.

Rob


Re: [CentOS] SCSI controller suggestions for a 5.x server?

2007-12-13 Thread Rob Lines
Sorry for the delay.  I have been out of the office for a few days.

On Dec 5, 2007 10:11 PM, John R Pierce <[EMAIL PROTECTED]> wrote:

> Rob Lines wrote:
> > We are preparing for a new file server (Dell 2970) with an external
> > disk array with integrated RAID (raidking.com <http://raidking.com>).
> > The array presents via Ultra 320 SCSI.  We are looking for anyone that
> > has had experience or opinions on SCSI cards without RAID that have a
> > good performance working under CentOS 5.x.  Any suggestions/comments
> > on the controllers with large storage especially at or above 2tb would
> > be appreciated.
>
> the dell 2850's I've been bringing up have a dual channel LSI Logic
> ultra320 scsi controller built in, these work quite well. I don't
> know offhand if they have an external scsi connector, however.   I'd
> suggest the equivalent LSI Logic SCSI card, I believe its the 53C10xx
> series, in PCI-X or PCI Express X4, whichever your server has...
>
> someone else said PERC, those are Dell's raid controllers, obviously not
> what you want with an external SCSI RAID
>
> I'm curious why you chose SCSI instead of SAS at this point in time?
> External SCSI cabling is expensive and trouble-prone.   _ALWAYS_ make
> sure _EVERYTHING_ is shut off completely before touching those external
> cables. Don't cheap out on the cables, either.
>
>
In our case it is because the enclosure we are using has all the RAID
support built in, so it presents to the computer as Ultra 320 SCSI.
The enclosure itself is all SATA drives.  We needed a large volume of
drive space with as simple a disaster-recovery setup as possible, while
minimizing reliance on support for the RAID card.  We currently have an
Adaptec card with two external drive enclosures that gives us a hard
time periodically; that is most likely related to the age of the
installed OS, but we didn't want to deal with the same thing again in a
couple of years.

We did have the option of having it present over Fibre Channel, but as
only a single machine will be connected to it, that seemed like a more
complicated and expensive solution for little to no advantage.

Thank you for the suggestions on the cards.  They were the same family
that the company supplying the enclosure suggested.  I was really
concerned about having support for the card without a lot of extra work
and worry every time a new kernel was released.

Thanks for all the help,
Rob


[CentOS] SCSI controller suggestions for a 5.x server?

2007-12-05 Thread Rob Lines
We are preparing for a new file server (Dell 2970) with an external
disk array with integrated RAID (raidking.com).  The array presents via
Ultra 320 SCSI.  We are looking for anyone who has experience with, or
opinions on, non-RAID SCSI cards that perform well under CentOS 5.x.
Any suggestions/comments on controllers for large storage, especially
at or above 2 TB, would be appreciated.

Any help would be appreciated; if any additional information is needed,
please ask.

Thank you,
Rob Lines


Re: [CentOS] monitoring rsync backups on centos

2007-08-30 Thread Rob Lines
We went really simple, with all the rsync commands being run from the
same machine.  We point all the output from rsync to a log file in
/var/log, and I usually just tail the file on a daily basis.  Also, if
rsync runs into actual errors, cron ends up mailing them to me
directly.

We use this on our clusters to good effect.  We use batched ssh calls
to tell each cluster node to run its backup, but it is all called from
the same script on our head node so that only one backup operation runs
at a time.  That keeps the logs cleaner, and we don't run into a
problem with dozens of boxes all trying to sync data back to the head
node at the same time and slowing things down.
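A minimal sketch of that kind of head-node wrapper, with hypothetical
node names, paths, and the "head" hostname made up for the example (our
actual script is more involved):

```shell
#!/bin/sh
# Hypothetical sketch of a head-node backup driver: back up each node
# in turn so only one rsync runs at a time, appending everything to a
# single log.  Node names, paths, and hostnames are illustrative only.

backup_cmd() {
    # The rsync each node runs to push its data back to the head node:
    # -a preserves permissions/times, --delete mirrors removals.
    echo "rsync -a --delete /data/ head:/backups/$1/"
}

run_backups() {
    log=/var/log/cluster-backup.log
    for node in node01 node02 node03; do
        echo "=== $(date) backing up $node ===" >> "$log"
        # stdout goes to the log; stderr is left alone so that when
        # this runs from cron, any rsync errors get mailed out.
        ssh "$node" "$(backup_cmd "$node")" >> "$log"
    done
}
```

Run from cron on the head node; tailing the one log then shows the
whole cluster's last run at a glance.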

Rob

On 8/28/07, Les Mikesell <[EMAIL PROTECTED]> wrote:
>
> dnk wrote:
> > Hi there, I was wondering what people were using to monitor rsync
> > backups on centos?
> >
> > I have been looking around sourceforge for various programs (IE Backup
> > Monitor), but am hesitant to trust any of them without hearing about
> > some user experiences.
> >
> > We currently just use rsync for backups (like a slightly more
> > "Vanilla" setup), but just want to be sure everything is going as it
> > should each day
>
> I like backuppc (http://backuppc.sourceforge.net/) which is a little
> more than a monitor, but it will email you if things go wrong and has a
> nice web page to check.
>
> --
>Les Mikesell
> [EMAIL PROTECTED]


Re: [CentOS] CentOS based router dropping connections

2007-07-26 Thread Rob Lines

On 7/24/07, John R Pierce <[EMAIL PROTECTED]> wrote:


Jesse Cantara wrote:
> So basically, what I can figure from all of the evidence at this point
> is the problem is either:
> default configuration of the network in CentOS isn't proper for what
> I'm doing (can't handle the traffic or number of connections). I get a
> decent amount of traffic, maxing out a 10 Mbit connection all day
> long. I don't know exactly where to check to diagnose if this is the
> case though. Can anybody point me where to find things like the system
> usage of the network (memory, any buffers, # of connections, etc)? the
> things I know to check look normal, but that's basically just
> ifconfig, and your standard /var/log/message and dmesg log files.
> or:
> the network drop from the hosting facility is "bad" somehow, either
> the cable physically, or the way in which they are limiting me to 10
> Mbit.


check with the facility to see if that drop is 10Mbit HALF duplex, and
if so, make sure your server's NIC is configured as such.

I had a problem like this in a  coloc many years ago, with a much older
linux version.




While not the exact same issue, I had a similar problem between two
switches, a Cisco 4006 and a 3Com 3300.  They were connected through a
media converter running 10 Mbit over fiber, and for some reason the
3Com would not negotiate properly with the media converter it was
plugged into.  It kept jumping between full and half duplex, and
sometimes it would try to go to 100 Mbit.  As soon as I turned off
auto-negotiation and set the port to 10 Mbit full duplex, all the
dropped packets disappeared.  The conditions were similar: everything
was fine under low load, but as soon as the link ran close to its
maximum it would drop packets repeatedly and seem to fail until the
load dropped off (because people thought it was down), then it would
become stable again until the traffic went back up.
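On a Linux box the equivalent fix is pinning the NIC with ethtool; the
interface name and settings below are just for illustration, so check
what the facility's drop actually is first:

```shell
# Show current speed/duplex and whether auto-negotiation is on:
ethtool eth0

# Pin the port to match the far end (here 10 Mbit, full duplex):
ethtool -s eth0 autoneg off speed 10 duplex full

# To make it persistent on CentOS, the interface config file supports:
#   ETHTOOL_OPTS="autoneg off speed 10 duplex full"
# in /etc/sysconfig/network-scripts/ifcfg-eth0
```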

John's suggestion above looks like a solid one.  If the 'problem'
server is behaving fine in your office, I would really look at this as
a probable solution.

Rob

(P.S. Hopefully this clears it up.  In our case the problem had been
happening for over a year, and the connection fed an elementary school.
I found out about the problem about a month into working at the place
and had it fixed within a day or two.  The previous outsourced IT
department could never track it down because they were never there when
it happened.  They would come in after school was out, everything would
work fine for them without the high packet load, and they would just
claim it was user error.)


Re: [CentOS] Recommendation/pointers please - Need to brush up on CentOS/Linux command line tools

2007-06-12 Thread Rob Lines

I can recommend the book A Practical Guide to Linux Commands, Editors and
Shell Programming.

http://www.amazon.com/Practical-Guide-Commands-Editors-Programming/dp/0131478230/ref=pd_bbs_sr_1/104-4412880-2983136?ie=UTF8&s=books&qid=1181662084&sr=8-1

It can be had for $30, and it is a big book with lots of demos and
examples.  While it will not necessarily tell you which tool is best
for a particular job, it will tell you how to use each tool well.

The first chapter is a bit basic, and they go into a lot of (maybe too
much) detail on the different editors, but it has good information.
They also have a quick list of the main command-line utilities with a
single line on what each one does.

It is like having simplified man pages with some extra examples, in
book form.

Rob

On 6/12/07, Daniel de Kok <[EMAIL PROTECTED]> wrote:


On Mon, 2007-06-11 at 16:51 -0500, Dale wrote:
> I would very much appreciate any suggestions on any online resources, or
> even a decent book to purchase with the focus of brushing up on Linux
> command line tools. The focus is on troubleshooting type commands,
> adding users from command line
>  and so forth.


Still very much work in progress:
http://www.taickim.com/books/unixsystems/html/


-- Daniel
