Mag Gam

2013-07-20 Thread Mag Gam


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/capg7zshzmcc6kjr+yaqmzmvx5qzwm+bqyn+orcbgzrx0xnc...@mail.gmail.com



nfs proxy

2011-04-03 Thread Mag Gam
Here is my situation: I have 3TB of data on an NFS server which has 2
NICs (bonded). I have 50 clients which access this data, mainly for
reading. I also have spare servers and I would like to use these
servers to cache the NFS traffic (if possible). Are there any
programs/techniques to 'cache' NFS read traffic? I am thinking of
setting up a DNS round-robin scheme to cache the NFS data on my spare
servers.

Any thoughts?
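One possibility worth looking at (a sketch only, not something tested here) is the kernel's FS-Cache facility with cachefilesd on each client, so repeated NFS reads are served from the client's local disk. Package name, paths, and the export below are assumptions, and the kernel must have FS-Cache/CacheFiles support built in:

```
# /etc/cachefilesd.conf -- where the local read cache lives (placeholder path)
dir /var/cache/fscache
tag mycache

# /etc/fstab -- the 'fsc' mount option enables FS-Cache for this NFS mount
# (server name and export are placeholders)
nfsserver:/export  /mnt/data  nfs  ro,fsc  0  0
```

This only caches on the clients themselves; turning spare servers into a shared intermediate cache would need something else (e.g. re-exporting, which NFS historically handles poorly), so the client-side approach is the more conservative one.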





Re: unloading unnecessary modules

2010-11-30 Thread Mag Gam
Stan,

sorry for the late response.

lspci gives me this about my Ethernet adapter.

04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit
Ethernet Controller (rev 06)
        Subsystem: Hewlett-Packard Company NC360T PCI Express Dual
Port Gigabit Server Adapter
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr+ Stepping- SERR- FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
SERR-

Also, I found http://fiz.stanford.edu:8081/display/ramcloud/Low+latency+RPCs and it
seems very promising.





On Sun, Nov 28, 2010 at 7:37 PM, Stan Hoeppner  wrote:
> Mag Gam put forth on 11/28/2010 7:31 AM:
>> Erp, pressed 'send' too quickly.
>>
>>
>> TCP/UDP offloading, to my understanding, has to be supported by the
>> hardware, and according to our engineering team my Intel e1000 doesn't.
>> I know we can offload the IP checksum to the NIC, but it would be great
>> to bypass the kernel in general.
>
> Just to confirm they are correct, can we get lspci -vv output for your
> e1000 please?
>
>> As another replier stated, RT is a good option, but I am really not sure how
>> it will affect our latency.
>
> Maybe if you told us more about the target application we could give you
> better advice.  Is it primarily network one way or RTT bound?  Does it
> make extensive use of DNS lookups and is latency bound there?  Compute
> latency bound?  Disk latency bound?
>
> There are all manner of latencies in a system.  Knowing which one(s) are
> critical to your application would allow us to better help you.  A real
> time kernel may or may not be what you need.
>
> I would venture to guess that an "e-commerce" system would not be
> network latency bound but database access latency bound while processing
> orders.  Are you building the database system or the web front end
> application?  If you're building a web front end app, optimizing the
> system at the kernel level is pointless, as php, perl, python, etc have
> tremendously higher execution latencies than the kernel.  We're talking
> 1000 fold difference here.  If this is the case, you should be focusing
> all of your efforts on optimizing the performance of your interpreter.
>
> --
> Stan
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
> Archive: http://lists.debian.org/4cf2f5c3.5070...@hardwarefreak.com
>
>





Re: unloading unnecessary modules

2010-11-28 Thread Mag Gam
Erp, pressed 'send' too quickly.


TCP/UDP offloading, to my understanding, has to be supported by the
hardware, and according to our engineering team my Intel e1000 doesn't.
I know we can offload the IP checksum to the NIC, but it would be great
to bypass the kernel in general.

As another replier stated, RT is a good option, but I am really not sure how
it will affect our latency.



On Sun, Nov 28, 2010 at 8:10 AM, Mag Gam  wrote:
> Stan,
>
> thanks for the response.
>
> To my understanding, CONFIG_HZ is a kernel time option. Has that
> changed? I can certainly rebuild the kernel. How can I check via /proc
> what my HZ is currently set at? Is there a tool to determine this for
> me?
>
>
>
> Removing tasks from cron has helped! We had some weird random tasks
> starting up during production hours, which caused interrupts. This is a
> notoriously underestimated tip.
>
>
>
>
>
> On Sat, Nov 27, 2010 at 11:49 PM, Stan Hoeppner  
> wrote:
>> Mag Gam put forth on 11/27/2010 11:06 AM:
>>> Stan,
>>>
>>> Correct. On my servers I too have sound cards and USB. I don't really
>>> need them so I would rather unload them. I suppose I can do a macro
>>> benchmark and state if it helped or not but I would like to know on a
>>> micro level to see if it helped. I think one possibility is to do
>>> "lat_pipe" from lmbench to measure transaction latency of a UNIX pipe.
>>>
>>> My goal is to have the most optimal kernel/tuning since our
>>> application is very latency sensitive.
>>
>> In that case modules aren't your worry--the kernel interrupt timer is,
>> along with scheduled tasks.  For latency sensitive apps, you need a
>> kernel with something like CONFIG_HZ=1000 or greater, which IIRC used to
>> be the "workstation" default.  The "server" default is 250Hz.  Also,
>> IIRC, the current Debian kernels implement tickless timers to allow
>> better integration as virtual machine guests.  For latency sensitive
>> apps, you don't want a tickless timer.
>>
>> If you have a latency sensitive app:
>>
>> 1.  Use a kernel with CONFIG_HZ=1000 (or greater)
>>
>> 2.  Eliminate all possible cron jobs or schedule them to run at times
>> when it won't impact the latency of your application.  I.e. make sure no
>> processes fire unexpectedly which may impact the latency of your
>> application.  On today's multicore systems this is less of a problem
>> than it once was as your application can continue to run on the core
>> which is executing it while a new process will be fired up on another
>> core.  When/if in doubt, minimize unexpected process execution.
>>
>> 3.  If the application is disk I/O bound, make sure you have plenty of
>> write cache, and a stripe (RAID6/10) of a sufficient number of spindles.
>>  RAID10 is optimal for low latency.  RAID6 can suffer read-modify-write
>> cycles which will increase latency.  If you have large write cache, and
>> your app never bursts an amount of data larger than the cache, then
>> RAID6 may be fine.  Testing is your friend here.  Also note that the XFS
>> filesystem offers the "realtime" sub volume which can decrease latency.
>>  It was originally developed for streaming media applications such as
>> digital broadcast systems that replaced the traditional tape systems:
>> http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/ch04s09.html
>>
>> 4.  If the application is sensitive to network latency, use a NIC and
>> driver that supports TCP offload processing.  If the application needs
>> DNS name resolution of remote systems, consider installing a local
>> caching resolver such as pdns-recursor, which can reduce lookup latency
>> considerably for cached results.
>>
>> --
>> Stan
>>
>>
>> --
>> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
>> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>> Archive: http://lists.debian.org/4cf1df5b.6030...@hardwarefreak.com
>>
>>
>





Re: unloading unnecessary modules

2010-11-28 Thread Mag Gam
Stan,

thanks for the response.

To my understanding, CONFIG_HZ is a kernel time option. Has that
changed? I can certainly rebuild the kernel. How can I check via /proc
what my HZ is currently set at? Is there a tool to determine this for
me?



Removing tasks from cron has helped! We had some weird random tasks
starting up during production hours, which caused interrupts. This is a
notoriously underestimated tip.
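To the "how can I check what my HZ is set at" question above: the kernel doesn't export CONFIG_HZ directly under /proc, but a couple of hedged ways to look (paths vary by distribution; /proc/config.gz exists only if the kernel was built with CONFIG_IKCONFIG_PROC):

```shell
# Kernel tick rate: read the build config if it is available.
grep 'CONFIG_HZ=' "/boot/config-$(uname -r)" 2>/dev/null || true
zgrep 'CONFIG_HZ'  /proc/config.gz          2>/dev/null || true

# USER_HZ, the tick unit the kernel reports to userspace via /proc
# (commonly 100; note this is NOT the same thing as CONFIG_HZ):
getconf CLK_TCK
```

Note the distinction: `getconf CLK_TCK` reports USER_HZ, the unit /proc statistics are expressed in, whereas CONFIG_HZ is the actual timer interrupt frequency chosen at kernel build time.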





On Sat, Nov 27, 2010 at 11:49 PM, Stan Hoeppner  wrote:
> Mag Gam put forth on 11/27/2010 11:06 AM:
>> Stan,
>>
>> Correct. On my servers I too have sound cards and USB. I don't really
>> need them so I would rather unload them. I suppose I can do a macro
>> benchmark and state if it helped or not but I would like to know on a
>> micro level to see if it helped. I think one possibility is to do
>> "lat_pipe" from lmbench to measure transaction latency of a UNIX pipe.
>>
>> My goal is to have the most optimal kernel/tuning since our
>> application is very latency sensitive.
>
> In that case modules aren't your worry--the kernel interrupt timer is,
> along with scheduled tasks.  For latency sensitive apps, you need a
> kernel with something like CONFIG_HZ=1000 or greater, which IIRC used to
> be the "workstation" default.  The "server" default is 250Hz.  Also,
> IIRC, the current Debian kernels implement tickless timers to allow
> better integration as virtual machine guests.  For latency sensitive
> apps, you don't want a tickless timer.
>
> If you have a latency sensitive app:
>
> 1.  Use a kernel with CONFIG_HZ=1000 (or greater)
>
> 2.  Eliminate all possible cron jobs or schedule them to run at times
> when it won't impact the latency of your application.  I.e. make sure no
> processes fire unexpectedly which may impact the latency of your
> application.  On today's multicore systems this is less of a problem
> than it once was as your application can continue to run on the core
> which is executing it while a new process will be fired up on another
> core.  When/if in doubt, minimize unexpected process execution.
>
> 3.  If the application is disk I/O bound, make sure you have plenty of
> write cache, and a stripe (RAID6/10) of a sufficient number of spindles.
>  RAID10 is optimal for low latency.  RAID6 can suffer read-modify-write
> cycles which will increase latency.  If you have large write cache, and
> your app never bursts an amount of data larger than the cache, then
> RAID6 may be fine.  Testing is your friend here.  Also note that the XFS
> filesystem offers the "realtime" sub volume which can decrease latency.
>  It was originally developed for streaming media applications such as
> digital broadcast systems that replaced the traditional tape systems:
> http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/ch04s09.html
>
> 4.  If the application is sensitive to network latency, use a NIC and
> driver that supports TCP offload processing.  If the application needs
> DNS name resolution of remote systems, consider installing a local
> caching resolver such as pdns-recursor, which can reduce lookup latency
> considerably for cached results.
>
> --
> Stan
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
> Archive: http://lists.debian.org/4cf1df5b.6030...@hardwarefreak.com
>
>





Re: unloading unnecessary modules

2010-11-27 Thread Mag Gam
Stan,

Correct. On my servers I too have sound cards and USB. I don't really
need them so I would rather unload them. I suppose I can do a macro
benchmark and state if it helped or not but I would like to know on a
micro level to see if it helped. I think one possibility is to do
"lat_pipe" from lmbench to measure transaction latency of a UNIX pipe.

My goal is to have the most optimal kernel/tuning since our
application is very latency sensitive.


On Sat, Nov 27, 2010 at 11:01 AM, Leandro Minatel
 wrote:
>
> On Sat, Nov 27, 2010 at 2:43 AM, Stan Hoeppner 
> wrote:
>>
>> Mag Gam put forth on 11/26/2010 11:14 PM:
>>
>> > unloading unnecessary modules
>>
>> If they are unnecessary modules, the kernel won't load them in the first
>> place, as the hardware they interface with doesn't exist.  If they're not
>> loaded, how can you unload them?
>>
>> I think you need to provide us with _your_ definition of "unnecessary".
>
> I suppose he's talking about modules loaded for hardware that is
> present but unused, for example the sound card. For my servers I generally
> buy "clones" and they have an embedded sound card. So, we don't need the
> sound modules loaded at startup.
> Another example (maybe) is USB, mouse, or SATA/PATA when we have a SCSI
> controller, etc.
>
>
>>
>> If you're really that concerned about kernel footprint and performance,
>> you can always roll your own kernel, as I do, building in the drivers
>> you know you need, none that you don't, and disable loadable module
>> support.  However, this can get tricky if you don't know precisely what
>> you're doing.
>>
>
> I don't know. I decided long ago not to compile the kernel anymore. I do
> prefer blacklisting modules instead. But, it's only my opinion.
>
>>
>> --
>> Stan
>>
>
> LMM
>





unloading unnecessary modules

2010-11-26 Thread Mag Gam
Hello,

I am currently working on an e-commerce system for a client. We are
using Debian 5. I was told by an engineer that unloading unnecessary
modules will improve the system's performance. My questions are: is
this true? Also, how do I measure the kernel (or base OS)
before and after I unload the modules? I was told 'lmbench' can
measure kernel latency, but I can't think of any test which would be
valid.





restricting number of user logins

2010-10-22 Thread Mag Gam
Currently we do a lot of `rsync -e ssh` to a host.  Is it possible to
restrict each user to only 5 logins on the server?  My goal is to avoid
having hundreds of these sshd processes running on the server, which
would slow it down.
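One hedged way to cap concurrent logins is pam_limits, which sshd on Debian consults via /etc/pam.d/sshd. The group name below is a made-up placeholder:

```
# /etc/security/limits.conf -- cap each member of group "rsyncers"
# (a hypothetical group) at 5 concurrent logins:
@rsyncers    hard    maxlogins    5

# /etc/pam.d/sshd must include this line (present by default on Debian):
session    required     pam_limits.so
```

One caveat: maxlogins counts utmp login entries, and non-interactive sessions such as `rsync -e ssh` are not always recorded the same way as interactive logins, so test this before relying on it.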





Host configuration

2010-06-28 Thread Mag Gam
I manage close to 4k servers at my research lab. Most of these hosts
are used for research simulations.

My problem is that most of these hosts need to have a very similar
configuration: the same /etc/passwd, /etc/group,
/etc/hosts.allow, etc. Are there any tools that will help
me do something like this? I don't need anything to change the files
for me; instead, simply alert me that "host abc has a different file
from the mastercopy"
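A minimal sketch of the "alert only" check. The master-copy path, host list, and fetch-over-ssh step are all assumptions for illustration:

```shell
#!/bin/sh
# Compare a host's copy of a file against the master copy; print an
# alert on mismatch, change nothing.
# Usage: check_file <master_file> <fetched_host_copy> <hostname>
check_file() {
    if ! cmp -s "$1" "$2"; then
        echo "host $3 has a different $(basename "$1") from the mastercopy"
    fi
}

# Example wiring (hypothetical hosts and paths; fetch however you like):
# for h in host001 host002; do
#     scp -q "$h:/etc/passwd" "/tmp/passwd.$h"
#     check_file /srv/mastercopy/etc/passwd "/tmp/passwd.$h" "$h"
# done
```

Existing tools (cfengine, and later puppet) can also run in a report-only mode, but a checksum loop like this is the simplest thing that answers the question as asked.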





mkfifo question

2010-04-26 Thread Mag Gam
Hello,

Currently I download a file (which is about 700MB) with wget, place
it in /tmp, and do my task on the file. If I have to work with 10 of
these files at a single time, I have to have 10 files in /tmp.

I was wondering if anyone has a clever idea how I can avoid having all
10 in /tmp and instead have a pipe or a "virtual file", so the program
thinks there is actually a file there.  Is it possible to fake out the
OS like that?
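A named pipe (mkfifo) can stand in for the file, provided the consuming program reads it sequentially from start to finish (a FIFO cannot be seeked). The URL and consuming program below are placeholders:

```shell
# Create a FIFO in a scratch directory; it looks like a file to the
# consumer but stores nothing on disk.
dir=$(mktemp -d)
fifo="$dir/download.fifo"
mkfifo "$fifo"

# Writer: stream the download into the pipe in the background.
# The writer blocks until something opens the pipe for reading.
printf 'pretend this came from wget\n' > "$fifo" &
# In real use:  wget -q -O "$fifo" "http://example.org/big.iso" &

# Reader: the consuming program opens the FIFO as if it were a file.
cat "$fifo"     # prints: pretend this came from wget

rm -r "$dir"
```

With ten downloads you would create ten FIFOs; they occupy no space in /tmp beyond the inode, since data flows through them rather than landing on disk.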





DNS round robin with NFS

2010-01-01 Thread Mag Gam
I have 3 NFS servers which are serving the exact same data - ISO images.

I have close to 50 clients who access this data, so I manually mount
1/3 of the clients on serverA, 1/3 on serverB, and the remainder on
serverC.

I was wondering if I can place the 3 NFS servers in a pool and have all
the clients access the pool.

Any thoughts?
TIA
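Plain DNS round robin is just multiple A records behind one name; a BIND zone-file sketch with placeholder names and addresses:

```
; zone-file sketch: one name, three A records, rotated by the resolver
nfs    IN  A  192.0.2.11   ; serverA
nfs    IN  A  192.0.2.12   ; serverB
nfs    IN  A  192.0.2.13   ; serverC
```

Clients would then all mount `nfs.example.org:/export`. One caveat: round robin only spreads the initial lookup, and each NFS client stays pinned to whichever address it mounted, so this balances mounts across the pool rather than balancing live traffic, and it provides no failover by itself.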





Re: checking for multicast traffic

2009-09-16 Thread Mag Gam
How do I send multicast traffic? How do I receive it?
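A few hedged ways to poke at this from the command line (the interface name is a placeholder, and tcpdump needs root):

```
# Which multicast groups has the kernel joined on this interface?
ip maddr show dev eth0        # or: netstat -g

# Watch multicast frames on the wire ("multicast" is a pcap filter):
tcpdump -i eth0 -n multicast

# Generate some multicast traffic: ping the all-hosts group
# (hosts on the local LAN commonly answer):
ping -c 3 224.0.0.1
```

Wireshark, as suggested below, does the same job as tcpdump with a GUI on top.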




2009/9/16 Γιώργος Πάλλας :
> Mag Gam wrote:
>> How can I check if my adapter is sending and receiving multicast traffic?
>>
>>
>>
>
> I'd say install wireshark, capture the traffic and examine it...
>
>





checking for multicast traffic

2009-09-15 Thread Mag Gam
How can I check if my adapter is sending and receiving multicast traffic?





HP MSA60

2009-08-18 Thread Mag Gam
I was wondering if anyone here has experience with HP MSA60 with P400
and P800 controller.  How reliable are they for a 24x7 shop?

TIA





Re: sudo logging

2009-07-25 Thread Mag Gam
interesting indeed

Does anyone have any experience with:
http://freshmeat.net/projects/sudoscript/


On Fri, Jul 24, 2009 at 9:55 AM, Berthold Cogel wrote:
> Chris Davies schrieb:
>> Berthold Cogel  wrote:
>>> We're doing something like this in /etc/sudoers:
>>
>>
>>> Cmnd_Alias      SHELLS =        /bin/sh, \
>>>                                 /bin/bash, \
>>>                                 [...]
>>
>>> TRUSTED_USR  ALL = NOPASSWD:    ALL, !SHELLS, NOROOT
>>
>> Surely this breaks trivially?
>>
>>     ln -s /bin/bash /tmp/somethingelse
>>     sudo /tmp/somethingelse
>>
>> Chris
>>
>>
>
> Of course you're right...
>
> But in this case TRUSTED_USR means what it says... It's only to prevent
> colleagues from shooting themselves in the foot.
>
> For the very special setup on some of our systems they need a lot of
> permissions, but we don't want them to be root, for several reasons.
> Surely they can break the setup if they want, but they gain nothing if
> they do.
>
> It's not a setup we make for every user. But it would be a waste to
> define each single command in this case. If they really need to be root,
> they can use sudosh.
>
>
> Berthold
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>





Re: sudo logging

2009-06-11 Thread Mag Gam
I thought there was already a tool which integrates sudo and script.
That is the combination I was looking for.



On Thu, Jun 11, 2009 at 2:02 AM, Frank Lin PIAT wrote:
> On Wed, 2009-06-10 at 19:57 -0400, Mag Gam wrote:
>> We have many users at my university engineering lab. Some professors
>> need to run commands as root and as other users, so we decided to set up
>> sudo permissions. I was wondering if there is a way to log all commands
>> when they sudo into another account or the root account.
>
>
> You should only grant the right to execute some specific commands. One
> should not be able to use sudo to run a shell as root.
> That way, each command is executed using "sudo something" and each
> executed command is logged.
>
>> I would like to even capture key strokes...
>
> Once your users are root, you have to trust them (they can kill whatever
> tool you run) but you can check the command "script".
>
> One idea... If you want to log all what is typed, you could tell your
> users to connect to another box, from where they would telnet to the
> target box. You can then use a sniffer to log the connection.
>
> BTW, make sure this is legal in your country.
>
> Franklin
>
>





sudo logging

2009-06-10 Thread Mag Gam
We have many users at my university engineering lab. Some professors
need to run commands as root and as other users, so we decided to set up
sudo permissions. I was wondering if there is a way to log all commands
when they sudo into another account or the root account.

I would like to even capture key strokes...


TIA
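For the keystroke-capture part: later sudo versions (1.7.4 and up, which is newer than what Debian 5 ships, so this is version-dependent) have built-in I/O logging, replayable with sudoreplay. A sketch:

```
# /etc/sudoers (edit with visudo) -- record input/output of sudo sessions
Defaults    log_input
Defaults    log_output
Defaults    iolog_dir=/var/log/sudo-io

# Replay a recorded session later:
#   sudoreplay -l             # list recorded sessions
#   sudoreplay <session-id>   # play one back
```

On older sudo, the sudoscript/sudosh wrappers mentioned in this thread are the usual workaround.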





Re: wget or curl?

2009-06-10 Thread Mag Gam
thank you all


On Wed, Jun 10, 2009 at 9:55 AM, Oliver Schneider wrote:
> I tend to prepare my scripts for either of these, by saying:
>
> wget <url> || curl -L -O <url>
>
> curl supports of course more protocols, but then if you only need FTPS I 
> haven't seen that explicitly mentioned by wget docs either. But I might err 
> ...
>
> // Oliver
>
> PS: Had accidentally replied to the sender only.
>
>  Original-Nachricht 
>> Datum: Wed, 10 Jun 2009 07:58:13 -0500
>> Von: Mark Allums 
>> An: debian-user 
>> Betreff: Re: wget or curl?
>
>> Mag Gam wrote:
>> > I would like to automatically get files from an FTP server using TLS
>> > (FTPS). Is it better to use wget or curl?
>> >
>> > or is there another alternative?
>> >
>> > TIA
>>
>>
>>
>> If you have the resources (i.e., disk space, etc.) install both.  No
>> need to decide unless you must.  Some applications (scripts) expect
>> wget, others like curl.  I would say curl is more of a requirement, wget
>> is more optional.   But that is in my experience, others might have
>> other experiences.
>>
>> (This point of view is from a non-technical standpoint.  Looking forward
>> to reading what others have to say.)
>>
>>
>> Mark Allums
>
> --
> ---
> DDKWizard and DDKBUILD: <http://ddkwizard.assarbad.net>
>
> Trunk (potentially unstable) version: 
> <http://ddkwizard.assarbad.net/trunk/ddkbuild.cmd>
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>





wget or curl?

2009-06-10 Thread Mag Gam
I would like to automatically get files from an FTP server using TLS
(FTPS). Is it better to use wget or curl?

or is there another alternative?

TIA
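For what it's worth, curl handles explicit-TLS FTP natively; a hedged one-liner with placeholder host and credentials (wget releases of this era had no FTPS support, so curl is likely the easier route):

```
# --ssl-reqd aborts the transfer if the server refuses to upgrade the
# control connection to TLS (explicit FTPS on port 21):
curl --ssl-reqd --user myuser:mypass -O 'ftp://ftp.example.org/pub/file.tar.gz'
```

lftp (`set ftp:ssl-force true`) is another alternative if mirroring whole trees is the goal.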





Suggestion for a mail package

2009-06-05 Thread Mag Gam
We are planning to run an email server at my university. We would like
to use something that has a nice Web based gui for its configuration.
Does anyone have any good ideas? We have tried courier and exim, but
their web-based GUIs were not that good. Any other email packages out
there we can try?

TIA





Re: creating a compact binary

2009-04-07 Thread Mag Gam
I don't really care about the size, but I really want the entire rsync
distribution to be in one file.

What is the difference between a static binary and a standalone one?


On Tue, Apr 7, 2009 at 10:39 AM, Ron Johnson  wrote:
> On 2009-04-06 18:59, Mag Gam wrote:
>>
>> I was wondering if its possible to compile src code  (for example
>> rsync) to create 1 large binary. I want to do this to easily
>> distribute rsync.
>
> To what kind of platform?  You can't just require that certain libraries
> exist?
>
> Besides, static binaries are *huge*.  Definitely not compact.  Do you really
> mean stand-alone?
>
> --
> Scooty Puff, Sr
> The Doom-Bringer
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject
> of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>





Re: creating a compact binary

2009-04-07 Thread Mag Gam
correct. I want to make a statically linked binary for rsync.  Thanks for
stating the obvious.


On Tue, Apr 7, 2009 at 6:01 AM, Sharninder  wrote:
> On Tue, Apr 7, 2009 at 5:29 AM, Mag Gam  wrote:
>> I was wondering if its possible to compile src code  (for example
>> rsync) to create 1 large binary. I want to do this to easily
>> distribute rsync.
>>
>
> If I'm reading this correctly, what you need is compiling the rsync
> binary as a statically linked binary. Read through the rsync Makefile
> to figure out how to do this.
>
> --
> Sharninder
> http://geekyninja.com/
>





creating a compact binary

2009-04-07 Thread Mag Gam
I was wondering if it's possible to compile source code (for example
rsync) into one large binary. I want to do this to easily
distribute rsync.

TIA





Re: stitching a filesystem

2009-03-17 Thread Mag Gam
I was wondering if there was some clever technique you can do with
symbolic links and exports.

On Tue, Mar 17, 2009 at 8:30 PM, Douglas A. Tutty  wrote:
> On Tue, Mar 17, 2009 at 07:22:13PM -0400, Mag Gam wrote:
>> Is it possible to combine 2 filesystems so they would appear as one?
>>
>> For example, you have 2 volumes:
>> /vol0 (500GB)
>> /vol1 (500GB)
>>
>> I want to export both of them so it would appear as 1TB volume.
>
> Are they already ext? or are they plain partitions.  If just plain
> partitions (or other block devices), then use LVM to combine them, put a
> filesystem on the resultant LV, then mount that.
>
> Doug.
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>





stitching a filesystem

2009-03-17 Thread Mag Gam
Is it possible to combine 2 filesystems so they would appear as one?

For example, you have 2 volumes:
/vol0 (500GB)
/vol1 (500GB)

I want to export both of them so it would appear as 1TB volume.


Any ideas?

TIA
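Following the LVM suggestion made elsewhere in this thread, a hedged sketch with placeholder device names (this wipes any existing data on those devices, and assumes the two 500GB volumes are plain block devices rather than live filesystems):

```
pvcreate /dev/sdb1 /dev/sdc1            # mark both 500GB devices for LVM
vgcreate vg_big /dev/sdb1 /dev/sdc1     # pool them into one volume group
lvcreate -l 100%FREE -n lv_big vg_big   # one LV spanning both devices
mkfs.ext3 /dev/vg_big/lv_big            # filesystem on the combined ~1TB
mount /dev/vg_big/lv_big /export
```

If the volumes already carry data, union-style approaches (e.g. mhddfs or aufs) can overlay two populated filesystems instead, at the price of extra complexity; symlinks alone cannot make two exports appear as a single 1TB volume to an NFS client.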





RAID controller data loss

2009-03-08 Thread Mag Gam
Hello Debian Users:

Have there been any instances where a RAID controller wiped out data,
in particular where the logical drives it creates became unusable and
the only solution was to recreate them?

Just curious.

TIA





Re: OT: file system versus databases

2009-02-24 Thread Mag Gam
Paul:

Thanks for the response.

> I will guess that, at your company, there are very few updates of this
> transaction data, New transactions are added to the record as they
> happen.  The *.txt files may contain references to prior transactions,
> but these are human readable text, not some sort of computer
> actionable links, or pointers.

Correct, most of this data is actually sorted by date.

For instance, country/2005/01/01/foo.txt

So, the data is  never duplicated.

I was just curious how this method is so much faster than a DBMS for
access. When we know what data we are looking for by date,
retrieval is very fast on a Unix filesystem.

For instance: grep "something" country/2005/??/01/foo.txt

It gives an instant result. That's how we are using it, and we love it.

TIA



On Tue, Feb 24, 2009 at 10:06 AM, Paul E Condon
 wrote:
> On 2009-02-23_23:28:22, Mag Gam wrote:
>> I was curious why this was faster:
>>
>> At our company we store close to 50TB of  certain transaction data and
>> we stored it on a UNIX filesystem raw without any DBMS help.
>
> I will guess that, at your company, there are very few updates of this
> transaction data, New transactions are added to the record as they
> happen.  The *.txt files may contain references to prior transactions,
> but these are human readable text, not some sort of computer
> actionable links, or pointers.
>
> But are you sure that your current system never loses any transaction
> history because of glitches in adding to the record? Are there
> back-references (in text) to transactions that can't be found in the
> record when someone decides to look for them? Does your company ever
> have a disagreement with a customer or a supplier in which your
> counter-party claims that your transaction record is not a record of
> what really happened, but just wishful thinking of your management?
>
> If not, you really don't need to trouble yourself about DBMS, but ...
>
> In a real, full up DBMS, there is something called 'ACID'.  This
> stands for four features that distinguish a real DBMS, from a
> not-so-real DBMS. They concern DBMS transactions, not real business
> transitions, but changes in the database that DBMS gurus call
> transactions.
>
> These DBMS transactions must be Atomic, Consistent, Isolated, and
> Durable. Those DBMS features are very difficult to implement,
> especially once one understands the full implications of what these
> words mean to real DBMS gurus. Google 'ACID' and follow the
> links. Read the docs of PostgreSQL, which is an open-source real DBMS'.
> Be cautious about MySQL. It has a long history of not being
> 'ACID', and of struggling to get there.
>
> But, by all means, don't get involved in DBMS if you don't really
> need it. OTOH, if the company really does need it, but doesn't
> realize the danger of not having it --- perhaps you can be a savior.
>
> HTH
>>
>> For example:
>> country/A/name/A.txt
>> country/B/name/B.txt
>> country/C/name/C.txt
>> and so on...
>> We have close to 500 million entries in this format.
>>
>>
>> When we do a read() on a file, its very fast and we enjoy it. Would we
>> get a similar performance if we use a database and index it?
>>
>> TIA
>>
>>
>> --
>> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
>> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>>
>>
>
> --
> Paul E Condon
> pecon...@mesanetworks.net
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
>
>





OT: file system versus databases

2009-02-23 Thread Mag Gam
I was curious why this is faster:

At our company we store close to 50TB of certain transaction data, and
we store it raw on a UNIX filesystem, without any DBMS help.

For example:
country/A/name/A.txt
country/B/name/B.txt
country/C/name/C.txt
and so on...
We have close to 500 million entries in this format.


When we do a read() on a file, it's very fast and we enjoy it. Would we
get a similar performance if we use a database and index it?

TIA
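For what it's worth, a toy version of that layout is easy to mock up and read against; everything below (paths, contents) is invented for illustration, not the real data:

```shell
# Recreate a tiny country/<X>/name/<X>.txt tree and read one entry.
# A plain read() is one open+read with no query parsing or locking;
# a DBMS adds those costs but buys indexed/range queries in return.
base=./mockdata
for c in A B C; do
    mkdir -p "$base/country/$c/name"
    printf 'transactions for %s\n' "$c" > "$base/country/$c/name/$c.txt"
done
cat "$base/country/B/name/B.txt"    # prints: transactions for B
```

At 500 million entries the interesting question is usually not single-file read speed but directory lookup cost, and whether you ever need a query the path scheme doesn't already encode; that is where a database index starts to win.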





Re: NIS tuning

2008-12-29 Thread Mag Gam
andy,

thanks for the response. I may try this.

On Mon, Dec 29, 2008 at 12:17 PM, Mag Gam  wrote:
> Hello All,
>
> We are using NIS for our university's mechanical/computer/civil
> engineering lab. We have near 4000 clients and 1 NIS server. We have 4
> global NIS servers, which are used throughout the university, but I
> replicate 1 NIS server nightly to be used for the 4000 clients.
> Obviously, we will get a lot of calls to the NIS server, and
> occasionally it crashes --rpc failure yp operation message -- and our
> authentication breaks. I was wondering if there is a way to optimize
> ypserv to serve more clients. Or is it possible to set up multiple
> NIS servers for a client, similar to multiple DNS servers in
> /etc/resolv.conf
>
>
> TIA
>





NIS tuning

2008-12-29 Thread Mag Gam
Hello All,

We are using NIS for our university's mechanical/computer/civil
engineering lab. We have near 4000 clients and 1 NIS server. We have 4
global NIS servers, which are used throughout the university, but I
replicate 1 NIS server nightly to be used for the 4000 clients.
Obviously, we will get a lot of calls to the NIS server, and
occasionally it crashes --rpc failure yp operation message -- and our
authentication breaks. I was wondering if there is a way to optimize
ypserv to serve more clients. Or is it possible to set up multiple
NIS servers for a client, similar to multiple DNS servers in
/etc/resolv.conf


TIA
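On the client side, ypbind does accept more than one server, much like resolv.conf lists multiple nameservers. A sketch of /etc/yp.conf (the domain and hostnames here are invented):

```
# /etc/yp.conf -- ypbind tries the listed servers for this domain
domain englab server nis1.example.edu
domain englab server nis2.example.edu
# alternatively, let ypbind find any responding server on the subnet:
# domain englab broadcast
```

This only spreads binding/failover load across servers; the maps themselves still have to be pushed to each slave with ypxfr/yppush.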





server upgrade question

2008-12-11 Thread Mag Gam
At my university we have 10 servers. Each server has 8 cores with 32
GIG of memory running Debian 4.0.  We have to give these servers to a
different department, and our Dean would like to consolidate 10
servers into 5 servers. The new server will have 16 cores with 64 GIG
of memory. Basically a 2:1 type of deal.

Since we are doing a 2:1, should we expect 2:1 performance? For
instance, most of our applications are heavy compute and memory
intensive applications. Would they run at the same speed, better, or
worse with this new setup? My guess is: the same?

Oh yeah, it will be running 4.0 :-)

TIA





Re: updatedb for very large filesystems

2008-10-07 Thread Mag Gam
Great. Thanks. Basically I have 500+ directories, each of which
has over 9000 files. I was wondering if there is a trick I can use.

TIA


On Thu, Oct 2, 2008 at 9:29 AM, James Youngman <[EMAIL PROTECTED]> wrote:
> On Thu, Oct 2, 2008 at 11:18 AM, Ron Johnson <[EMAIL PROTECTED]> wrote:
>> Since find is so disk-intensive, isn't this is only of benefit if /usr, /var
>> and /home are on different devices?
>
> Yes.   Disk-head-movement optimisation will not be implemented in
> findutils for another six weeks or so.
>
> James.
>
>





Re: updatedb for very large filesystems

2008-10-02 Thread Mag Gam
Well, I am more interested in searching a large networked filesystem.



On Thu, Oct 2, 2008 at 6:18 AM, Ron Johnson <[EMAIL PROTECTED]> wrote:
> On 10/02/08 04:28, James Youngman wrote:
>>
>> On Wed, Oct 1, 2008 at 12:15 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
>>>
>>> I was wondering if its possible to run updatedb on a very large
>>> filesystem (6 TB). Has anyone done this before? I plan on running this
>>> on a weekly basis, but I was wondering if updatedb was faster than a
>>> simple 'find'. Are there any optimizations in 'updatedb' ?
>>
>> With findutils you can update several parts of the directory tree in
>> parallel, or update various parts on a different time schedule.
>>
>> Here's an example with three directory trees searched in parallel with
>> one being searched remotely on another server and then combined with a
>> canned list of files from a part of the filesystem that never changes.
>>
>> find /usr -print0  > /var/tmp/usr.files0 &
>> find /var  -print0  > /var/tmp/var.files0 &
>> find /home -print0 > /var/tmp/home.files0 &
>> ssh nfs-server 'find /srv -print0' > /var/tmp/srv.files0 &
>> wait
>
> Since find is so disk-intensive, isn't this is only of benefit if /usr, /var
> and /home are on different devices?
>
>> sort -f -z /var/tmp/archived-stuff.files.0 /var/tmp/usr.files0
>> /var/tmp/var.files0 /var/tmp/home.files0 /var/tmp/srv.files0 |
>> /usr/lib/locate/frcode -0 > /var/tmp/locatedb.new
>> rm -f /var/tmp/usr.files0 /var/tmp/var.files0 /var/tmp/home.files0
>> /var/tmp/srv.files0
>>
>> cp /var/cache/locate/locatedb /var/cache/locate/locatedb.old
>> mv /var/tmp/locatedb.new /var/cache/locate/locatedb
>>
>>
>
>
> --
> Ron Johnson, Jr.
> Jefferson LA  USA
>
> "Do not bite at the bait of pleasure till you know there is no
> hook beneath it."  -- Thomas Jefferson
>
>





Re: updatedb for very large filesystems

2008-10-01 Thread Mag Gam
Thanks Sven. Is it possible to get the file owner and file size with
mlocate/updatedb?

I would like to get granular reports like that...


On Wed, Oct 1, 2008 at 7:48 AM, Sven Joachim <[EMAIL PROTECTED]> wrote:
> On 2008-10-01 13:15 +0200, Mag Gam wrote:
>
>> I was wondering if it's possible to run updatedb on a very large
>> filesystem (6 TB). Has anyone done this before?
>
> I don't have such luxurious filesystems, but it should certainly be
> possible.  It's just a matter of time (the number of files is what
> really counts, not the size of the filesystem).
>
>> I plan on running this
>> on a weekly basis, but I was wondering if updatedb was faster than a
>> simple 'find'. Are there any optimizations in 'updatedb' ?
>
> The updatedb implementation in the mlocate package has an important
> optimization: it reuses old entries for directories whose mtime didn't
> change since the last run.  So the second and subsequent runs should be
> considerably faster than the first.  Note, however, that mlocate is not
> available in Etch.
>
> The findutils version may be too slow to be run on a weekly basis on
> systems with many millions of files.
>
> Sven
>
>





updatedb for very large filesystems

2008-10-01 Thread Mag Gam
I was wondering if it's possible to run updatedb on a very large
filesystem (6 TB). Has anyone done this before? I plan on running this
on a weekly basis, but I was wondering if updatedb was faster than a
simple 'find'. Are there any optimizations in 'updatedb' ?

TIA





Re: kernel swap question

2008-09-14 Thread Mag Gam
Michael:

Interesting. That will just not use swap? So, it will FIFO pages into
physical memory?



On Sun, Sep 14, 2008 at 3:29 PM, Michael <[EMAIL PROTECTED]> wrote:
> On Sat, Sep 13, 2008 at 06:28:31PM -0400, Mag Gam wrote:
>> I have a system with 32GB of RAM. The application is designed so it
>> does not do sequential reads and it does random operations. The
> application is memory intensive and I would like it not to swap. I want
> it to use physical memory as much as possible. Once the memory is read
> and operated on, I want that page to disappear and not even go to
> paging status. What is the best VM tuning for this?
>
> Add to /etc/sysctl.conf this string vm.swappiness=0
>
>





tar block question

2008-09-14 Thread Mag Gam
I have to tar up many small files. I have over 30k files in a
directory. What is the best way to do this? If I tar it, it takes a
long time, but from what I have been reading, if I increase my blocksize
it should go faster. I am not sure if that will work, though. Any
thoughts?

TIA
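For what it's worth, the blocking factor is tar's -b flag, in 512-byte units, so -b 2048 writes 1 MiB records. A small sketch (file names and counts are made up); on plain disks the gain is usually modest, since per-file metadata work dominates with tens of thousands of small files, while tapes benefit much more:

```shell
# Make a directory of small files, archive it with 1 MiB records
# (-b is in 512-byte units, so -b 2048 = 1 MiB), then list it back.
src=./smallfiles
mkdir -p "$src"
for i in $(seq 1 200); do
    echo "data $i" > "$src/file$i"
done
tar -b 2048 -cf small.tar -C "$src" .
tar -tf small.tar | grep -c '^\./file'    # prints 200
```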





Re: kernel swap question

2008-09-13 Thread Mag Gam
Also, my I/O is pretty fast. It can do 250 MB/sec read/write randomly.
If that helps..

TIA


On Sat, Sep 13, 2008 at 6:28 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
> I have a system with 32GB of RAM. The application is designed so it
> does not do sequential reads and it does random operations. The
> application is memory intensive and I would like it not to swap. I want
> it to use physical memory as much as possible. Once the memory is read
> and operated on, I want that page to disappear and not even go to
> paging status. What is the best VM tuning for this?
>
> TIA
>





kernel swap question

2008-09-13 Thread Mag Gam
I have a system with 32GB of RAM. The application is designed so it
does not do sequential reads and it does random operations. The
application is memory intensive and I would like it not to swap. I want
it to use physical memory as much as possible. Once the memory is read
and operated on, I want that page to disappear and not even go to
paging status. What is the best VM tuning for this?

TIA





fuse question

2008-09-04 Thread Mag Gam
I am trying to use fuse to mount up a user-created filesystem.


$ /sbin/lsmod  | grep fuse
fuse   40404  0

$ dd if=/dev/zero bs=1024k count=10 of=fs

$ /sbin/mkfs.ext3 fs

$ fusermount fs mnt
fusermount: old style mounting not supported

I am not sure what I am doing wrong.

Any ideas?
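One likely issue: an ext3 image is not a FUSE filesystem at all, so fusermount is the wrong tool; the kernel's loop device is what mounts it. A sketch (assumes e2fsprogs is installed; the mount step itself needs root):

```shell
# Create a 10 MB image and format it; -F tells mkfs it is a regular
# file rather than a block device, so it will not prompt.
dd if=/dev/zero bs=1024k count=10 of=fs
mkfs.ext3 -F -q fs
mkdir -p mnt
# The mount itself goes through the kernel loop driver and needs root:
#   mount -o loop fs mnt
```

FUSE only comes into play for userspace filesystems (sshfs, archive mounts, and the like), which is why fusermount rejects an ext3 image.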





NFS export question

2008-08-30 Thread Mag Gam
I noticed when exporting NFS we are specifying fsid=X

Is it possible to auto increment this fsid? I am exporting over 70
directories and keeping track of this fsid number is becoming a task
of its own.

Any thoughts? How is everyone else doing this?

TIA
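One way around the bookkeeping is to generate /etc/exports from the directory list, numbering fsids in a stable order; the paths and client spec below are placeholders for the real ones:

```shell
# Emit one export line per directory with a sequential fsid.
# Keep the input order stable: a directory's fsid must not change
# between regenerations, or clients will see stale file handles.
i=1
for d in /export/projA /export/projB /export/projC; do
    printf '%s 10.0.0.0/24(rw,sync,fsid=%d)\n' "$d" "$i"
    i=$((i + 1))
done > exports.generated
cat exports.generated
```

Feeding the loop from a sorted master list (or a directory listing) keeps the numbering reproducible across regenerations.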





Re: syslog and email

2008-08-28 Thread Mag Gam
Thanks all for the responses. It seems I am going the syslog-ng
way; however, I am having trouble setting up the email alerts.
I have written a script to email me, but I am not sure how to set this
up in syslog-ng.

For example, in /var/log/messages if I get a message "foo" I would
like it to email me. Is that possible?

TIA
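Roughly, syslog-ng can hand matching messages to an external program; the exact syntax varies between syslog-ng versions, and s_all and the mail script below are stand-ins for your own source definition and script:

```
# match "foo" anywhere in the message and pipe matches to a script
filter f_foo { match("foo"); };
destination d_mail { program("/usr/local/bin/mailalert.sh"); };
log { source(s_all); filter(f_foo); destination(d_mail); };
```

The script receives each matching line on stdin; rate-limit it, or one log storm becomes a mail storm.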


On Wed, Aug 27, 2008 at 2:33 AM, Paul Johnson <[EMAIL PROTECTED]> wrote:
> On Mon, 2008-08-25 at 07:58 -0400, Mag Gam wrote:
>> Currently at my university we have 50 servers in our physics lab, and
>> I am forwarding all syslog messages to 1 server. Is it possible to
>> email me an alert once a particular alert occurs? Instead of
>> constantly parsing the log file, I would like something a bit more
>> realtime.
>
> You might consider installing logcheck on the hosts you wish to receive
> alerts from.
>
> --
> Paul Johnson
> [EMAIL PROTECTED]
>





syslog and email

2008-08-25 Thread Mag Gam
Currently at my university we have 50 servers in our physics lab, and
I am forwarding all syslog messages to 1 server. Is it possible to
email me an alert once a particular alert occurs? Instead of
constantly parsing the log file, I would like something a bit more
realtime.

TIA





Re: interface for tar

2008-08-21 Thread Mag Gam
FUSE looks really good. I am going to investigate it.

TIA


On Thu, Aug 21, 2008 at 10:49 AM, Shachar Or <[EMAIL PROTECTED]> wrote:
> On Thursday 21 August 2008 16:18, Tzafrir Cohen wrote:
>> On Thu, Aug 21, 2008 at 06:29:00AM -0400, Mag Gam wrote:
>> > Sharchar:
>> >
>> > Can I do that with autofs? Lets say I have a directory called
>> > /home/$userid/image_files; can I have autofs to look for
>> > /home/$userid/& and automatically mount and unmount these fs images?
>> > Lets say I keep the fs images in /home/$user/.isos
>>
>> Here is what we have for auto-mounting ISO images.
>
> Actually, I think that the .isos directory is badly named because we're not
> talking about ISO9660 at all.
>>
>> $ cat /etc/auto.master
>> #
>> /var/www/netinst/cds/etc/auto.netinst-cd
>>
>> $ cat /etc/auto.netinst-cd
>> cd1 -fstype=iso9660,ro,user,loop :/path/to/image1.iso
>> cd2 -fstype=iso9660,ro,user,loop :/path/to/image2.iso
>>
>> This will still be require a file system to mount.
>>
>> --
>> Tzafrir Cohen | [EMAIL PROTECTED] | VIM is
>> http://tzafrir.org.il || a Mutt's
>> [EMAIL PROTECTED] ||  best
>> ICQ# 16849754 || friend
>
> --
> Shachar Or | שחר אור
> http://ox.freeallweb.org/
>
>


Re: interface for tar

2008-08-21 Thread Mag Gam
Sharchar:

Can I do that with autofs? Let's say I have a directory called
/home/$userid/image_files; can I have autofs to look for
/home/$userid/& and automatically mount and unmount these fs images?
Let's say I keep the fs images in /home/$user/.isos

Any thoughts about this?

Either way, thanks for all of your help guys!

TIA

On Thu, Aug 21, 2008 at 1:18 AM, Shachar Or <[EMAIL PROTECTED]> wrote:
> On Wednesday 20 August 2008 13:50, Mag Gam wrote:
>> David:
>>
>> Do you have some sort of script to manage this? I am a little hesitant
>> to give professors mkfs and mount sudo access. Is there a way around
>> this?
>
> You can specify the 'user' option in fstab so that usres can mount the
> relevant filesystem.
>
> If you precreate the files with the filesystems in them, it may cover it.
>>
>> On Wed, Aug 20, 2008 at 12:13 AM, Mag Gam <[EMAIL PROTECTED]> wrote:
>> > WOW!
>> >
>> > Very nice ideas.
>> >
>> > I like the dd idea. What command would I use for that? Also, the files
>> > are coming from NFS; how can I help this?  Any ideas for this?
>> >
>> > On Tue, Aug 19, 2008 at 10:24 PM, David Fox <[EMAIL PROTECTED]> wrote:
>> >> On Tue, Aug 19, 2008 at 5:40 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
>> >>> At my university we run fluid dynamic simulations. These simulations
>> >>> create many small files (30,000) per hour. Their size is very small
>> >>> (20k to 200k). Instead of having this on the filesystem since it take
>> >>
>> >> My approach:
>> >>
>> >> make a sufficiently-sized file using dd if=/dev/zero of=/bigfile bs=1m
>> >> count=1000
>> >>
>> >> size so that you have enough room, and room for growth, of course
>> >>
>> >> Make a filesystem inside of that file (reiserfs might be a good choice
>> >> since it is well-designed to handle lots of smallish files, although
>> >> "small" by that definition may be much smaller than 200k)
>> >>
>> >> Mount that file in loopback mode prior to running your simulations,
>> >> and (after moving the files over to the new filesystem) direct all
>> >> filesystem traffic to use that 'filesystem' which may entail only
>> >> something simple as cd'ing into the 'filesystem' and starting work.
>> >>
>> >>
>> >> --
>> >> To UNSUBSCRIBE, email to [EMAIL PROTECTED]
>> >> with a subject of "unsubscribe". Trouble? Contact
>> >> [EMAIL PROTECTED]
>
> --
> Shachar Or | שחר אור
> http://ox.freeallweb.org/
>
>


Re: interface for tar

2008-08-20 Thread Mag Gam
Thank you all.

I am very interested in the fuse AND the fs image solution. Is it
possible to integrate that into auto mounter or autofs type solution?
I don't want too many open mounts. If the /tmp/mountpoint is not open,
I would like the mount point to be dropped automatically.



On Wed, Aug 20, 2008 at 11:06 AM, Glennie Vignarajah <[EMAIL PROTECTED]> wrote:
> Le Wednesday 20 August 2008 vers 02:40, Mag Gam("Mag Gam"
> <[EMAIL PROTECTED]>) a écrit:
>
> Hello,
>
>> I would like to tar them
>> per day into one tar file. I would then like an interface similar
>> to zsh/ksh to "cd tar.file" and use it as a typeical shell.
>
> Try fuse[1]. It has a driver for tar(ArchiveFileSystems) files[2].
>
> 1: http://fuse.sourceforge.net/
> 2: http://fuse.sourceforge.net/wiki/index.php/FileSystems
> --
> http://www.glennie.fr
> The reasonable man adapts himself to the world; the unreasonable one
> persists in trying to adapt the world to himself. Therefore all
> progress depends on the unreasonable man.
>
>


Re: interface for tar

2008-08-20 Thread Mag Gam
David:

Do you have some sort of script to manage this? I am a little hesitant
to give professors mkfs and mount sudo access. Is there a way around
this?

On Wed, Aug 20, 2008 at 12:13 AM, Mag Gam <[EMAIL PROTECTED]> wrote:
> WOW!
>
> Very nice ideas.
>
> I like the dd idea. What command would I use for that? Also, the files
> are coming from NFS; how can I help this?  Any ideas for this?
>
>
>
> On Tue, Aug 19, 2008 at 10:24 PM, David Fox <[EMAIL PROTECTED]> wrote:
>> On Tue, Aug 19, 2008 at 5:40 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
>>> At my university we run fluid dynamic simulations. These simulations
>>> create many small files (30,000) per hour. Their size is very small
>>> (20k to 200k). Instead of having this on the filesystem since it take
>>
>> My approach:
>>
>> make a sufficiently-sized file using dd if=/dev/zero of=/bigfile bs=1m
>> count=1000
>>
>> size so that you have enough room, and room for growth, of course
>>
>> Make a filesystem inside of that file (reiserfs might be a good choice
>> since it is well-designed to handle lots of smallish files, although
>> "small" by that definition may be much smaller than 200k)
>>
>> Mount that file in loopback mode prior to running your simulations,
>> and (after moving the files over to the new filesystem) direct all
>> filesystem traffic to use that 'filesystem' which may entail only
>> something simple as cd'ing into the 'filesystem' and starting work.
>>
>>
>





Re: interface for tar

2008-08-19 Thread Mag Gam
WOW!

Very nice ideas.

I like the dd idea. What command would I use for that? Also, the files
are coming from NFS; how can I help this?  Any ideas for this?



On Tue, Aug 19, 2008 at 10:24 PM, David Fox <[EMAIL PROTECTED]> wrote:
> On Tue, Aug 19, 2008 at 5:40 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
>> At my university we run fluid dynamic simulations. These simulations
>> create many small files (30,000) per hour. Their size is very small
>> (20k to 200k). Instead of having this on the filesystem since it take
>
> My approach:
>
> make a sufficiently-sized file using dd if=/dev/zero of=/bigfile bs=1m
> count=1000
>
> size so that you have enough room, and room for growth, of course
>
> Make a filesystem inside of that file (reiserfs might be a good choice
> since it is well-designed to handle lots of smallish files, although
> "small" by that definition may be much smaller than 200k)
>
> Mount that file in loopback mode prior to running your simulations,
> and (after moving the files over to the new filesystem) direct all
> filesystem traffic to use that 'filesystem' which may entail only
> something simple as cd'ing into the 'filesystem' and starting work.
>
>





interface for tar

2008-08-19 Thread Mag Gam
At my university we run fluid dynamic simulations. These simulations
create many small files (30,000) per hour. Their size is very small
(20k to 200k). Instead of having these on the filesystem, since they take
up inode space, I would like to tar them per day into one tar file. I
would then like an interface similar to zsh/ksh to "cd tar.file" and
use it as a typical shell. Do tools like that exist? That would be
very beneficial for us, and our system admin would not have to yell at
us.
Any thoughts or ideas?

TIA





Re: bandwidth tool

2008-07-07 Thread Mag Gam
Well,

I need something realtime and accurate.

Any thoughts?

TIA

On Mon, Jul 7, 2008 at 11:03 AM, M. Piscaer <[EMAIL PROTECTED]> wrote:
> Mag Gam schreef:
>>
>> Is there a tool to measure network traffic? I am using ifstat but it's
>> reporting wrong statistics. I am trying to get something similar to
>>
>> eth0 , 16Mb/sec
>> eth1,  10Mb/sec
>>
>> etc..
>>
>> I need something simple :-)
>>
>> TIA
>>
> I use nload. It gives you an grafic of the past half minute.
>
> Regards,
>
> Michiel Piscaer
>
>





bandwidth tool

2008-07-05 Thread Mag Gam
Is there a tool to measure network traffic? I am using ifstat but it's
reporting wrong statistics. I am trying to get something similar to

eth0 , 16Mb/sec
eth1,  10Mb/sec

etc..

I need something simple :-)

TIA


Re: memory question (hardware)

2008-07-05 Thread Mag Gam
Thanks for helping me out with this.



On Sat, Jul 5, 2008 at 4:27 PM, Jeff Soules <[EMAIL PROTECTED]> wrote:

> Latency, risk of failure, sure... also sheer design complexity (since you
> have
> to solve the geometry of fitting more circuitry in the same space), and
> subsequent complexity of fabrication (since you have to actually make
> those tiny little circuits).  There's also heat dissipation, which isn't so
> bad for memory but is still nontrivial.
> Using smaller circuit paths means that the control signals wind up being
> effectively "noisier" too (or so I understand), which affects a whole slew
> of things, including memory timings among others.
>
> At least this is all what I remember...!
>
> On Sat, Jul 5, 2008 at 2:24 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
> >
> > Thanks for the responses.
> >
> > What is the engineering challenge of having more memory in a single die?
> I expect latency would be an issue. Also, as Brad mentioned, greater risk of
> failure.
> >
> > Anything else?
> >
> >
> >
> > On Fri, Jul 4, 2008 at 11:04 AM, <[EMAIL PROTECTED]> wrote:
> >>
> >> >
> >> >
> >> >
> >> > Original Message 
> >> >From: [EMAIL PROTECTED]
> >> >To: debian-user@lists.debian.org
> >> >Subject: RE: memory question (hardware)
> >> >Date: Thu, 3 Jul 2008 01:08:10 -0400
> >> >
> >> >>I am curious...
> >> >>
> >> >>
> >> >>When memory is manufactured why does a stick of 4GB memory cost 2.5
> >> >times of
> >> >>2GB memory? Is the manufacturing process that much different to
> >> >justify the
> >> >>cost?
> >>
> >> Obviously we can't open up the sticks and look at the chips, but the
> >> usual answer is that the 2G used "the older" technology and the 4G
> >> used the "newer" technology and the chip vendor is trying to recoup
> >> development costs.  As the "newer" technology becomes the "older"
> >> technology the cost will go down.  With Moore's "law" this gives the
> >> chip vendor about 18 months to recoup most of the development costs
> >> and some profit.
> >> Larry
> >> >>
> >>
> >>
> >>
> >
>
>


Re: memory question (hardware)

2008-07-05 Thread Mag Gam
Thanks for the responses.

What is the engineering challenge of having more memory in a single die? I
expect latency would be an issue. Also, as Brad mentioned, greater risk of
failure.

Anything else?



On Fri, Jul 4, 2008 at 11:04 AM, <[EMAIL PROTECTED]> wrote:

> >
> >
> >
> > Original Message 
> >From: [EMAIL PROTECTED]
> >To: debian-user@lists.debian.org
> >Subject: RE: memory question (hardware)
> >Date: Thu, 3 Jul 2008 01:08:10 -0400
> >
> >>I am curious...
> >>
> >>
> >>When memory is manufactured why does a stick of 4GB memory cost 2.5
> >times of
> >>2GB memory? Is the manufacturing process that much different to
> >justify the
> >>cost?
>
> Obviously we can't open up the sticks and look at the chips, but the
> usual answer is that the 2G used "the older" technology and the 4G
> used the "newer" technology and the chip vendor is trying to recoup
> development costs.  As the "newer" technology becomes the "older"
> technology the cost will go down.  With Moore's "law" this gives the
> chip vendor about 18 months to recoup most of the development costs
> and some profit.
> Larry
> >>
>
>
>
>


auto mounter and nfs question

2008-07-05 Thread Mag Gam
At my university we have Debian and Red Hat (blah :-) ) servers. Their
primary purpose is to serve files to users.

We are trying to figure out an easy way to manage and export the mount points
via NFS to Linux labs which have around 500 clients.

Each server has a mount point like this:

Server1
/building_name/projectid0
/building_name/projectid1
/building_name/projectid2

Server2
/building_name/projectid3
/building_name/projectid4
/building_name/projectid5


And the clients mount it up like this when needed
/net/projectid0
/net/projectid5


We also have a autofs and NIS server. I was wondering if this is possible to
do. The project names are constantly changing. Any thoughts?

TIA
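Since autofs can read its maps straight from NIS, one option is to keep the churning project list in a single NIS map and never touch the 500 clients; a sketch (the map name, servers, and paths are invented stand-ins):

```
# /etc/auto.master on each client: mount projects on demand under /net
/net  yp:auto.projects  --timeout=300

# entries in the NIS map auto.projects, one per project:
projectid0  server1:/building_name/projectid0
projectid3  server2:/building_name/projectid3
```

When a project moves or is renamed, you edit and push the NIS map once; idle mounts expire after the timeout, so clients pick up the change on the next access.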


memory question (hardware)

2008-07-02 Thread Mag Gam
I am curious...


When memory is manufactured, why does a stick of 4GB memory cost 2.5 times as
much as a 2GB stick? Is the manufacturing process that much different to justify the
cost?


Re: root file system question

2008-06-22 Thread Mag Gam
Very good points.

Trying to understand Linux from a theoretical point of view.


On Sun, Jun 22, 2008 at 12:24 PM, Gilles Mocellin <[EMAIL PROTECTED]>
wrote:

> Le Sunday 22 June 2008 18:08:45 Ron Johnson, vous avez écrit :
> > On 06/22/08 11:01, Mag Gam wrote:
> > > OK, so in theory, assuming no processes use HD resources, there
> > > should be no HD activity.
> >
> > Swap.  Even if you have adequate memory, Linux will occasionally
> > move things to swap space.  Of course, you can disable swap...
> [...]
>
> If you use ext3 filesystems, there's the journal flush happening every 5
> minutes.
>


Re: root file system question

2008-06-22 Thread Mag Gam
OK, so in theory, assuming no processes use HD resources, there should be
no HD activity.



On Sun, Jun 22, 2008 at 11:36 AM, David <[EMAIL PROTECTED]> wrote:

> On Sun, Jun 22, 2008 at 5:18 PM, Mag Gam <[EMAIL PROTECTED]> wrote:
> > This is more of a theoretical Unix question,
> >
> > When there are no users on the system, the system is idle, would there
> still
> > be I/O activity on the root disks?
> >
> > If so, what processes will be doing the I/O ?
>
> Depends entirely on what services you have running, crontab jobs, etc.
>
> If you have 0 services installed then there should be no harddrive
> activity after booting.
>
> You can check this with vmstat
>
>


root file system question

2008-06-22 Thread Mag Gam
This is more of a theoretical Unix question,

When there are no users on the system, the system is idle, would there still
be I/O activity on the root disks?

If so, what processes will be doing the I/O ?


TIA


Re: indexing particular file types

2008-06-14 Thread Mag Gam
Yes. This is exactly what I intend to do. Thanks for the feedback.

If you have any advice on this please don't hesitate to share with us :-)

TIA


On Wed, Jun 11, 2008 at 1:23 PM, Ron Johnson <[EMAIL PROTECTED]> wrote:

>
> On 06/10/08 05:43, Mag Gam wrote:
> > Is it possible to index all symbolic links (source and destination) of a
> > filesystem? For example, in my university we have a project where
> > professors use vast amount of disk space -- over 10 TB a month. We
> > provide the professors a mount point, /barXX and export that mount
> > point. The professor then symbolic links that filesystem like, ln -s
> > /nfsexport/barXX June10_data. I would like to keep track of these
> > symbolic links. Is there a good method for this? Is there a feature in
> > ext3 which will let me keep track of these symbolic links. I can always
> > do a find /fs and compare inode info, but that would just take too
> long...
>
> A relatively simple python or Perl script would do the trick.
>
> - --
> Ron Johnson, Jr.
> Jefferson LA  USA
>
> "Kittens give Morbo gas.  In lighter news, the city of New New
> York is doomed."
>
>


swap space on a large system

2008-06-11 Thread Mag Gam
Typically, we create a partition to capture a kernel dump when the system
crashes. Therefore, a system with 16GB of RAM will have a partition with
16GB.

How would I scale this for a system with 64 or 128GB of memory? Any thoughts?


TIA


indexing particular file types

2008-06-10 Thread Mag Gam
Is it possible to index all symbolic links (source and destination) of a
filesystem? For example, in my university we have a project where professors
use vast amount of disk space -- over 10 TB a month. We provide the
professors a mount point, /barXX and export that mount point. The professor
then symbolic links that filesystem like, ln -s /nfsexport/barXX
June10_data. I would like to keep track of these symbolic links. Is there a
good method for this? Is there a feature in ext3 which will let me keep
track of these symbolic links. I can always do a find /fs and compare inode
info, but that would just take too long...

TIA
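Short of filesystem support, one pass with GNU find gets both ends of every link without any per-file scripting; a sketch against a throwaway tree (directory names are made up to mirror the layout above):

```shell
# Index every symlink under a tree as "link<TAB>target"; GNU find's
# %l directive prints the target, so no separate readlink is needed.
top=./linktree
mkdir -p "$top/nfsexport/bar01"
ln -sf "$PWD/linktree/nfsexport/bar01" "$top/June10_data"
find "$top" -type l -printf '%p\t%l\n' > symlink.index
cat symlink.index
```

Diffing successive runs of the index gives you created and removed links without comparing inodes; note that -printf is GNU-find specific.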


Re: RAID for large disks

2008-06-08 Thread Mag Gam
Well said.

Thankyou and everyone


On Sun, Jun 8, 2008 at 10:14 AM, Damon L. Chesser <[EMAIL PROTECTED]> wrote:

> On Sun, 2008-06-08 at 07:33 -0400, Mag Gam wrote:
> > Again, I appreciate the responses.
> >
> > Damon:
> >
> > I am dealing with HW RAID. I looked for the "geometry" for my
> > controller, but could not find it.
> >
> >
> http://h2.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=101&prodTypeId=329290&prodSeriesId=1157686
> >
> > I am very curious about the geometry too...
> >
> >  I don't know enough to pick and choose the optimal setting. Since I
> > am working in the academic field, I would like to really understand
> > this "geometry setting". Can someone please elaborate on this topic?
> >
> > TIA
>
> Mag,
>
> It looks like your controller does not let you set very much manually:
> page 13 shows you can set the Stripe Size.   At this point, jump in and
> play.  But again, I don't think it will amount to a hill of beans.  The
> controller "masks" all reads and writes to the physical drives and the
> OS is ignorant of the underlining details.  IF you needed to set it up a
> certain way, you would KNOW it.  And even so, it looks like the only
> things you can adjust with this controller is the stripe size.  In
> almost all cases the "optimal" setting is the default of the controller
> when you use it's setup "wizard" thingy, what ever it may be called.
> All other deviations are for very specific instructions in some manual
> for some application (or you spend much time bench testing to arrive at
> what works best for you with that given hardware).  Some advice:  unless
> you are being graded in some way on the maximum through put of this
> server:  Jump in, install it and be done.
>
> We (IT) don't know the "optimum" settings of such things.  We play with
> it per some set of (specific) directions or we have a test box we can
> bench test to meet some objective with.  This concept is a moving,
> slippery thing.  It all depends on your network throughput, latency, cpu
> load, bus load, I/O of every other component, I/O of the controller,
> it's (the controller) memory, physical hd read/write speed and probably
> a few more I can't think of right now.
>
> Kill this beast, install the os using defaults for the HDs, see if you
> can serve up the files at a rate that works for you, if not, look it
> over again.
>
> BTW, on the next model server you get, everything you learn here will
> not be valid unless it is the exact same set of hardware.  That is why I
> would not fret over this, there is no "great maxim" to be learned except
> "can I set up THIS box to work at the rate I need?"  Sometimes the
> answer is no, but that falls onto the procurement end of the deal.
>
> A DBA will spend much time telling a sys admin what strip to put onto a
> RAID (using a hardware controller), but that is from the application mfg
> having benched tested a specific model of server with very specific
> hardware to arrive at the best throughput possible with a given hardware
> load out.  This is not your situation.  The only answer is to test.
>
> Anyway, that is my 2C worth.
>
>
> --
> Damon L. Chesser
> [EMAIL PROTECTED]
> http://www.linkedin.com/in/dchesser
>


Re: RAID for large disks

2008-06-08 Thread Mag Gam
Again, I appreciate the responses.

Damon:

I am dealing with HW RAID. I looked for the "geometry" for my controller,
but could not find it.

http://h2.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=101&prodTypeId=329290&prodSeriesId=1157686

I am very curious about the geometry too...

I don't know enough to pick and choose the optimal setting. Since I am
working in the academic field, I would like to really understand this
"geometry setting". Can someone please elaborate on this topic?

TIA


On Sat, Jun 7, 2008 at 8:18 PM, Mike Bird <[EMAIL PROTECTED]> wrote:

> On Sat June 7 2008 17:04:02 Mag Gam wrote:
> > Does this page,
> > http://www.redhat.com/archives/linux-lvm/2006-October/msg00014.html,
> hold
> > any validity? The poster makes a good argument, but by seeing Damon's
> > response it makes no sense to go thru the trouble. I would be willing to
> > try this if I get some assistance...
>
> If performance is uber-critical for your application then you
> need to benchmark various different configurations under
> realistic multitasking loads.
>
> For most applications, it's really not worth worrying about
> the details at this level.
>
> --Mike Bird
>
>


Re: RAID for large disks

2008-06-07 Thread Mag Gam
Thanks, that's the exact same question I have.

Does this page,
http://www.redhat.com/archives/linux-lvm/2006-October/msg00014.html, hold
any validity? The poster makes a good argument, but after seeing Damon's
response it makes no sense to go through the trouble. I would be willing to
try this if I get some assistance...






On Sat, Jun 7, 2008 at 7:39 PM, Brian McKee <[EMAIL PROTECTED]> wrote:

> On Sat, Jun 7, 2008 at 8:27 AM, Mag Gam <[EMAIL PROTECTED]> wrote:
> >
> > I have a RAID controller with 256MB of on board cache and its connected
> to
> > 12 500GB SATA disks. I am planning to create 2 RAID groups (6 disks
> each),
> > but I don't know what is the optimal stripe size should be.
> >
> > Also, once I stripe on the RAID controller I am planning to use LVM. Is
> > striping a good idea? What should I consider for the filesystem?
>
> I think you are confusing stripe and stride (and others may not
> realize it's still a stripe in RAID5 even though it's not a mirror).
>
> This email seems to be apropos -
> <http://www.redhat.com/archives/linux-lvm/2006-October/msg00014.html>
> but I dont' have any advice for you.
>
> Brian
>


Re: RAID for large disks

2008-06-07 Thread Mag Gam
Thanks for the responses all.

I want RAID 5 but without mirroring. The data is important, but not that
important. I am planning to use LVM.

If the controller creates a stripe size of 16k, do I need to do anything
special with physical extents (in pvcreate or vgcreate)?
Do I need to do anything specific when creating an LV? I plan on striping my
LV to use extra spindles. Do I need to create my ext3 filesystem with any
particular settings? I am looking for an optimal tuning guide with emphasis
on performance versus redundancy.
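[Editorial sketch on the physical-extent question. Values are assumptions from this thread (vgcreate's default 4MiB extents, a 16k controller chunk): extents stay stripe-aligned as long as the extent size is a multiple of the chunk size, which the defaults already satisfy.]

```shell
# Check that the VG extent size is a multiple of the RAID chunk size,
# so LV boundaries fall on stripe boundaries.
pe_kb=4096    # vgcreate default: 4MiB physical extents
chunk_kb=16   # controller stripe/chunk from this thread
if [ $(( pe_kb % chunk_kb )) -eq 0 ]; then
  echo "aligned"
else
  echo "misaligned: pick vgcreate -s so the extent is a chunk multiple"
fi
```

With 16k chunks, any power-of-two extent size from vgcreate -s divides evenly, so pvcreate/vgcreate defaults are normally fine; it is the LV stripe size (lvcreate -I), not the extent size, that should match the controller's chunk.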



On Sat, Jun 7, 2008 at 4:27 PM, Andrew M.A. Cater <
[EMAIL PROTECTED]> wrote:

> On Sat, Jun 07, 2008 at 12:52:24PM -0400, Mag Gam wrote:
> > With the RAID array I am planning to use RAID 5 so my data is still
> > protected. My confusion is going with RAID striping (picking the right
> > size). Also, Does the filesystem layout need to be specific when I do
> > striping? If I am using 128k stripes, should I start my filesystem on
> 129k
> > and end with max-(128+1k)?
> >
>
> You have four or five considerations.
>
> You mentioned you were going to use your 12 disks as two RAID arrays.
>
> If one is going to be for your data and one for a backup of that data -
> 2 x RAID 5 and then RAID1 [5 x 500 = ~2.5TB mirrored].
>
> If you need maximum data storage - all your disks in one array in RAID
> 5.
>
> 11 x 500, one spare - 5.5TB but you rely on the spare :)
>
> If you need data resilience - all your disks in one array in RAID 6 or
> RAID 10
>
> Hardware RAID control is lovely - but you may need battery backup on
> some cards to avoid problems on delayed writes. Hardware RAID control
> also ties you to one manufacturer's cards and/or recovery utilities if a
> RAID fails and you have to recover data.
>
> If you go the hardware route: take the card defaults.
>
> Linux mdadm works well and, under some circumstances, can approach the
> performance of a dedicated hardware RAID card - disks can be swapped
> into any Linux box to recover the RAID.
>
> You can then add LVM on top.
>
> HTH,
>
> Andy
>
>


Re: RAID for large disks

2008-06-07 Thread Mag Gam
With the RAID array I am planning to use RAID 5 so my data is still
protected. My confusion is with RAID striping (picking the right
size). Also, does the filesystem layout need to be specific when I do
striping? If I am using 128k stripes, should I start my filesystem at 129k
and end at max-(128+1k)?
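[Editorial sketch on the 129k question: alignment is usually achieved by making the partition's start offset a multiple of the stripe size, not by starting at stripe+1k. 512-byte sectors are assumed; sector 2048 is the common 1MiB-aligned start, sector 63 the legacy default of that era.]

```shell
# aligned_to_stripe START_SECTOR STRIPE_KB: succeed if the partition's
# byte offset is a multiple of the stripe size (512-byte sectors assumed).
aligned_to_stripe() {
  start_sector=$1
  stripe_kb=$2
  [ $(( start_sector * 512 % (stripe_kb * 1024) )) -eq 0 ]
}

aligned_to_stripe 2048 128 && echo "aligned"      # 1MiB start: aligned
aligned_to_stripe 63 128   || echo "misaligned"   # legacy 63-sector start
```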


On Sat, Jun 7, 2008 at 12:38 PM, Damon L. Chesser <[EMAIL PROTECTED]> wrote:

> On Sat, 2008-06-07 at 11:15 -0500, Ron Johnson wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA1
> >
> > On 06/07/08 07:27, Mag Gam wrote:
> > >
> > > I have a RAID controller with 256MB of on board cache and its connected
> > > to 12 500GB SATA disks. I am planning to create 2 RAID groups (6 disks
> > > each), but I don't know what is the optimal stripe size should be.
> >
> > That's very controler-specific.  Read the manual.
> >
> > > Also, once I stripe on the RAID controller I am planning to use LVM. Is
> > > striping a good idea? What should I consider for the filesystem?
> >
> > Striping is a GREAT idea IFF you want serious speed, but don't care
> > about your data.  If one of the disks goes flaky, *all* the data on
> > the stripeset goes poof.
> >
> > So, *never* use striping on a production server!!  Unless you hate
> > the company, are vindictive, and are about to leave.
> >
> > Otherwise, use RAID 0, 10, 0+1 or 5.
> >
> > If you make huge RAID sets, I don't see the purpose of LVM.  OTOH,
>
> for one, if you make a bad partition choice.  With LVM, you can shrink
> one and grow the other as needed.
>
> > if you make 6 mirror sets, use LVM to make a 6 "device" unit.
> >
> > Also, Damon is correct about booting.  In fact, I'd have a separate
> > boot device.
> >
> > Lastly, to what media are you going to back all this data up?  How
> > frequently?
> >
> > - --
> > Ron Johnson, Jr.
> > Jefferson LA  USA
>
> To back up Ron's input:
>
> http://www.linuxjunkies.org/html/LVM-HOWTO.html#s8  it is a bit dated,
> but the info is good.
>
>
> --
> Damon L. Chesser
> [EMAIL PROTECTED]
> http://www.linkedin.com/in/dchesser
>


Re: RAID for large disks

2008-06-07 Thread Mag Gam
Damon,

I haven't even approached the file system level yet. The application is a
basic fileserver which will host our professors' mechanical engineering
images. These images can be anywhere from 20MB to 300MB, so I would consider
them "normal files".

I am hoping some hardware people can chime in about the RAID configuration
first. I have plenty of RAM on the server (12GB) and a fast RAID controller,
so I would like to get this going first and then worry about the file
system, unless people feel this needs a more holistic approach.

Any thoughts?



On Sat, Jun 7, 2008 at 9:59 AM, Damon L. Chesser <[EMAIL PROTECTED]> wrote:

> On Sat, 2008-06-07 at 08:27 -0400, Mag Gam wrote:
> >
> > I have a RAID controller with 256MB of on board cache and its
> > connected to 12 500GB SATA disks. I am planning to create 2 RAID
> > groups (6 disks each), but I don't know what is the optimal stripe
> > size should be.
>
> Are you going to use the RAID controller to make the raid (ie, they will
> be hardware raid and the machine and the OS will not know of it)?  If
> so, I would go with the controller defaults with out overriding reasons
> to change them.  One such reason I can think of is an application such
> as oracle which has very detailed instructions on what kind of
> strip/raid you need for a particular use.
> >
> > Also, once I stripe on the RAID controller I am planning to use LVM.
> > Is striping a good idea?
> This, I don't know.
> >  What should I consider for the filesystem?
>
> Again, it depends on your use.  Lots of real big files, you might want
> something besides ext3.  Lots of little or just "normal" files, ext3
> should work just fine for you.  There are some file system "experts" on
> this list that can fill in the details.  As a disclaimer, I have only
> used ext3 and have never had to use anything different.  But again, your
> "Killer app" might have very specific requirements (again, oracle is
> very specific in it's recommendations and I assume any good app will
> tell you the optimum set up for it's self) however here are some things
> to read to fill in the time for you :)
> http://fsbench.netnation.com/ <--Performance comparison: Linux
> filesystems.
>
> http://en.wikipedia.org/wiki/Comparison_of_file_systems
>
> http://linuxreviews.org/sysadmin/filesystems
>
> http://www.linfo.org/filesystem.html
>
> No matter what FS you choose, I would NOT deviate from having a /boot in
> ext3.  The filesystem has very good recovery tools and is well
> documented.  I might also not use anything but ext3 for the / as well
> and put /kill_app on the optimal type of fs for it's self.  If XFS is
> the best for your app, having /boot and / in ext3 will not affect the
> app.  This might be a prejudice I have since I am very comfortable
> working in ext3 and not so in say, Reisers, especially in file recovery
> operations or resizing.
>
> HTH
> --
> Damon L. Chesser
> [EMAIL PROTECTED]
> http://www.linkedin.com/in/dchesser
>


RAID for large disks

2008-06-07 Thread Mag Gam
I have a RAID controller with 256MB of onboard cache, and it's connected to
12 500GB SATA disks. I am planning to create 2 RAID groups (6 disks each),
but I don't know what the optimal stripe size should be.

Also, once I stripe on the RAID controller I am planning to use LVM. Is
striping a good idea? What should I consider for the filesystem?


TIA
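[Editorial sketch for the filesystem side: ext3 can be told the RAID geometry at mkfs time. The values below are illustrative assumptions, not measurements from this setup: 4k blocks, a 128k chunk, 5 data disks in a 6-disk RAID5 set, and a placeholder device name. Check mke2fs(8) for the -E stride/stripe-width options on your e2fsprogs version.]

```shell
# stride       = RAID chunk size / filesystem block size
# stripe_width = stride * number of data-bearing disks
chunk_kb=128
block_kb=4
data_disks=5          # 6-disk RAID5 -> 5 data disks per stripe
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_disks ))
echo "mkfs.ext3 -E stride=${stride},stripe-width=${stripe_width} /dev/sdX1"
```

This prints the suggested invocation (stride=32, stripe-width=160 for these assumed values); rerun the arithmetic with whatever chunk size the controller actually reports.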