On Saturday 31 July 2004 00:20, Steven Critchfield wrote:
> Actually, it isn't VoIP data yet; VoIP is Voice over Internet Protocol.
> The 1000Hz interrupt is still just digitizing the audio off the PSTN
> link. When it comes time to read/write VoIP data, it is likely 20ms of
> audio, plus headers an
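A rough back-of-the-envelope for that packet size (my arithmetic, assuming
G.711 at 8000 samples/sec over RTP, not figures from the quoted mail):

    20 ms * 8000 samples/s * 1 byte/sample = 160 bytes of audio payload
    160 + 12 (RTP) + 8 (UDP) + 20 (IP)     = 200 bytes per packet
    1000 ms / 20 ms                        = 50 packets/s per direction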
On Fri, 2004-07-30 at 18:55, Andrew Kohlsmith wrote:
> On Friday 30 July 2004 19:51, Mike Benoit wrote:
> > Tuning these [PCI latencies] should allow you to give your TDM cards
> > long burst lengths, and make your IDE devices very preemptable...
>
> I would have figured you want very short burst lengths
On Friday 30 July 2004 19:51, Mike Benoit wrote:
> Tuning these [PCI latencies] should allow you to give your TDM cards
> long burst lengths, and make your IDE devices very preemptable...
I would have figured you want very short burst lengths to prevent any one
device from hogging the PCI bus and
Someone mailed me off list and suggested the below:
Tuning these [PCI latencies] should allow you to give your TDM cards
long burst lengths, and make your IDE devices very preemptable...
A decent article with info on PCI latency (and IRQs, etc.) is at:
http://www-106.ibm.com/developerworks/l
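For anyone wanting to experiment, the latency timer can be read and changed
at runtime with setpci from pciutils. A minimal sketch (the bus addresses
below are made up; find your own with lspci, and note setpci values are hex):

    lspci                                  # locate the TDM card and IDE controller
    setpci -s 01:04.0 latency_timer        # read the TDM card's current value
    setpci -s 01:04.0 latency_timer=f8     # long bursts for the TDM card
    setpci -s 00:1f.1 latency_timer=20     # short bursts for the IDE controller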
On Wed, 2004-07-21 at 12:14, Mike Benoit wrote:
> I have a P3-800 with two IDE drives in a software RAID1 configuration.
> Each drive is on a separate IDE channel. Now anytime there is HD
> activity, I hear "beeps" and "cutting out" on a call using the X100P
> card.
Wow, I'm seeing exactly the same
In an article on IDE vs. SCSI I read that MTBF numbers for IDE were
frequently calculated at 8 hours on, 16 hours off per day (assuming desktop
usage), but SCSI drives were calculated at 24 hours on per day. So even though
the MTBF numbers look the same ... The main reason is, reportedly, better
quality
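To make that duty-cycle point concrete (my illustration, not from the
article):

    8 h/day  * 365 = 2,920 powered-on hours/year   (IDE assumption)
    24 h/day * 365 = 8,760 powered-on hours/year   (SCSI assumption)

So a desktop-rated IDE drive run 24/7 burns through its assumed annual hours
roughly three times faster than the duty cycle its MTBF figure was based on.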
On Wed, Jul 21, 2004 at 06:15:23PM -0400, Andrew Kohlsmith said:
> On Wednesday 21 July 2004 16:33, Steven Critchfield wrote:
> > Software raid is bad. IDE hardware raid isn't much better. Software raid
> > is always going to eat your system alive since the CPU has to be busy
> > with 2 or more writes as opposed to its normal 1.
> "Steven" == Steven Critchfield <[EMAIL PROTECTED]> writes:
Steven> oddly enough, there isn't much if any difference these days at
Steven> the physical level. It is just the interface and the set of
Steven> specs on the interface. SCSI drives usually will give you
Steven> warning of their pro
On Thursday 22 July 2004 05:46, Kevin Walsh wrote:
Some datapoints of my own:
Supermicro motherboard, single Xeon 2.6 (HT), software RAID1 on SCSI using
Seagate ST39173LC drives:
# hdparm -tT /dev/md0
/dev/md0:
Timing buffer-cache reads: 128 MB in 0.36 seconds = 355.56 MB/sec
Timing buffered disk reads:
Steven Critchfield [EMAIL PROTECTED] wrote:
> BTW, my raid card on my Dell 2450 had this output
> nash5:/home/critch# hdparm -tT /dev/sda
>
> /dev/sda:
> Timing buffer-cache reads: 128 MB in 0.61 seconds = 209.84 MB/sec
> Timing buffered disk reads: 64 MB in 2.52 seconds = 25.40 MB/sec
>
My
Mike Benoit wrote:
I have a P3-800 with two IDE drives in a software RAID1 configuration.
Each drive is on a separate IDE channel. Now anytime there is HD
activity, I hear "beeps" and "cutting out" on a call using the X100P
card.
I ran the zttest program, and discovered HD activity would drop th
I hope this one is within context for the thread.
I have been pondering for a while on building a high availability
asterisk cluster...
I know it'd be a matter of having a master 'service' router, selecting
from a pool of asterisk servers (at least two), and if any of them falls
down, the other would
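The snippet breaks off, but a crude sketch of the sort of watchdog that idea
implies might look like this (the hostname and the take_over_ip helper are
hypothetical; port 5038 assumes the Asterisk manager interface is enabled):

    #!/bin/sh
    # Poll the primary's manager port; claim its service IP if it stops answering.
    while true; do
        if ! nc -z -w 2 asterisk-primary 5038; then
            /usr/local/sbin/take_over_ip   # hypothetical: bring up alias IP, gratuitous ARP
        fi
        sleep 5
    done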
On Jul 21, 2004, at 7:01 PM, Steven Critchfield wrote:
BTW, my raid card on my Dell 2450 had this output
nash5:/home/critch# hdparm -tT /dev/sda
/dev/sda:
Timing buffer-cache reads: 128 MB in 0.61 seconds = 209.84 MB/sec
Timing buffered disk reads: 64 MB in 2.52 seconds = 25.40 MB/sec
and I d
On Thu, 2004-07-22 at 12:56 +1200, wrote:
> > I would be interested in seeing if other people can reproduce low
> > zttest accuracy rates with their mainboards. zttest is in the zaptel/
> > directory, and you can run it while Asterisk happily chugs along
> > handling calls.
> >
> > What I usually do is run zttest in one window
On Wed, 2004-07-21 at 17:36, Scott Laird wrote:
> On Jul 21, 2004, at 1:33 PM, Steven Critchfield wrote:
> >
> > Software raid is bad. IDE hardware raid isn't much better. Software raid
> > is always going to eat your system alive since the CPU has to be busy
> > with 2 or more writes as opposed to its normal 1.
> I would be interested in seeing if other people can reproduce low
> zttest accuracy rates with their mainboards. zttest is in the zaptel/
> directory, and you can run it while Asterisk happily chugs along
> handling calls.
>
> What I usually do is run zttest in one window, then in another window
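The message is cut off here, but the second window is presumably there to
generate disk load while zttest reports its timing accuracy. A minimal way
to reproduce that (the file path is arbitrary):

    # window 1, from the zaptel source directory:
    ./zttest

    # window 2: generate sustained disk writes and watch zttest's accuracy
    dd if=/dev/zero of=/tmp/zttest-load bs=1M count=512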
No good reason, except that the box may be used for something else in
the future.
On Wed, 2004-07-21 at 17:26, Scott Laird wrote:
> On Jul 21, 2004, at 4:53 PM, Joshua McClintock wrote:
> > Our production environment is using a 4 port 3ware 8500 series card with
> > 2 drives (mirrored) on the pstn (2 t1 cards) machine and an 8 port 3ware
> > 8500 series with 8 drives (raid5) on the pbx/vm machine.
On Jul 21, 2004, at 4:25 PM, Kevin P. Fleming wrote:
Scott Laird wrote:
That hasn't been my experience at all. Frankly, I've never seen a
cheap (<$3k) hardware RAID controller that can touch software RAID's
performance on Linux, especially in "challenging" setups, like
RAID-5. Sure, software RAID eats more CPU, but most PCs have CPU to spare
I didn't want to turn this into a software vs. hardware raid debate, or IDE
vs. SCSI. I was more curious about the PCI bus/interrupt issues and the
mainboard. I only have 1 line going in to this asterisk server, so CPU
usage is not an issue whatsoever. Even during a raid rebuild.
Asterisk's CPU usage do
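As an aside: the kernel does let you watch and throttle a software raid
rebuild, which can help keep rebuild traffic off the bus during calls. A
sketch (the 5000 KB/s cap is an arbitrary example value):

    cat /proc/mdstat                                 # rebuild progress
    echo 5000 > /proc/sys/dev/raid/speed_limit_max   # cap rebuild at ~5 MB/s per device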
On Jul 21, 2004, at 4:53 PM, Joshua McClintock wrote:
Our production environment is using a 4 port 3ware 8500 series card with
2 drives (mirrored) on the pstn (2 t1 cards) machine and an 8 port 3ware
8500 series with 8 drives (raid5) on the pbx/vm machine.
Flawless so far.
Why so many drives for
Our production environment is using a 4 port 3ware 8500 series card with
2 drives (mirrored) on the pstn (2 t1 cards) machine and an 8 port 3ware
8500 series with 8 drives (raid5) on the pbx/vm machine.
Flawless so far.
On Wed, 2004-07-21 at 16:25, Kevin P. Fleming wrote:
> Scott Laird wrote:
>
Scott Laird wrote:
That hasn't been my experience at all. Frankly, I've never seen a cheap
(<$3k) hardware RAID controller that can touch software RAID's
performance on Linux, especially in "challenging" setups, like RAID-5.
Sure, software RAID eats more CPU, but most PCs have CPU to spare the
On Jul 21, 2004, at 1:33 PM, Steven Critchfield wrote:
Software raid is bad. IDE hardware raid isn't much better. Software raid
is always going to eat your system alive since the CPU has to be busy
with 2 or more writes as opposed to its normal 1.
That hasn't been my experience at all. Frankly, I've never seen a cheap
(<$3k) hardware RAID controller that can touch software RAID's
performance on Linux, especially in "challenging" setups, like RAID-5.
On Wednesday 21 July 2004 16:33, Steven Critchfield wrote:
> Software raid is bad. IDE hardware raid isn't much better. Software raid
> is always going to eat your system alive since the CPU has to be busy
> with 2 or more writes as opposed to its normal 1.
I've never had issues with IDE RAID1 --
On Wed, 2004-07-21 at 14:14, Mike Benoit wrote:
> I have a P3-800 with two IDE drives in a software RAID1 configuration.
> Each drive is on a separate IDE channel. Now anytime there is HD
> activity, I hear "beeps" and "cutting out" on a call using the X100P
> card.
>
> I ran the zttest program,
Mike Benoit wrote:
I have a P3-800 with two IDE drives in a software RAID1 configuration.
Each drive is on a separate IDE channel. Now anytime there is HD
activity, I hear "beeps" and "cutting out" on a call using the X100P
card.
I ran the zttest program, and discovered HD activity would drop the
On Jul 21, 2004, at 12:36 PM, [EMAIL PROTECTED] wrote:
I have a P3-800 with two IDE drives in a software RAID1 configuration.
Each drive is on a separate IDE channel. Now anytime there is HD
activity, I hear "beeps" and "cutting out" on a call using the X100P
card.
I ran the zttest program, and di
I have a P3-800 with two IDE drives in a software RAID1 configuration.
Each drive is on a separate IDE channel. Now anytime there is HD
activity, I hear "beeps" and "cutting out" on a call using the X100P
card.
I ran the zttest program, and discovered HD activity would drop the
accuracy down to b
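None of the quoted replies shows the eventual fix, but on 2004-era kernels
the usual first checks for exactly this symptom (IDE activity breaking
Zaptel timing) were DMA and IRQ unmasking on the drives, e.g. (device names
assumed for two drives on separate channels):

    hdparm -d1 -u1 /dev/hda   # enable DMA, unmask other IRQs during disk I/O
    hdparm -d1 -u1 /dev/hdc   # same for the drive on the second IDE channel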