Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stefan Monnier
>> # hdparm -I /dev/sdc | grep "Sector size"
>> Logical  Sector size:   512 bytes
>> Physical Sector size:  4096 bytes

> This is reported by the drive to hdparm.  Only the 512 is used by the
> kernel.  It has no knowledge of the 4KB physical block size and can't
> use it because the drive reports 512 bytes to the kernel as the
> physical block size.

Isn't it rather that the kernel chooses to use only the logical
sector size?  Where/when does the drive report a 512B physical
sector size?

In any case, the issue is probably not really in the kernel but in the
filesystems and partitioning tools: all that's really needed to use the
drive efficiently is for fdisk/parted and for mkfs to be told (and make
use of) the physical block size.  Of course, maybe a good way to provide
this info is to teach the kernel about it so those tools don't need to
use side-band info via hdparm.
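In fact, recent kernels already export both sizes via sysfs, so the tools don't strictly need hdparm side-band info.  A quick sketch (device names depend on your system, and the attributes only exist on kernels recent enough to report them):

```shell
# Print logical vs. physical block size for every block device the
# kernel knows about; skip entries that lack the sysfs attributes.
for q in /sys/block/*/queue; do
  [ -e "$q/physical_block_size" ] || continue
  printf '%s: logical=%s physical=%s\n' \
    "${q%/queue}" \
    "$(cat "$q/logical_block_size")" \
    "$(cat "$q/physical_block_size")"
done
```

A 4KB-sector "hybrid" drive shows logical=512 with physical=4096 here.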

>> Don't we already waste that space with our filesystems? Ext2 cannot use
>> blocks smaller than 1024 Bytes, as far as I can see. And by default even
>> 4kB are used for small filesystems (<5GB on my /).
> This depends on the FS and how it allocates space for files.

Indeed: for mail servers, there are 2 issues:
- actual disk space use, which does not have to depend on the block size
  (file systems can use sub-blocks, they just often elect not to).

  > traditional 512 byte/sector disk.  XFS can pack multiple small files
  > into a single 4KB block extent.  It is able to do this thanks to
  > delayed allocation.

  Indeed, and for that reason 4KB physical blocks wouldn't cause
  additional disk space usage.

- performance accessing those small files.  Arguably, when accessing
  such small files, the bandwidth is typically low, so even in the worst
  case where 4KB blocks inflate the bandwidth used by a factor of 8,
  it's still not necessarily going to be part of the bottleneck.


Stefan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/jwvd3o2ycsu.fsf-monnier+gmane.linux.debian.u...@gnu.org



Re: Can Debian Backup ntfs File System?

2011-01-11 Thread Bob

On 01/06/2011 12:50 AM, Patrick Ouellette wrote:

Zeroth rule of support - never trust your users to tell you the
entire story (corollary - people lie about what happened)

First rule of support - before deleting *anything* make a
backup copy yourself


Ding ding ding!  I usually use dd and/or ntfsimage to do a full backup that
I leave on my personal file server for a while before I touch anything.


Now that they're so cheap I often just replace the HDD and leave the old one
in a cupboard as a backup; it's saved my bacon, and that of my clients,
friends, and relatives, a couple of times.
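A sketch of that image-first rule, with a temporary loop file standing in for the real /dev/sdXN (the names here are made up for illustration):

```shell
src=$(mktemp)   # stand-in for the NTFS partition you're about to touch
img=$(mktemp)   # the backup image kept safe on the file server
# Fabricate some "partition" contents, then image them with dd.
dd if=/dev/urandom of="$src" bs=1K count=64 2>/dev/null
# conv=noerror,sync keeps going past read errors on a failing disk,
# padding unreadable blocks instead of aborting the whole image.
dd if="$src" of="$img" bs=64K conv=noerror,sync 2>/dev/null
cmp -s "$src" "$img" && echo "image matches source"
```

Only after the image verifies do you start repairing or deleting anything on the original.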



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/4d2d27c2.3060...@homeurl.co.uk



Re: USB key requirement.

2011-01-11 Thread Stefan Monnier
>> The eToken is basically a smartcard that plugs into USB.
> I still don't really understand the difference apart from it containing
> a key that I match against.  Which is in essence what I was asking to
> do with a USB block device which looks much cheaper than the eToken.

Typically, the difference is that it's not just a key you can read, but
instead the key is kept hidden inside the smartcard and you can only ask
the smartcard to use the key.

Think of it this way: you can ask the smartcard to decrypt some
encrypted data you provide, and if it succeeds, it proves to you that it
knows the secret key.  But you can't directly read the secret key, which
means you can't easily copy the smartcard.

Real smartcards probably don't work the way I described, but I hope it
gives you some idea of how a smartcard can be different from a plain USB
mass storage holding a secret key.
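To make the idea concrete, here is a toy challenge-response sketch (not how real cards work, as noted above; openssl's HMAC stands in for the card's crypto, and the key file stands in for storage the host is imagined to be unable to read):

```shell
# The "card" keeps its secret in a file only its one exposed operation
# consults.  The host records a challenge/response pair at enrollment,
# then later re-asks the same question to authenticate the card.
keyfile=$(mktemp)
openssl rand -hex 32 > "$keyfile"

card_respond() {   # the only operation the card exposes to the host
  printf '%s' "$1" \
    | openssl dgst -sha256 -hmac "$(cat "$keyfile")" \
    | awk '{print $NF}'
}

challenge=$(openssl rand -hex 16)        # recorded at enrollment time
expected=$(card_respond "$challenge")
# ...later, to authenticate:
[ "$(card_respond "$challenge")" = "$expected" ] && echo "card authenticated"
```

The point of the sketch: the host verifies knowledge of the secret without ever reading it, so the card can't trivially be copied the way a file on USB mass storage can.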


Stefan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/jwvipxuyd94.fsf-monnier+gmane.linux.debian.u...@gnu.org



Re: Re (4): OpenVPN server mode usage.

2011-01-11 Thread Mike Bird
On Tue January 11 2011 19:23:50 PETER EASTHOPE wrote:
> r...@dalton:/etc/openvpn# ip addr show

I don't see the OpenVPN tunnel.

What happens on "/etc/init.d/openvpn start"?

FWIW, I use "dev tun0" (or "dev tunN" for some N) instead of
"dev tun" in the OpenVPN config.

--Mike Bird


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110953.13741.mgb-deb...@yosemite.net



Re: [SOLVED] Is squeeze compatible with WD20EARS and other 2TB drives?

2011-01-11 Thread Stefan Monnier
>> I have no idea what makes you so angry against "green" drives.
> I am against using any drive, at this time, in Linux, with a native
> sector size other than 512 bytes.

Again, I fail to see why you're so emotional about it.  I understand you
don't recommend people buy such drives unless they know what they're
doing, because their performance is sensitive to "irrelevant" details
and is not great in any case (except maybe for streaming where the
bandwidth is perfectly good, tho).

That's OK: these drives aren't sold as speed demons.

> The Linux partitioning tools still do not easily/properly handle these
> hybrid drives with 4096 bytes per sector that translate 512-byte
> sectors to the host.

Indeed, although it would be very easy to make them do a better job.

> B.  Doesn't care about performance of any kind, and is happy with sub
>  20MB/s rates.

I don't care much about performance: I have a WD10EADS in a wl700ge, for
example (yes, that's a home router with a 266MHz MIPS cpu and 64MB of
RAM: no fan, no noise).

But I also use another WD10EADS in my desktop, where it is faster (not
by a large amount, but I did notice it, even tho I'm rather insensitive
to such issues) than the barracuda it replaced.  Admittedly, these
WD10EADS don't use the 4KB blocks, so their performance is more in line
with the usual.

> I'm down on these drives due to the maniacal 8 second head park
> interval, which likely does more mechanical damage than it saves power
> in dollar terms.

There is simply no concrete evidence to back this urban legend.

Think about it: the head-park interval is not a marketing point, so it
would be both technically and commercially trivial for WD to make it
longer.  Would WD really be so dumb as to keep the interval short just
to screw their customers?  And the same goes for all the laptop drive
manufacturers.

> I'm down on the fact that people buy them to save power, when they
> really don't save that much power compared to other drives.
> Not enough to notice on an electric bill.

Doesn't hurt anyone, does it?

> The sector alignment issue bugs me the most.  Second on the list is
> that these WD Green drives were designed to NOT be used, rather than
> used.  The only way to get significant power savings is to sleep the
> drive most of the 24 hour day.  BUT, all the other drives save almost
> as much power in their sleep modes.

Yes, those drives are mostly meant for use cases where they're not
spinning 24/7 (e.g. home server to store your videos, music, photos,
backups, ...).  And yes, most other 3½" drives consume a comparable
amount of power when idle, but most non-green 3½" drives can't be spun
down aggressively enough without wearing out much too quickly.

> So again, where's the advantage?  Some of the drives are quieter by
> 3-4 dB.  If your chassis sits on the floor you won't notice much
> difference.

My desktop tower sits on the floor.  And yes, I and the rest of my
family noticed the difference, despite the CPU fan and system fan
staying unchanged.

> If you have moderately loud system/CPU fans they'll drown
> out the noise generated by the drives.

Hmm... how 'bout:
"If you have a fanless silent system, even these quiet green drives
drown out the noise generated by the rest of the system".

> There's just nothing to really like about these drives, and many things to
> dislike.  It's that simple.

I love them: they're exactly what I need.


Stefan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/jwvoc7myfca.fsf-monnier+gmane.linux.debian.u...@gnu.org



Re: o/t ipod

2011-01-11 Thread Sean Keane
Sharepod

Free application to transfer music and video files from an iPod/iPod Touch/
iPhone to a PC.

Try running it in wine, or use a VM.




On Fri, Jan 7, 2011 at 09:17, Thomas H. George  wrote:

> On Tue, Dec 28, 2010 at 04:27:16PM +0100, Klistvud wrote:
> > Dne, 27. 12. 2010 18:43:06 je Camaleón napisal(a):
> >
> > >
> > >I still fail to see what people find exciting in Apple devices.
> > >Yes, they
> > >look nice but they're also even more closed than any MS product >:-)
> > >
> >
> > It must be their price. In our profit-centered culture, anything
> > overly expensive seems to instantly achieve a magical aura and a
> > high perceived value, be it a rolex, a ferrari, or a
> > high-maintenance lady.
> >
> Yes, but I must give the devil his due.  While I have no intention of
> using the gadget as an mp3 player - my SanDisk Sansa plays both mp3 and
> ogg - what the designers have crammed into this little package is truly
> impressive.  Wi-Fi, send and receive email, camera with zoom and the
> pictures can be emailed to my computer (my cell phone charges for that),
> motion and position sensors (the Star Walk app shifts the view of the
> night sky as you point the device at different stars), etc.
>
> If it had been my money I would have bought an Android device but I
> can't look this gift horse in the mouth.
>
> Tom George
> >
> >
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact
> listmas...@lists.debian.org
> Archive: http://lists.debian.org/20110107151756.ga10...@tomgeorge.info
>
>


Audio from browsers correctly routed (2 audio cards, same box)

2011-01-11 Thread Bruno Buys
Hi all,
I just configured a multiseat computer (
https://secure.wikimedia.org/wikipedia/en/wiki/Multiseat_configuration)
using my squeeze machine and help from the pt-debian-users list. Very nice,
now I can share it with my wife. Everything is working ok, except for one
detail: I have no idea how to route audio from browsers (ffox and chrome
playing youtube, for example) to my wife's USB audio dongle. To give each of
us separate audio, I got a dongle with mic and line-out jacks. With mplayer
and vlc playing multimedia files I can get audio routed correctly to the
dongle, so this is not a driver/hardware/kernel question.
The dongle is ok, I just need to figure out how to tell the browsers to use
the dongle, not the onboard audio.
Vlc and mplayer have extensive configuration options, so there it was easy
to choose. But browsers offer far fewer configuration choices. I searched
about:config in ffox with no luck, and Chrome doesn't even have one.
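(For reference, the usual ALSA-level workaround for apps with no device picker, untested here, is to make the dongle the system default in ~/.asoundrc; this assumes the C-Media dongle enumerates as card 1, which `cat /proc/asound/cards` will confirm:)

```
pcm.!default {
    type plug
    slave.pcm "hw:1,0"   # the USB dongle, assuming it is card 1
}
ctl.!default {
    type hw
    card 1
}
```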

lspci | grep -i audio

00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition
Audio Controller (rev 01)

lsusb | grep -i audio

Bus 004 Device 002: ID 0d8c:000c C-Media Electronics, Inc. Audio Adapter

Thanks for any inputs,


Bruno


Re (4): OpenVPN server mode usage.

2011-01-11 Thread PETER EASTHOPE
From:   Bob Proulx 
Date:   Mon, 10 Jan 2011 21:55:10 -0700
> x: echo foo | nc -u y 1149
> 
> You should see that show up in your tcpdump traces.

You've tried this on your system?  Or can at least detect the datagram
leaving the originating system?

From:   Mike Bird 
Date:   Tue, 11 Jan 2011 14:39:13 -0800
> Anything interesting in the /etc/openvpn/*, or in the output
> of "iptables-save" or of "route -n" or of "ifconfig"?

Incidentally, telnet and daytime haven't worked in dalton since last Spring.   

r...@dalton:/etc/openvpn# cat /etc/openvpn/myvpn.conf
# dalton:/etc/openvpn/myvpn.conf 
# 
# Default protocol is udp. 
# Default port is 1194. 
# Joule has a dynamic address. 
mode server
dev tun 
ifconfig 10.4.0.2 10.4.0.1 
verb 5 
secret /root/key 1 
# Machines in the local home zone reached _via_ the tunnel. 
# Curie 
route 172.23.4.2 
# Heaviside 
route 172.23.5.2 
# Shaw mail servers _via_ the tunnel. 
# route shawmail.gv.shawcable.net 
route 64.59.128.135 
route 24.71.223.43 
# Shaw ftp server _via_ the tunnel. 
# route ftp.shaw.ca 
route 64.59.128.134

I haven't touched /etc/openvpn/update-resolv-conf.  It remains 
exactly as installed.

r...@dalton:/etc/openvpn# ip route show
142.103.107.128/25 dev eth0  proto kernel  scope link  src 142.103.107.137 
172.24.1.0/24 dev LocLCS106703196  proto kernel  scope link  src 172.24.1.1 
default via 142.103.107.254 dev eth0 

r...@dalton:/etc/openvpn# ip addr show
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP 
qlen 1000
link/ether 00:02:55:d9:a7:ef brd ff:ff:ff:ff:ff:ff
inet 142.103.107.137/25 brd 142.103.107.255 scope global eth0
inet6 fe80::202:55ff:fed9:a7ef/64 scope link 
   valid_lft forever preferred_lft forever
3: LocLCS106703196:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000
link/ether 00:16:b6:ef:1d:6c brd ff:ff:ff:ff:ff:ff
inet 172.24.1.1/24 brd 172.24.1.255 scope global LocLCS106703196
inet6 fe80::216:b6ff:feef:1d6c/64 scope link 
   valid_lft forever preferred_lft forever

r...@dalton:/etc/openvpn# iptables-save
# Generated by iptables-save v1.4.8 on Tue Jan 11 17:46:23 2011
*raw
:PREROUTING ACCEPT [64233:10270729]
:OUTPUT ACCEPT [54270:6788049]
COMMIT
# Completed on Tue Jan 11 17:46:23 2011
# Generated by iptables-save v1.4.8 on Tue Jan 11 17:46:23 2011
*nat
:PREROUTING ACCEPT [9450:1202948]
:POSTROUTING ACCEPT [18164:1299072]
:OUTPUT ACCEPT [18164:1299072]
:eth0_masq - [0:0]
-A POSTROUTING -o eth0 -j eth0_masq 
-A eth0_masq -s 172.24.0.0/16 -j MASQUERADE 
COMMIT
# Completed on Tue Jan 11 17:46:23 2011
# Generated by iptables-save v1.4.8 on Tue Jan 11 17:46:23 2011
*mangle
:PREROUTING ACCEPT [64234:10270769]
:INPUT ACCEPT [62019:9683981]
:FORWARD ACCEPT [1905:577508]
:OUTPUT ACCEPT [54271:6788557]
:POSTROUTING ACCEPT [56176:7366065]
:tcfor - [0:0]
:tcout - [0:0]
:tcpost - [0:0]
:tcpre - [0:0]
-A PREROUTING -j tcpre 
-A FORWARD -j MARK --set-xmark 0x0/0xff 
-A FORWARD -j tcfor 
-A OUTPUT -j tcout 
-A POSTROUTING -j tcpost 
COMMIT
# Completed on Tue Jan 11 17:46:23 2011
# Generated by iptables-save v1.4.8 on Tue Jan 11 17:46:23 2011
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:Drop - [0:0]
:Loc+_fwd - [0:0]
:Loc+_in - [0:0]
:Reject - [0:0]
:dropBcast - [0:0]
:dropInvalid - [0:0]
:dropNotSyn - [0:0]
:dynamic - [0:0]
:fw2loc - [0:0]
:fw2net - [0:0]
:fw2vpn - [0:0]
:loc2fw - [0:0]
:loc2net - [0:0]
:loc2vpn - [0:0]
:loc_frwd - [0:0]
:logdrop - [0:0]
:logflags - [0:0]
:logreject - [0:0]
:net2fw - [0:0]
:net2loc - [0:0]
:net2vpn - [0:0]
:net_frwd - [0:0]
:ppp+_fwd - [0:0]
:ppp+_in - [0:0]
:reject - [0:0]
:shorewall - [0:0]
:smurflog - [0:0]
:smurfs - [0:0]
:tcpflags - [0:0]
:vpn2fw - [0:0]
:vpn2loc - [0:0]
:vpn2net - [0:0]
:vpn_frwd - [0:0]
-A INPUT -m conntrack --ctstate INVALID,NEW -j dynamic 
-A INPUT -i eth0 -j net2fw 
-A INPUT -i Loc+ -j Loc+_in 
-A INPUT -i ppp+ -j ppp+_in 
-A INPUT -i tun0 -j vpn2fw 
-A INPUT -i lo -j ACCEPT 
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-A INPUT -j Reject 
-A INPUT -j LOG --log-prefix "Shorewall:INPUT:REJECT:" --log-level 6 
-A INPUT -g reject 
-A FORWARD -m conntrack --ctstate INVALID,NEW -j dynamic 
-A FORWARD -i eth0 -j net_frwd 
-A FORWARD -i Loc+ -j Loc+_fwd 
-A FORWARD -i ppp+ -j ppp+_fwd 
-A FORWARD -i tun0 -j vpn_frwd 
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-A FORWARD -j Reject 
-A FORWARD -j LOG --log-prefix "Shorewall:FORWARD:REJECT:" --log-level 6 
-A FORWARD -g reject 
-A OUTPUT -o eth0 -j fw2net 
-A OUTPUT -o Loc+ -j fw2loc 
-A OUTPUT -o ppp+ -j fw2loc 
-A OUTPUT -o tun0 -j fw2vpn 
-A OUTPUT -o lo -j ACCEPT 
-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-A OUTPUT -j ACCEPT 
-A Drop 
-A Drop -p tcp -m tcp --dport 113 -m comment --comment "Auth" -j reject 
-A Drop -j dropBcast 
-

Re: piping find to zip -- with spaces in path

2011-01-11 Thread Doug

On 01/11/2011 08:46 PM, Robert Blair Mason Jr. wrote:

On Tue, 11 Jan 2011 14:53:33 -0700
Bob Proulx  wrote:


Robert Blair Mason Jr. wrote:

Rob Owens  wrote:

I tried this and it successfully creates myfile.zip:

find ./ -iname "*.jpg" -print | zip myfile -@

But it fails if there are spaces in the path or filename.  How can I
make it work with spaces?

I think the best way would be to quote them in the pipe:

find ./ -iname "*.jpg" -printf "'%p'\n" | zip myfile -@

But that fails when the filename contains a quote character.

   John's Important File

Using zero terminated strings (zstrings) is best for handling
arbitrary data in filenames.

Real Unix(TM) users never put [^[:ascii:]] characters in file names.

Bob

True.  Underscores are _wonderful_ things.  But remember, Linux is Not Unix!

Unfortunately for the OP, I don't *think* zip accepts zstrings.  Perhaps a 
script to just remove all of the non-ascii characters in the filenames of all 
files in the current directory?

Random tangent, but Pascal strings are often more efficient from a 
programming standpoint than C-style strings...

I didn't prune anything because I can't figure out where to do it 
without losing any semblance of coherence, but anyway: the comment about 
real Unixers not using non-ASCII characters: what about URLs?  They come 
from the Unix world, and are full of underscores and question marks and 
equal signs.  Then there are emails, all of which require the @ sign.  Not 
complaining, just asking.

--doug

--
Blessed are the peacemakers...for they shall be shot at from both sides. --A. 
M. Greeley


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/4d2d19d8.3020...@optonline.net



Re: piping find to zip -- with spaces in path

2011-01-11 Thread Robert Blair Mason Jr.
On Tue, 11 Jan 2011 14:53:33 -0700
Bob Proulx  wrote:

> Robert Blair Mason Jr. wrote:
> > Rob Owens  wrote:
> > > I tried this and it successfully creates myfile.zip:
> > > 
> > > find ./ -iname "*.jpg" -print | zip myfile -@
> > > 
> > > But it fails if there are spaces in the path or filename.  How can I
> > > make it work with spaces?
> > 
> > I think the best way would be to quote them in the pipe:
> > 
> > find ./ -iname "*.jpg" -printf "'%p'\n" | zip myfile -@
> 
> But that fails when the filename contains a quote character.
> 
>   John's Important File
> 
> Using zero terminated strings (zstrings) is best for handling
> arbitrary data in filenames.
> 
> Real Unix(TM) users never put [^[:ascii:]] characters in file names.
> 
> Bob

True.  Underscores are _wonderful_ things.  But remember, Linux is Not Unix!

Unfortunately for the OP, I don't *think* zip accepts zstrings.  Perhaps a 
script to just remove all of the non-ascii characters in the filenames of all 
files in the current directory?

Random tangent, but Pascal strings are often more efficient from a 
programming standpoint than C-style strings...

-- 
rbmj


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110111204644.3a58a...@blair-laptop.mason



Re: piping find to zip -- with spaces in path

2011-01-11 Thread Andrew McGlashan

Hi,

Andrew McGlashan wrote:

Rob Owens wrote:

I tried this and it successfully creates myfile.zip:

find ./ -iname "*.jpg" -print | zip myfile -@

But it fails if there are spaces in the path or filename.  How can I
make it work with spaces?


Does this work:

find ./ -iname "*.jpg" -print0 | xargs -0 zip myfile -@


It does work, I tested it and it was perfect for the job -- not sure 
where the OP has gone though.


Cheers

--
Kind Regards
AndrewM

Andrew McGlashan
Broadband Solutions now including VoIP


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/4d2d0575.2060...@affinityvision.com.au



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread John Hasler
Bob writes:
> They do consume memory and cpu scheduling queue resources.

I wrote:
> Very little, due to shared memory and copy-on-write.

Stan writes:
> In this case I don't think all that much memory is shared.  Each
> process' data portion is different as each processes a different
> picture file.

I was referring to pre-forking.  Pre-forked processes share text and
also share data while waiting for work.  Thus they consume little in the
way of resources until they have something to do.
-- 
John Hasler


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/87sjwz53q4@thumper.dhh.gt.org



Re: [SOLVED] Is squeeze compatible with WD20EARS and other 2TB drives?

2011-01-11 Thread Greg Trounson

On 11/01/11 01:26, Stan Hoeppner wrote:

Stefan Monnier put forth on 1/9/2011 10:42 PM:


I have no idea what makes you so angry against "green" drives.


I am against using any drive, at this time, in Linux, with a native sector size
other than 512 bytes.  The Linux partitioning tools still do not easily/properly
handle these hybrid drives with 4096 bytes per sector that translate 512-byte
sectors to the host.  Simply partitioning them correctly requires a Ph.D.  The
average, and even some advanced, users cannot configure them for correct
performance.

...

In my experience with these drives, aligning to 4k sectors has always 
fixed any performance issues, and was achieved by creating partitions 
within fdisk invoked as follows:


fdisk -cu -H224 -S56 /dev/sdx
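The -H224 -S56 trick works because 224 * 56 = 12544 sectors per cylinder is divisible by 8, so every cylinder-aligned partition start also lands on a 4KiB boundary.  The underlying check can be sketched as (hypothetical helper, not part of fdisk):

```shell
# A partition is 4KiB-aligned when its starting LBA (counted in 512-byte
# logical sectors) is a multiple of 8 = 4096/512.
aligned() { [ $(( $1 % 8 )) -eq 0 ] && echo "$1: aligned" || echo "$1: misaligned"; }

aligned 63     # classic DOS-era default start sector -> misaligned
aligned 2048   # modern 1MiB default start sector    -> aligned
```

Against a real disk you'd feed it the start sectors from `fdisk -lu`.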

Greg


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/4d2cf98d.9070...@maths.otago.ac.nz



Finding kernel modication that caused the boot failure using git-bisect?

2011-01-11 Thread Tech Geek
Back when the Debian repo had the
linux-image-2.6.30-bpo.2-686_2.6.30-8~bpo50+2_i386.deb kernel, I had it
installed on my PC. Now I have installed the current kernel package from
lenny-backports -- linux-image-2.6.32-bpo.5-686_2.6.32-29~bpo50+1_i386.deb.

The problem is that the old 2.6.30 kernel used to boot fine on my PC, but
the new 2.6.32 kernel hangs during the PCI enumeration stage of the kernel
boot phase:


[0.472235] ACPI: PCI Interrupt Link [LNKH] (IRQs 10 11 15) *0, disabled.
[0.478536] vgaarb: device added:
PCI::00:02.0,decodes=io+mem,owns=io+mem,locks=none
[0.480017] vgaarb: loaded
[0.484078] PCI: Using ACPI for IRQ routing
[0.488278] Switching to clocksource tsc
[0.494208] pnp: PnP ACPI init
[0.497298] ACPI: bus type pnp registered
[0.502819] pnp: PnP ACPI: found 7 devices
[0.506912] ACPI: ACPI bus type pnp unregistered
[0.511524] PnPBIOS: Disabled by ACPI PNP
[0.550571] pci :01:00.0: PCI bridge, secondary bus :02
[0.556480] pci :01:00.0:   IO window: disabled
[0.561354] pci :01:00.0:   MEM window: disabled
[0.566312] pci :01:00.0:   PREFETCH window: disabled
[0.571706] pci :00:1c.0: PCI bridge, secondary bus :01
[0.577615] pci :00:1c.0:   IO window: 0x1000-0x1fff
[0.582918] pci :00:1c.0:   MEM window: 0x8000-0x801f
[0.588999] pci :00:1c.0:   PREFETCH window: 0x8020-0x803f
[0.595518] pci :00:1c.1: PCI bridge, secondary bus :03
[0.601425] pci :00:1c.1:   IO window: 0xf000-0x
[0.606729] pci :00:1c.1:   MEM window: 0xdff0-0xdfff
[0.612812] pci :00:1c.1:   PREFETCH window: 0x8040-0x805f
[0.619341] pci :00:1c.0: enabling device (0004 -> 0007)
 
-the kernel hangs at this point--

Before reporting this bug to the kernel community I would like to find out
which kernel commit between 2.6.30 and 2.6.32 made the latter unbootable.
Any ideas how I can use git tools like git-bisect to find the culprit?  A
step-by-step tutorial would be really helpful.  I have been able to
reproduce this issue with the vanilla kernel from kernel.org.
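The git-bisect workflow can be rehearsed on a toy repo first.  Below, eight commits stand in for the v2.6.30..v2.6.32 range and a grep stands in for the build-and-boot test; on the real kernel tree you would run `git bisect start`, `git bisect bad v2.6.32`, `git bisect good v2.6.30`, then build and boot each commit git checks out, answering `git bisect good`/`git bisect bad` by hand:

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email you@example.com
git config user.name you
for i in 1 2 3 4 5 6 7 8; do
  echo "change $i" >> file
  if [ "$i" -ge 5 ]; then echo "bug" >> file; fi   # regression enters at commit 5
  git add file; git commit -qm "commit $i"
done
git bisect start HEAD HEAD~7   # first arg = known bad, second = known good
# bisect run automates the verdict: exit 0 means good, nonzero means bad.
git bisect run sh -c '! grep -q bug file'
```

git narrows ~log2(N) steps to the first bad commit; `git bisect log` records progress so a long kernel bisect can be resumed with `git bisect replay`.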

Thanks.


Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stan Hoeppner
Robert Holtzman put forth on 1/11/2011 5:45 PM:

> I said this was the end
> of the OT wrangling and I meant it.

If that's the case, then why did you respond again?  And why are you responding
yet again, to this?

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2cf0ff.8070...@hardwarefreak.com



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Stan Hoeppner
John Hasler put forth on 1/11/2011 4:12 PM:
> Bob writes:
>> They do consume memory and cpu scheduling queue resources.
> 
> Very little, due to shared memory and copy-on-write.

In this case I don't think all that much memory is shared.  Each process' data
portion is different as each processes a different picture file.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2cee26.4010...@hardwarefreak.com



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Stan Hoeppner
Bob Proulx put forth on 1/11/2011 3:08 PM:
> Stan Hoeppner wrote:
>> Camaleón put forth:
>>> real1m44.038s
>>> user2m5.420s
>>> sys 1m17.561s
>>>
>>> It uses 2 "convert" proccesses so the files are being run on pairs.
>>>
>>> And you can even get the job done faster if using -P8:
>>>
>>> real1m25.255s
>>> user2m1.792s
>>> sys 0m43.563s
>>
>> That's an unexpected result.  I would think running #cores*2^x with an
>> increasing x value would start yielding lower total run times within a few
>> multiples of #cores.
> 
> If you have enough memory (which is critical) then increasing the
> number of processes above the number of compute units *a little bit*
> is okay and increases overall throughput.
> 
> You are processing image data.  That is a large amount of disk data
> and won't ever be completely cached.  At some point the process will

Not really.  Each file, in my case, started as a 1.8MB jpeg.  The disk
throughput on my server is ~80MB/s.  Read latency is about 15-20ms on average.
In my recent example workload there were 35 such images.

> block on I/O waiting for the disk.  Perhaps not often but enough.  At
> that moment the cpu will be idle until the disk block becomes
> available.  When you are runing four processes on your two cpu machine
> that means there will always be another process in the run queue ready
> to go while waiting for the disk.  That allows processing to continue
> when otherwise it would be waiting for the disk.  I believe what you
> are seeing above is the result of being able to compute during that
> small block on I/O wait for the disk interval.

That's gotta be a very small iowait interval.  So small, in fact, it doesn't
show up in top at all.  I've watched top a few times during these runs and I
never see iowait.

I assumed the gain was simply because, watching top, each convert process
doesn't actually fully peg the cpu during the entire process run life.  Running
one or two more processes in parallel with the first two simply gives the kernel
scheduler the opportunity to run another process during those idle ticks.  There
is also the time gap between a process exiting and xargs starting up the next
one.  I have no idea how much time that takes.  But all the little bits add up
in the total execution time of all 35 processes.
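(The pattern under discussion, sketched with gzip standing in for convert and throwaway files standing in for the jpegs; -P2 matches the two cores:)

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# Fabricate four small input files.
for i in 1 2 3 4; do head -c 1024 /dev/urandom > "img$i.dat"; done
# -n1 hands one file to each process; -P2 keeps two running at once,
# so the kernel always has a runnable process when one briefly stalls.
find . -name '*.dat' -print0 | xargs -0 -n1 -P2 gzip
ls *.gz | wc -l
```

Raising -P above the core count trades a little scheduling overhead for filling those idle ticks, which matches the timings quoted above.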

> On the negative side having more processes in the run queue does
> consume a little more overhead for process scheduling.  And launching
> a lot of processes consumes resources.  So it definitely doesn't make
> sense to launch one process per image.  But being above the number of
> cpus does help a small amount.

Totally agree.  That amount of decreased run time is small enough on my system
that I don't bother with 3 processes.  I only parallelize 2, as the extra ~80MB
of memory consumed by the 3rd is better consumed by smtpd, imapd, httpd than
saving me 5-10 seconds of execution time for the batch photo resize.  This is a
server after all. ;)

> Another negative is that other tasks then suffer.  With excess compute
> capacity you always have some cpu time for the desktop side of life.
> Moving windows, rendering web pages, other user tasks, delivering
> email.  Sometimes squeezing that last percentage point out of
> something can really kill your interactive experience and end up
> frustrating you more.  So as a hint I wouldn't push too hard on it.

In my case those other tasks aren't interactive, but they exist nonetheless, as
mentioned above.

> My benchmarks show that hyperthreading (fake cpus) actually slow down
> single thread processes such as image conversions.  HT seems like a
> marketing breakthrough to me.  Although having the effective extra
> registers available may benefit a highly threaded application.  I just
> don't have any performance critical highly threaded applications.  I
> am sure they exist somewhere along with unicorns and other good
> sources of sparkles.

This has been my experience as well.  SMT traditionally doesn't work well
when you oversubscribe with more compute-bound processes than a machine has
physical cores.  This was discovered relatively quickly after Intel's HT
CPUs hit the market.  Folks began running one s...@home process per virtual
CPU on dual socket Xeon boxen, 4 processes total, and their elapsed time
per process increased substantially vs running one process per socket.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2cecef.1020...@hardwarefreak.com



Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Robert Holtzman
On Tue, Jan 11, 2011 at 10:47:01AM -0600, Stan Hoeppner wrote:
> Robert Holtzman put forth on 1/11/2011 1:44 AM:
> > On Mon, Jan 10, 2011 at 04:44:13AM -0600, Stan Hoeppner wrote:
> 
> >> Interesting advice Bob.  Practice it.
> > 
> > I did. Read your post again, especially the part that says "This is
> > because of your liberal political leanings...".
> 
> Yes.  I called black black.  And you blew your cork.  Reread what I was
> responding to, here:
> http://www.linux-archive.org/debian-user/474373-hard-drive-energy-not-worth-conserving-drives.html

I did. There is no hint of any political content.

> 
> > End of OT discussion. 
> 
> Not quite.

   .snip.

> 
> 
> Did you blow your top because I called Green Green?  Or did you blow your top
> because you thought my classification of his prose was incorrect?  Did you 
> feel
> I was lying?  Why exactly did you blow your top Bob?

You're trying to bait me and I don't bait well. I said this was the end
of the OT wrangling and I meant it. BTW, baiting is also a mark of a
troll. You're not one, are you?

-- 
Bob Holtzman
Key ID: 8D549279
"If you think you're getting free lunch,
 check the price of the beer"


signature.asc
Description: Digital signature


Re: [OT]: Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Dan Serban
On Tue, 11 Jan 2011 09:18:48 -0600
Stan Hoeppner  wrote:

> Dan Serban put forth on 1/10/2011 7:52 PM:
> > On Mon, 10 Jan 2011 12:04:19 -0600
> > Stan Hoeppner  wrote:
> > 
> > [snip]
> >> http://www.hardwarefreak.com/server-pics/
> > 
> > Which gallery system are you using?  I quite like it.
> 
> That's the result of Curator:
> http://furius.ca/curator/
> 
> I've been using it for 7+ years.  Debian dropped the package sometime
> back, before Etch IIRC.  Last time I installed it I grabbed it from
> SourceForge.  It's a python app so you need python and you'll need
> the imagemagick tools.

It's a nice looking interface, simple is what I like.

> 
> Unfortunately its functions are written in a manner that psyco can't
> optimize. It's plenty fast though if you're doing a directory
> structure with only a couple hundred pic files or less.  My server is
> pretty old, 550MHz, and I've got a couple of dirs with thousands of
> image files.  It takes over 12 hours to process them.  It processes
> all subdirs under a dir.  I've found no option to disable this.
> Thus, be mindful of the way you setup your directory structures.
> Even if nothing in a subdir has changed since the last run, curator
> will still process all subdirs.  It's pretty fast at doing so, but if
> you have 100 subdirs with 100 files in each that's 10,000 image files
> to be looked at, and bumps up the run time.
> 

Indeed, I find that "simple" services always seem to end up eating a
lot more resources than originally thought.

> With any modern 2-3GHz x86 AMD/Intel CPU you prolly don't need to
> worry about the speed of curator.  I've never run it on a modern
> chip, just my lowly, but uber cool, vintage Abit BP6 dual Celeron
> 3...@550 server, which is the server in those photos.  I have a
> tendency to hang onto systems as long as they're still useful.  At
> one time it was my workstation/gaming rig.  Those dual Celerons are
> now idle 99% of the time, and the machine is usually plenty fast for
> any interactive command line or batch work I need to do.
> 

I commend  your spirit.  I have collections of such hardware, but in my
incessant need to have more power, and less power usage, half of this
stuff gets retired.  I wish I could find a good cause to give it to,
but the linux/debian zealot in me refuses to just give it away to
the dark side :/, if it'll run windows, I want you to give me money
for it.  Heh.

I have a dual proc p3 1ghz motherboard. Pretty much
worthless now, though it did a hell of a job running internal email and
web/db services.

> Of note, if you've been reading this thread, you'll notice I use this
> script and ImageMagick's convert utility to resize my camera photos
> before running curator on them, since I can now resize them almost
> twice as fast, running 2 parallel convert processes.
> 

I certainly have followed the thread and have learned that xargs allows
you to parallel process commands.  Something my 20 years of linux
adventures haven't taught me until yesterday.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/2011052102.7be6c...@ws82.int.tlc



Re: Directory synchronization packages

2011-01-11 Thread John A. Sullivan III
On Tue, 2011-01-11 at 17:37 -0500, John A. Sullivan III wrote:
> Hello, all.  We are looking for a Lenny/Squeeze package to synchronize
> directories between our physical desktops and our X2Go (www.x2go.org) /
> Trinity (KDE3 - trinity.pearsoncomputing.net) based virtual desktops via
> sshfs and have had difficulty finding an acceptable package that is
> friendly for all levels of users to use within a GUI.  Any suggestions?
> 
> We noticed that there is a package for unison-gtk but have some
> reservations.  As powerful as unison is, it is no longer maintained.
> The GUI seemed intuitive to a developer but not to an average office
> user.  There was no way to browse hidden directories other than entering
> them by hand.  So our search began especially for something that would
> work well on KDE3/Trinity.
> 
> We looked at Krusader and were very impressed but it appears to be a
> simple copy rather than an rsync style differential block
> synchronization.
> 
> We look at Komparator but it is KDE4 based, has a UI that seems
> overwhelming to an average office user, and seems more oriented toward
> search and compare than sync.
> 
> We had great hopes for FreeFileSync but it seemed feature poor and the
> user interface was unusable.  We may have mangled the setup since there
> is no Debian package and we had to pull in boost and wxwidgets to get it
> to compile and work at all.
> 
> We are considering DirSyncPro.  Again, there is no Debian package.  It
> is Java based which concerns us regarding memory and CPU consumption
> when we are trying to squeeze as many virtual desktops onto a host as
> possible.  It does seem to be fairly full featured and actively
> developed.  Our packet traces show it does appear to do block level
> synchronization.
> 
> Right now our front runner is DirSyncPro followed by crafting Konqueror
> service menus based upon unison.  We like the ease of use of the latter
> idea but think it will be too "dumb" and brings us back to unison being
> unmaintained.
> 
> Have we missed any options? I should mention that the synchronization is
> usually bi-directional so a simple rsync front end is not a good option.
> Thanks - John
> 
> 
> 
I should mention that DirSyncPro does not have a way of displaying
hidden directories either - John


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/1294788710.15889.26.ca...@denise.theartistscloset.com



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Freeman
On Tue, Jan 11, 2011 at 10:48:05PM +, Zeissmann wrote:
> > Your Gnome is probably not the same version number as the online manual
> > refers to. Gnome changes all the time.
> 
> 
> This might be so. But then again my local manual (help stuff) says 
> exactly the same thing. And still I don't know how to set those icons.
> 
> 

Perhaps there has been a misunderstanding. 

I have no desktop here, but I think where you need to be is

Gnome Menu > System Tools (not "System") > Configuration Editor > desktop >
gnome > interface > buttons_have_icons | menus_have_icons

If configuration editor isn't there, install gconf2.


-- 
Regards,
Freeman

"Microsoft is not the answer. Microsoft is the question. NO (or Linux) is the
answer." --Somebody


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110111231518.ga7...@europa.office



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Zeissmann
> Your Gnome is probably not the same version number as the online manual
> refers to. Gnome changes all the time.


This might be so. But then again my local manual (help stuff) says 
exactly the same thing. And still I don't know how to set those icons.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igimn5$a8...@speranza.aioe.org



Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stan Hoeppner
Jochen Schulz put forth on 1/11/2011 12:58 PM:
> Stan Hoeppner:
>> Jochen Schulz put forth on 1/11/2011 3:19 AM:
>>
>>> And those pesky 4k blocks will never take hold. 512 bytes were a good
>>> idea in the 1950s, so what's wrong with it now!?
>>
>> 4KB blocks are great.  Too bad these drives report 512B blocks to the kernel,
>> which is what causes the problem.  "Advanced format" = hybrid, not native.
> 
> My WD10EARS (not the 2TB variant that this thread was about) looks correct:
> 
> # hdparm -I /dev/sdc | grep "Sector size"
> Logical  Sector size:   512 bytes
> Physical Sector size:  4096 bytes

This is reported by the drive to hdparm.  Only the 512 is used by the kernel.
It has no knowledge of the 4KB physical block size and can't use it because the
drive reports 512 bytes to the kernel as the physical block size.  That's
precisely what causes all the problems.
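One practical consequence, as a rough sketch: with the drive presenting 512-byte logical sectors, a partition is 4 KiB-aligned only when its start sector is a multiple of 8 (4096 / 512). The helper below is hypothetical, purely for illustration, applying that arithmetic to start sectors as printed by `fdisk -lu`.

```shell
# is_4k_aligned: given a partition start sector in 512-byte logical
# sectors (as shown by `fdisk -lu`), report whether it falls on a
# 4096-byte physical sector boundary (4096 / 512 = 8 logical sectors).
is_4k_aligned() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "sector $1: aligned"
    else
        echo "sector $1: misaligned"
    fi
}

is_4k_aligned 2048   # typical modern fdisk/parted default (1 MiB boundary)
is_4k_aligned 63     # classic DOS-era default on 512 B/sector disks
```

A misaligned partition makes every filesystem block straddle two physical sectors, which is exactly the read-modify-write penalty discussed in this thread.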

> Don't we already waste that space with our filesystems? Ext2 cannot use
> blocks smaller than 1024 Bytes, as far as I can see. And by default even
> 4kB are used for small filesystems (<5GB on my /).

This depends on the FS and how it allocates space for files.  XFS for example
uses variable length extents.  Each extent is comprised of one or more blocks.
Each block is 4KB (mkfs.xfs default).  Each block is 8 physical sectors on a
traditional 512 byte/sector disk.  XFS can pack multiple small files into a
single 4KB block extent.  It is able to do this thanks to delayed allocation.
This works fairly well on busy mail servers using maildir storage format as many
small files are written in close succession, allowing delayed allocation to pack
many into a single extent.  XFS on a workstation where there may be many minutes
between each small file write will probably not be packing multiple small files
into a single extent.  But XFS isn't really targeted at general-purpose single-user
workstations.  It's designed for highly parallel workloads with multiple
concurrent read/write streams--i.e. servers and supercomputers.  Small file
storage efficiency is not a core feature of XFS but a fairly recent optimization
AFAIK.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2ce1a7.2040...@hardwarefreak.com



Re: USB key requirement.

2011-01-11 Thread Dan Serban
On Tue, 11 Jan 2011 22:52:06 +0100
deloptes  wrote:

> 
> > 
> > My case is different in the sense that I'm not decrypting my block
> > volumes, just halting a boot sequence.
> > 
> 
> There is something wrong with the setup of your case.
> 
> If you are doing a diskless boot from a share ... how could you use a
> device (usb or something else) to authenticate before the system has
> booted? The idea with the GPG/PGP key is not bad, but it won't help
> you for the setup with the USB drive.
> 

I figured that after the root partition is mounted (nfs), I would have
an init.d script that would work its magic: if the key is there, allow
the continuation of the boot sequence (load gdm and other non-essential
services).  All I would require is to match against an encrypted key
without user intervention.

> Q: Do you have a keyboard and is it desirable to use it on boot time?
> Or you want just to plugin and if the right usb is inside the boot
> will go on. you can do this after the system has already booted and
> you can access the usb from the diskless station.

Second option, no keyboard interaction is required in my mind.  If you
miss having the usb stick inserted, then to move forward, hit the reset
button.

> Q: have you heard of security
> dongles
> "http://www.naturela-bg.com/index.php?categ=&page=itm&lang=en&id=45&pid=&p=";
> 

I have heard of them, but I don't personally understand the actual
difference between a specialized key and a usb block device with an
encryption file on it.

> regards
> 
> 


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/2011043824.008ce...@ws82.int.tlc



Re: Re (3): OpenVPN server mode usage.

2011-01-11 Thread Mike Bird
On Tue January 11 2011 14:09:09 PETER EASTHOPE wrote:
> OK.  Seems that somehow I've managed to disable port
> 1194 or tcpdump.

Anything interesting in the /etc/openvpn/*, or in the output
of "iptables-save" or of "route -n" or of "ifconfig"?

(Post them here if there's nothing private.)

--Mike Bird


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110439.13440.mgb-deb...@yosemite.net



Re: ktorrent in sid

2011-01-11 Thread Mike Bird
On Tue January 11 2011 14:06:59 deloptes wrote:
> Something brought me to this when reading " lenny -> squeeze with trinity"
> by Mike
>
> One thing I'm missing not in trinity but in debian sid is ktorrent. is it
> really not working? Because it is not working for me since I've upgraded to
> kde4.
>
> It seems I can not use it with a proxy server. Is there a chance to install
> the old one (ktorrent2.2) or is it better to setup a virtual machine with
> lenny, where it is working fine?

I don't have a test Squeeze KDE 4 box right now because we're focused
on Trinity+Squeeze, but I'm 99% certain that Lenny's ktorrent2.2
was one of the few bright spots in our most recent KDE 4 evaluation
a few weeks ago.

--Mike Bird


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110433.32075.mgb-deb...@yosemite.net



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread John Hasler
Bob writes:
> Another negative is that other tasks then suffer.

That's what group scheduling is for.
-- 
John Hasler


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/87wrmb5bmt@thumper.dhh.gt.org



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread John Hasler
Bob writes:
> They do consume memory and cpu scheduling queue resources.

Very little, due to shared memory and copy-on-write.
-- 
John Hasler


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/871v4j6qf4@thumper.dhh.gt.org



eagle CAD

2011-01-11 Thread Wayne Topa

Hi Guys & Gals

  I haven't used eagle in many moons and have a need to use it for a
new project.  ISTR that my installation "used" to autoroute the board,
but I can't seem to get the latest testing version to autoroute.

  I 'think' that there was a way to run eagle, as root, the first time
and get autorouting to work.  I tried that without success.  I searched 
for a README  in the eagle docs also without success.


  Anyone who has eagle running with autorouting care to pass on what I 
have to do to get it working?  I am trying with version 5.1.0-1.


Thanks in advance.

Wayne


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/4d2cd7a5.60...@gmail.com



ktorrent in sid

2011-01-11 Thread deloptes

Something brought me to this when reading " lenny -> squeeze with trinity"
by Mike

One thing I'm missing, not in Trinity but in Debian sid, is ktorrent. Is it
really not working? It has not been working for me since I upgraded to
KDE4.

It seems I can not use it with a proxy server. Is there a chance to install
the old one (ktorrent2.2) or is it better to setup a virtual machine with
lenny, where it is working fine?

thanks in advance

regards


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igika3$qh...@dough.gmane.org



Re (3): OpenVPN server mode usage.

2011-01-11 Thread PETER EASTHOPE
From:   Bob Proulx 
Date:   Mon, 10 Jan 2011 21:55:10 -0700
> They don't reach the external interface?  That is an excellent clue.
> But I think it might be a problem trying to have traceroute do it.
>  ... try netcat instead.

At work now and this happens on Dalton.  142.103.107.138 is 
carnot.yi.org on the LAN here.  ssh dalton to carnot 
and carnot to dalton work as expected.  Appears that 
dalton is unable to send a test datagram out its 
external interface, port 1194.

*   From: Mike Bird 
*   Date: Mon, 10 Jan 2011 21:26:28 -0800
> FWIW, we have not encountered any problems in what is now a mixed
> Lenny/Squeeze OpenVPN network.

OK.  Seems that somehow I've managed to disable port 
1194 or tcpdump.  

Thanks,... Peter E.

In ssh viewer 0 on dalton.
r...@dalton:~# shorewall stop
Stopping Shorewall
Running /sbin/iptables-restore...
done.
r...@dalton:~# tcpdump -lni any port 1194
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

In ssh viewer 1 on dalton.
r...@dalton:~# echo foo | netcat -u 142.103.107.138 1149

In ssh viewer 0 on dalton.
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
r...@dalton:~# shorewall start
Compiling...

-- 
http://members.shaw.ca/peasthope/
http://carnot.yi.org/ = http://carnot.pathology.ubc.ca/



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/ce54ae37a0f7a.4d2c6...@shaw.ca



Re: lenny -> squeeze with trinity

2011-01-11 Thread deloptes
Mike Bird wrote:

> Please contact me on list or off if I can be of any further
> assistance.

thank you for the compact information.

I'm preparing Trinity on top of Squeeze for further use of KDE3. One
problem I had was with tora+oracle, and the other with kplayer using a DVB-T
card. I had to compile the old kplayer sources against KDE3. I could also
build Debian packages, but I'm not sure about that. Anyway, I could compile
them only from within KDevelop; from the command line I got strange errors.

May be it applies to other apps too.

regards


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igik3q$qh...@dough.gmane.org



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Stan Hoeppner
Camaleón put forth on 1/11/2011 9:38 AM:

> I supposed you wouldn't care much in getting a script to run faster with 
> all the available core "occupied" if you had a modern (<4 years) cpu and 
> plenty of speedy ram because the routine you wanted to run it should not 
> take many time... unless you were going to process "thousand" of 
> images :-)

That's a bit ironic.  You're suggesting the solution is to upgrade to a new
system with a faster processor and memory.  However, all the newer processors
have 2, 4, 6, 8, or 12 cores.  So upgrading simply for single process throughput
would waste all the other cores, which was the exact situation I found myself 
in.

The ironic part is that parallelizing the script to maximize performance on my
system will also do the same for the newer chips, but to an even greater degree
on those with 4, 6, 8, or 12 cores.  Due to the fact that convert doesn't eat
100% of a core's time during its run, and the idle time in between one process
finishing and xargs starting another, one could probably run 16-18 parallel
convert processes on a 12 core Magny Cours with this script before run times
stop decreasing.

The script works.  It cut my run time by over 50%.  I'm happy.  As I said, this
system's processing power is complete overkill 99% of the time.  It works
beautifully with pretty much everything I've thrown at it, for 8 years now.  If
I _really_ wanted to maximize the speed of this photo resizing task I'd install
Win32 ImageMagick on my 2GHz Athlon XP workstation with dual channel memory
nForce2 mobo, convert them on the workstation, and copy them to the server.

However, absolute maximum performance of this task was not, and is not my goal.
 My goal was to make use of the second CPU, which was sitting idle in the
server, to speed up the task completion.  That goal was accomplished. :)

>>> Running more processes than real cores seems fine, did you try it?
>>
>> Define "fine".  
> 
> Fine = system not hogging all resources.

I had run 4 (2 core machine) and run time was a few seconds faster than 2
processes, 3 seconds IIRC.  Running 8 processes pushed the system into swap and
run time increased dramatically.  Given that 4 processes were only a few seconds
faster than two, yet consumed twice as much memory, the best overall number of
processes to run on this system is two.

> I didn't know the meaning of that "SUT" term... 

I like using it.  It's good short hand.  I wish more people used it, or were
familiar with it, so I wouldn't have to define it every time I use it. :)

> The test was run in a 
> laptop (Toshiba Tecra A7) with an Intel Core Duo T2400 (in brief, 2M 
> Cache, 1.83 GHz, 667 MHz FSB, full specs¹) and 4 GiB of ram (DDR2)

> VM is Virtualbox (4.0) with Windows XP Pro as host and Debian Squeeze as 
> guest. VM was setup to use the 2 cores and 1.5 GiB of system ram. Disk 
> controller is emulated via ich6.

I wonder how much faster convert would run on bare metal on that laptop.

>> Are you "new" to the concept of parallel processing and what CPU process
>> scheduling is?
> 
> No... I guess this is quite similar to the way most of the daemons do 
> when running in background and launch several instances (like "amavisd-
> new" does) but I didn't think there was a direct relation in the number 
> of the running daemons/processes and the cores available in the CPU, I 
> mean, I thought the kernel would automatically handle all the resources 
> available the best it can, regardless of the number of cores in use.

This is correct.  But the kernel can't take a single process and make it run
across all cores, maximizing performance.  For this, the process must be written to
create threads, forks, or children.  The kernel will then run each of these on a
different processor core.  This is why Imagemagick convert needs to be
parallelized when batching many photos.  If you don't parallelize it, the kernel
can't schedule it across all cores.  The docs say it will use threads but only
with "large" files.  Apparently 8.2 megapixel JPGs aren't "large", as the
threading has never kicked in for me.  By using xargs for parallelization, we
create x number of concurrent processes.  The kernel then schedules each one on
a different cpu core.
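The pattern described above can be sketched in miniature. Here a trivial echo stands in for ImageMagick's convert so the pipeline runs anywhere; the shape is the same: xargs -n 1 -P N keeps N concurrent processes alive, and the kernel spreads them across cores.

```shell
# Feed a NUL-separated list of files to xargs; -n 1 gives each worker
# one file, -P 2 keeps two workers running at once.  In the real script
# the worker command would be something like:
#   convert "$f" -resize 1024x768 "resized-$f"
printf '%s\0' a.jpg b.jpg c.jpg d.jpg |
    xargs -0 -n 1 -P 2 sh -c 'echo "processing $0"'
```

Note that with -P 2 the output order is not guaranteed; whichever worker finishes first prints first, which is exactly the concurrency being demonstrated.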

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2cd295.30...@hardwarefreak.com



Re: piping find to zip -- with spaces in path

2011-01-11 Thread Bob Proulx
Robert Blair Mason Jr. wrote:
> Rob Owens  wrote:
> > I tried this and it successfully creates myfile.zip:
> > 
> > find ./ -iname "*.jpg" -print | zip myfile -@
> > 
> > But it fails if there are spaces in the path or filename.  How can I
> > make it work with spaces?
> 
> I think the best way would be to quote them in the pipe:
> 
> find ./ -iname "*.jpg" -printf "'%p'\n" | zip myfile -@

But that fails when the filename contains a quote character.

  John's Important File

Using zero terminated strings (zstrings) are best for handling
arbitrary data in filenames.
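A sketch of the zero-terminated approach: GNU find's -print0 emits each name terminated by a NUL byte and xargs -0 splits on those bytes, so spaces, quotes, and even newlines pass through intact. The names would then reach zip as command-line arguments (find . -print0 | xargs -0 zip myfile) rather than on stdin via -@; the echo below just shows each name arriving whole.

```shell
# Create names that defeat newline- or quote-based splitting, then show
# the NUL-terminated pipeline delivering them intact.  With zip
# installed, ending the same pipeline in `xargs -0 zip myfile` builds
# the archive instead.
mkdir -p demo && cd demo
touch "John's Important File.jpg" "two words.jpg"
find . -iname '*.jpg' -print0 | xargs -0 -n 1 echo
```

Repeated zip invocations from xargs append to the same archive, so splitting a long file list across several zip runs is harmless here.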

Real Unix(TM) users never put [^[:ascii:]] characters in file names.

Bob




Re: USB key requirement.

2011-01-11 Thread deloptes

> 
> My case is different in the sense that I'm not decrypting my block
> volumes, just halting a boot sequence.
> 

There is something wrong with the setup of your case.

If you are doing a diskless boot from a share ... how could you use a device
(usb or something else) to authenticate before the system has booted? The
idea with the GPG/PGP key is not bad, but it won't help you for the setup
with the USB drive.

Q: Do you have a keyboard, and is it desirable to use it at boot time? Or do
you just want to plug in, so that if the right usb key is inserted the boot
will go on?  You can do this after the system has already booted, once you
can access the usb from the diskless station.
Q: have you heard of security
dongles 
"http://www.naturela-bg.com/index.php?categ=&page=itm&lang=en&id=45&pid=&p=";

regards


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igije8$nd...@dough.gmane.org



Re: piping find to zip -- with spaces in path

2011-01-11 Thread Robert Blair Mason Jr.
On Mon, 10 Jan 2011 20:50:19 -0500
Rob Owens  wrote:

> I tried this and it successfully creates myfile.zip:
> 
> find ./ -iname "*.jpg" -print | zip myfile -@
> 
> But it fails if there are spaces in the path or filename.  How can I
> make it work with spaces?
> 
> Thanks
> 
> -Rob
> 

I think the best way would be to quote them in the pipe:

find ./ -iname "*.jpg" -printf "'%p'\n" | zip myfile -@

-- 
rbmj


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/2011053503.30cab...@blair-laptop.mason



Re: Squeeze. How to set video res to 1366x768 in pure console?

2011-01-11 Thread Klistvud

On 11. 01. 2011 at 22:13:58, Mark Goldshtein wrote:


One thing to mention, I am running # update-grub2 instead of
update-grub. Is it wrong? AFAIR I have installed GRUB2 during Squeeze
installation process.


Check out where update-grub2 points to -- it's probably a shell script  
that just calls update-grub anyway.


--
Cheerio,

Klistvud  
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801  Please reply to the list, not to  
me.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1294781698.1868...@compax



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Bob Proulx
Camaleón wrote:
> No... I guess this is quite similar to the way most of the daemons do 
> when running in background and launch several instances (like "amavisd-
> new" does)

That is an optimization to help with the latency overhead associated
with forking processes.  In order to reduce the response time to react
to an external event such as arrival of email or processing a web page
many daemons such as those pre-fork copies ahead of time so that they
will be ready and waiting.  Those processes don't consume cpu time
while waiting.  They do consume memory and cpu scheduling queue
resources.  But pre-forked and ready to go, they just sit there until
there is something to do; then they can get going on it very quickly,
since they are already loaded in memory.  This reduces response latency.

Bob




Re: Squeeze. How to set video res to 1366x768 in pure console?

2011-01-11 Thread Mark Goldshtein
On Mon, Jan 10, 2011 at 3:01 AM, Phil Requirements
 wrote:
> On 2011-01-09 23:28:58 +, Phil Requirements wrote:
>> On 2011-01-09 23:11:29 +0300, Mark Goldshtein wrote:
>> > As an experiment, from googling, I have added this:
>> >
>> > GRUB_GFXMODE=1366x768x32
>> > GRUB_GFXPAYLOAD_LINUX=1366x768x32
>> >
>> > to /etc/default/grub, when # update-grub2 and rebooted.
>> > Strange effect was achieved, I have seen 1366x768 at the grub's
>> > initial boot moment, where counter counts from 5 seconds to zero and
>> > then console switched back to 640x480. These bright 5 seconds were
>> > enjoyable, though...
>> >
>> > Grub restored to the original state and I continue googling.
>> >
>> > I just wonder, if there a standard tool with 'howto' how to change a
>> > console resolution in Debian? Or, wiki instructions maybe?
>>
>> Mark,
>>
>> You are almost there! You got a good-looking conosole, and
>> then it went bad again. I've seen that before.
>
> Sorry to reply to my own self. I mis-pasted in the previous
> message. I meant:
>
> Change this:
>
>    GRUB_GFXMODE=1366x768x32
>    GRUB_GFXPAYLOAD_LINUX=1366x768x32
>
> To this:
>
>    GRUB_GFXMODE=1366x768x32
>    GRUB_GFXPAYLOAD_LINUX=keep
>

Sorry for delaying and thanks a lot for the help!

No success story yet. I tried 32-bit colour, then 16 and 8: nothing. Console
output during boot went well, but Ctrl+Alt+F1...6 shows text partially
on both sides of the screen; it is split across the margins of the screen.
Current /etc/default/grub is here:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset video=uvesafb:mode_option=1366x768"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
GRUB_GFXMODE=1366x768
GRUB_GFXPAYLOAD_LINUX=keep

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

One thing to mention, I am running # update-grub2 instead of
update-grub. Is it wrong? AFAIR I have installed GRUB2 during Squeeze
installation process.

-- 
Sincerely Yours'
Mark Goldshtein


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/aanlktimhngxfjh4sotc+esd6wqabjomzp_jhnzafc...@mail.gmail.com



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Bob Proulx
Stan Hoeppner wrote:
> Camaleón put forth:
> > real1m44.038s
> > user2m5.420s
> > sys 1m17.561s
> > 
> > It uses 2 "convert" proccesses so the files are being run on pairs.
> > 
> > And you can even get the job done faster if using -P8:
> > 
> > real1m25.255s
> > user2m1.792s
> > sys 0m43.563s
> 
> That's an unexpected result.  I would think running #cores*2^x with an
> increasing x value would start yielding lower total run times within a few
> multiples of #cores.

If you have enough memory (which is critical) then increasing the
number of processes above the number of compute units *a little bit*
is okay and increases overall throughput.

You are processing image data.  That is a large amount of disk data
and won't ever be completely cached.  At some point the process will
block on I/O waiting for the disk.  Perhaps not often but enough.  At
that moment the cpu will be idle until the disk block becomes
available.  When you are runing four processes on your two cpu machine
that means there will always be another process in the run queue ready
to go while waiting for the disk.  That allows processing to continue
when otherwise it would be waiting for the disk.  I believe what you
are seeing above is the result of being able to compute during that
small block on I/O wait for the disk interval.

On the negative side having more processes in the run queue does
consume a little more overhead for process scheduling.  And launching
a lot of processes consumes resources.  So it definitely doesn't make
sense to launch one process per image.  But being above the number of
cpus does help a small amount.

Another negative is that other tasks then suffer.  With excess compute
capacity you always have some cpu time for the desktop side of life.
Moving windows, rendering web pages, other user tasks, delivering
email.  Sometimes squeezing that last percentage point out of
something can really kill your interactive experience and end up
frustrating you more.  So as a hint I wouldn't push too hard on it.

> > No need to have a quad core with HT. Nice :-)

My benchmarks show that hyperthreading (fake cpus) actually slows down
single-thread processes such as image conversions.  HT seems like a
marketing breakthrough to me.  Although having the effective extra
registers available may benefit a highly threaded application.  I just
don't have any performance critical highly threaded applications.  I
am sure they exist somewhere along with unicorns and other good
sources of sparkles.

Bob




Re: Squeeze: Gnome icons missing

2011-01-11 Thread Krzysztof Bieniasz
> I don't quite see how it's done. I looked into the Gnome online manual
> and it says something about turning the icons on under Appearance ->
> Interface preferences. The thing is I don't have this tab under
> Appearance -- on either computer. So maybe I'm missing some package?

Sorry for this one. I'm not yet used to the internal workings of linux-
gate.


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igifam$od...@speranza.aioe.org



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Klistvud

On 11. 01. 2011 20:49:31, Zeissmann wrote:

> I turn them on by running gconf2 or the configuration editor and
> navigating to Desktop -> Gnome - Interface and checking
> "buttons_have_icons" and "menus_have_icons".

I don't quite see how it's done. I looked into the Gnome online manual
and it says something about turning the icons on under Appearance ->
Interface preferences. The thing is I don't have this tab under
Appearance -- on either computer. So maybe I'm missing some package?



Your Gnome is probably not the same version number as the online manual  
refers to. Gnome changes all the time.


--
Cheerio,

Klistvud  
http://bufferoverflow.tiddlyspot.com
Certifiable Loonix User #481801  Please reply to the list, not to  
me.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1294777803.1868...@compax



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Zeissmann
> I turn them on by running gconf2 or the configuration editor and
> navigating to Desktop -> Gnome - Interface and checking
> "buttons_have_icons" and "menus_have_icons".

I don't quite see how it's done. I looked into the Gnome online manual 
and it says something about turning the icons on under Appearance -> 
Interface preferences. The thing is I don't have this tab under 
Appearance -- on either computer. So maybe I'm missing some package?


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igic8b$gv...@speranza.aioe.org



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Zeissmann
> I turn them on by running gconf2 or the configuration editor and
> navigating to Desktop -> Gnome - Interface and checking
> "buttons_have_icons" and "menus_have_icons".

I don't quite see how this is done. I looked into the Gnome manual and it 
says something about turning the icons on. The thing is it says it's done 
under Appearance -> Interface Preferences, but I don't have any such tab 
under Appearance -- on either of the computers. Perhaps I'm lacking some 
package then?


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/igiasr$ds...@speranza.aioe.org



Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Jochen Schulz
Stan Hoeppner:
> Jochen Schulz put forth on 1/11/2011 3:19 AM:
> 
>> And those pesky 4k blocks will never take hold. 512 bytes were a good
>> idea in the 1950s, so what's wrong with it now!?
> 
> 4KB blocks are great.  Too bad these drives report 512B blocks to the kernel,
> which is what causes the problem.  "Advanced format" = hybrid, not native.

My WD10EARS (not the 2TB variant that this thread was about) looks correct:

# hdparm -I /dev/sdc | grep "Sector size"
Logical  Sector size:   512 bytes
Physical Sector size:  4096 bytes

# hdparm -I /dev/sdc | grep Model
Model Number:   WDC WD10EARS-22Y5B1

> When we have _native_ 4KB sectors things will be much better, at least for
> larger file types.  For mail servers native 4KB sectors will waste a lot of
> platter space...

Don't we already waste that space with our filesystems? Ext2 cannot use
blocks smaller than 1024 bytes, as far as I can see. And by default even
4 kB blocks are used for small filesystems (<5 GB on my /).
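One practical consequence of the 512 B logical / 4096 B physical split: a
partition that does not start on a 4 KiB boundary turns every physical
sector access into a read-modify-write. The alignment check is simple
arithmetic (a sketch; the sysfs path in the comment is an assumption for
your system):

```shell
# is_aligned START_SECTOR: succeeds if a 512 B logical start sector falls
# on a 4 KiB physical boundary (8 logical sectors per physical sector).
is_aligned() { [ $(( $1 % 8 )) -eq 0 ]; }

# The start sector of a partition can be read from sysfs, e.g.:
#   is_aligned "$(cat /sys/block/sdc/sdc1/start)"
is_aligned 2048 && echo "sector 2048: aligned"    # common modern default
is_aligned 63   || echo "sector 63: misaligned"   # old DOS-era default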

J.
-- 
It is not in my power to change anything.
[Agree]   [Disagree]
 




Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stan Hoeppner
Jochen Schulz put forth on 1/11/2011 3:19 AM:

> And those pesky 4k blocks will never take hold. 512 bytes were a good
> idea in the 1950s, so what's wrong with it now!?

4KB blocks are great.  Too bad these drives report 512B blocks to the kernel,
which is what causes the problem.  "Advanced format" = hybrid, not native.

When we have _native_ 4KB sectors things will be much better, at least for
larger file types.  For mail servers native 4KB sectors will waste a lot of
platter space...

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c8aa4.30...@hardwarefreak.com



Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stan Hoeppner
Robert Holtzman put forth on 1/11/2011 1:44 AM:
> On Mon, Jan 10, 2011 at 04:44:13AM -0600, Stan Hoeppner wrote:

>> Interesting advice Bob.  Practice it.
> 
> I did. Read your post again, especially the part that says "This is
> because of your liberal political leanings...".

Yes.  I called black black.  And you blew your cork.  Reread what I was
responding to, here:
http://www.linux-archive.org/debian-user/474373-hard-drive-energy-not-worth-conserving-drives.html

> End of OT discussion. 

Not quite.

How would a sociology or political science professor at the nearest college or
uni to you likely describe that prose or the person who wrote it?  As a
conservative viewpoint?  No.  S/he would describe it as a liberal viewpoint,
just as I did.

Since when is the mere act of correctly identifying something as being liberal
considered an attack of some nature?  How is this any different than calling
Black Black, or Green Green, or Blue Blue?

Did you blow your top because I called Green Green?  Or did you blow your top
because you thought my classification of his prose was incorrect?  Did you feel
I was lying?  Why exactly did you blow your top Bob?

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c8985.6000...@hardwarefreak.com



Re: Basic questions about Debian package tools

2011-01-11 Thread Lisi
On Monday 10 January 2011 12:04:27 pt3...@gmail.com wrote:
> Is it true I should avoid APT and use some other frontend or, better,
> dpkg directly?
> I tried aptitude, but I don't like ncurses-based tools. I prefer the
> classic command line if possible.

I have used aptitude on the command line since shortly after I started using 
Debian.  I have never used the GUI, although I have looked at it.  I much 
prefer the command line.  I am comfortable and familiar with aptitude, and it 
is recommended in Lenny, which I am still using.

I shall certainly install Squeeze with apt-get as instructed, but I doubt that 
I shall immediately stop using Aptitude.

If I want a GUI version of apt (e.g. to give someone who would faint at the sight 
of the command line) I use Synaptic, and I sometimes use Synaptic to search 
for packages.  But I still use aptitude to sort out any problems!

Lisi


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20110632.27422.lisi.re...@gmail.com



Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Stan Hoeppner
teddi...@tmo.blackberry.net put forth on 1/10/2011 11:29 PM:
> 
> I think what we mainly should take from all this is Western Digital sucks and 
> we should never buy their crap...
> 
> I know there are some who will disagree with this, so no flames needed...

Not a flame at all here.  Totally agree WRT Green drives.  As I've stated or 
alluded to many times in this thread, I think their Green "advanced format" 
drives Suck--yes that's with a capital S.  

They may be OK under MS Windows but not Linux.  They may yet work well with 
Linux if/when the partitioners get up to speed.  However I think the power 
savings push is a joke.  Most of the people buying them aren't using them in a 
manner conducive to allowing them to shut down aggressively as they are 
programmed to.  I'm guessing many of these Green drives will start failing at 
the 2-3 year mark due to excessive head parking, prompting WD to pull them from 
the market or rewrite the firmware so they're not as aggressively "Green".  
They'll then rename them after the current Green brand gets a bad reputation.  
They'll rename the entire line, eliminating the Blue and Green lines 
altogether. The new name will be something like "Azure" with marketing speak 
something like "the best features of the former Blue and Green drives".  
They'll keep the Black drives, and introduce a new line so they still have 
three lines.  The new one will be called something like "Red" with a subtitle 
"Formula 1 Ferrari Red" and will be a 10k rpm drive replacing the Raptor line.  
Disclaimer:  I don't work for WDC.  If these things come to pass, it is 
strictly coincidence I mentioned them first. :)

The Blue drives are fine.  My server, through which this email will travel on 
its way to you, uses a single 3.5" 500GB WD Blue drive.  Runs like a champ, no 
problems.  Installed Oct 3, 2009, IIRC:

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   143   142   021    Pre-fail  Always       -       3808
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       33
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   087   087   000    Old_age   Always       -       9494
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       32
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       17
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       15
194 Temperature_Celsius     0x0022   118   104   000    Old_age   Always       -       25
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

I have no personal experience with the Black series drives, but I've neither 
heard nor read anything bad about them.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c7edb.8000...@hardwarefreak.com



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Camaleón
On Tue, 11 Jan 2011 07:13:47 -0600, Stan Hoeppner wrote:

> Camaleón put forth on 1/10/2011 2:11 PM:
> 
>> I used a VM to get the closest environment as you seem to have (a low
>> resource machine) and the above command (timed) gives:
> 
> I'm not sure what you mean by resources in this context.  My box has
> plenty of resources for the task we're discussing.  Each convert
> process, IIRC, was using 80MB on my system.  Only two can run
> simultaneously.  So why queue up 4 or more processes?  That just eats
> memory uselessly for zero decrease in total run time.

I supposed you wouldn't care much about getting a script to run faster with 
all the available cores "occupied" if you had a modern (<4 years old) CPU and 
plenty of fast RAM, because the routine you wanted to run shouldn't take 
much time... unless you were going to process "thousands" of 
images :-)

(...)

> I just made two runs on the same set of photos but downsized them to
> 800x600 to keep the run time down.  (I had you upscale them to 3072x2048
> as your CPUs are much newer)
> 
> $ time for k in *.JPG; do convert $k -resize 800 $k; done
> 
> real1m16.542s
> user1m11.872s
> sys 0m4.104s
> 
> $ time for k in *.JPG; do echo $k; done | xargs -I{} -P2 convert {}
> -resize 800 {}
> 
> real0m41.188s
> user1m14.837s
> sys 0m4.812s
> 
> 41s vs 77s = 53% decrease in run time.  In this case there is
> insufficient memory bandwidth as well.  The Intel BX chipset supports a
> single channel of PC100 memory for a raw bandwidth of 800MB/s.  Image
> manipulation programs will eat all available memory b/w.  On my system,
> running two such processes allows ~400MB/s to each processor socket,
> starving the convert program of memory access.
> 
> To get close to _linear_ scaling in this scenario, one would need
> something like an 8 core AMD Magny Cours system with quad memory
> channels, or whatever the Intel platform is with quad channels.  One
> would run with xargs -P2, allowing each process ~12GB/s of memory
> bandwidth.  This should yield a 90-100% decrease in run time.
> 
>> Running more processes than real cores seems fine, did you try it?
> 
> Define "fine".  

Fine = system not hogging all resources.

> Please post the specs of your SUT, both CPU/mem
> subsystem and OS environment details (what hypervisor and guest).  (SUT
> is IBM speak for System Under Test).

I didn't know the meaning of that "SUT" term... The test was run on a 
laptop (Toshiba Tecra A7) with an Intel Core Duo T2400 (in brief: 2M 
cache, 1.83 GHz, 667 MHz FSB; full specs¹) and 4 GiB of RAM (DDR2).

VM is VirtualBox (4.0) with Windows XP Pro as host and Debian Squeeze as 
guest. The VM was set up to use the 2 cores and 1.5 GiB of system RAM. The 
disk controller is emulated via ich6.

>>> Linux is pretty efficient at scheduling multiple processes among cores
>>> in multiprocessor and/or multi-core systems and achieving near linear
>>> performance scaling.  This is one reason why "fork and forget" is such
>>> a popular method used for parallel programming.  All you have to do is
>>> fork many children and the kernel takes care of scheduling the
>>> processes to run simultaneously.
>> 
>> Yep. It handles the processes quite nicely.
> 
> Are you "new" to the concept of parallel processing and what CPU process
> scheduling is?

No... I guess this is quite similar to what most daemons do when running 
in the background and launching several instances (like "amavisd-new" 
does), but I didn't think there was a direct relation between the number 
of running daemons/processes and the cores available in the CPU. I mean, 
I thought the kernel would automatically handle all the available 
resources the best it can, regardless of the number of cores in use.

¹http://ark.intel.com/Product.aspx?id=27235

Greetings,

-- 
Camaleón


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/pan.2011.01.11.15.38...@gmail.com



Re: [OT]: Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Stan Hoeppner
Dan Serban put forth on 1/10/2011 7:52 PM:
> On Mon, 10 Jan 2011 12:04:19 -0600
> Stan Hoeppner  wrote:
> 
> [snip]
>> http://www.hardwarefreak.com/server-pics/
> 
> Which gallery system are you using?  I quite like it.

That's the result of Curator:
http://furius.ca/curator/

I've been using it for 7+ years.  Debian dropped the package sometime back,
before Etch IIRC.  Last time I installed it I grabbed it from SourceForge.  It's
a python app so you need python and you'll need the imagemagick tools.

Unfortunately its functions are written in a manner that psyco can't optimize.
It's plenty fast though if you're doing a directory structure with only a couple
hundred pic files or less.  My server is pretty old, 550MHz, and I've got a
couple of dirs with thousands of image files.  It takes over 12 hours to process
them.  It processes all subdirs under a dir.  I've found no option to disable
this.  Thus, be mindful of the way you set up your directory structures.  Even if
nothing in a subdir has changed since the last run, curator will still process
all subdirs.  It's pretty fast at doing so, but if you have 100 subdirs with 100
files in each that's 10,000 image files to be looked at, and bumps up the run 
time.

With any modern 2-3GHz x86 AMD/Intel CPU you prolly don't need to worry about
the speed of curator.  I've never run it on a modern chip, just my lowly, but
uber cool, vintage Abit BP6 dual Celeron 3...@550 server, which is the server in
those photos.  I have a tendency to hang onto systems as long as they're still
useful.  At one time it was my workstation/gaming rig.  Those dual Celerons are
now idle 99% of the time, and the machine is usually plenty fast for any
interactive command line or batch work I need to do.

Of note, if you've been reading this thread, you'll notice I use this script and
ImageMagick's convert utility to resize my camera photos before running curator
on them, since I can now resize them almost twice as fast, running 2 parallel
convert processes.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c74d8.6030...@hardwarefreak.com



Re: need help making shell script use two CPUs/cores

2011-01-11 Thread Stan Hoeppner
Camaleón put forth on 1/10/2011 2:11 PM:

> Didn't you run any test? Okay... (now downloading the sample images)

Yes, of course.  I just didn't capture the results to a file.  And it's usually
better if people see their own results instead of someone else's copy/paste.

>> 2.  On your dual processor, or dual core system, execute:
>>
>> for k in *.JPG; do echo $k; done | xargs -I{} -P2 convert {} -resize
>> 3072 {} &
> 
> I used a VM to get the closest environment as you seem to have (a low 
> resource machine) and the above command (timed) gives:

I'm not sure what you mean by resources in this context.  My box has plenty of
resources for the task we're discussing.  Each convert process, IIRC, was using
80MB on my system.  Only two can run simultaneously.  So why queue up 4 or more
processes?  That just eats memory uselessly for zero decrease in total run time.

> real  1m44.038s
> user  2m5.420s
> sys   1m17.561s
> 
> It uses 2 "convert" processes so the files are being run in pairs.
> 
> And you can even get the job done faster if using -P8:
> 
> real  1m25.255s
> user  2m1.792s
> sys   0m43.563s

That's an unexpected result.  I would think running #cores*2^x with an
increasing x value would start yielding lower total run times within a few
multiples of #cores.

> No need to have a quad core with HT. Nice :-)

Use some of the other convert options on large files and you'll want those extra
two real cores. ;)
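As an aside, the `for k in *.JPG; do echo $k; done | xargs ...` pattern
splits file names on whitespace. A safer sketch of the same parallel run
(assuming GNU find and xargs; `cp` into an `out/` directory is a harmless
stand-in for the real convert command):

```shell
# NUL-delimited names survive spaces in filenames; -P2 keeps two workers
# busy. Swap "cp {} out/" for "convert {} -resize 3072 {}" for a real run.
mkdir -p out
find . -maxdepth 1 -name '*.JPG' -print0 |
    xargs -0 -I{} -P2 cp {} out/
```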

>> Now, to compare the "xargs -P" parallel process performance to standard
>> serial performance, clear the temp dir and copy the original files over
>> again.  Now execute:
>>
>> for k in *.JPG; do convert $k -resize 3072 $k; done &
> 
> This gives:
> 
> real  2m30.007s
> user  2m11.908s
> sys   1m42.634s
> 
> Which is ~0.46s. of plus delay. Not that bad.

You mean 46s not 0.46s.  104s vs 150s = 44% decrease in run time.  This _should_
be closer to a 90-100% decrease in a "perfect world".  In this case there is
insufficient memory bandwidth to feed all the processors.

I just made two runs on the same set of photos but downsized them to 800x600 to
keep the run time down.  (I had you upscale them to 3072x2048 as your CPUs are
much newer)

$ time for k in *.JPG; do convert $k -resize 800 $k; done

real1m16.542s
user1m11.872s
sys 0m4.104s

$ time for k in *.JPG; do echo $k; done | xargs -I{} -P2 convert {} -resize 800 
{}

real0m41.188s
user1m14.837s
sys 0m4.812s

41s vs 77s = 53% decrease in run time.  In this case there is insufficient
memory bandwidth as well.  The Intel BX chipset supports a single channel of
PC100 memory for a raw bandwidth of 800MB/s.  Image manipulation programs will
eat all available memory b/w.  On my system, running two such processes allows
~400MB/s to each processor socket, starving the convert program of memory 
access.

To get close to _linear_ scaling in this scenario, one would need something like
an 8 core AMD Magny Cours system with quad memory channels, or whatever the
Intel platform is with quad channels.  One would run with xargs -P2, allowing
each process ~12GB/s of memory bandwidth.  This should yield a 90-100% decrease
in run time.

> Running more processes than real cores seems fine, did you try it?

Define "fine".  Please post the specs of your SUT, both CPU/mem subsystem and OS
environment details (what hypervisor and guest).  (SUT is IBM speak for System
Under Test).

>> Linux is pretty efficient at scheduling multiple processes among cores
>> in multiprocessor and/or multi-core systems and achieving near linear
>> performance scaling.  This is one reason why "fork and forget" is such a
>> popular method used for parallel programming.  All you have to do is
>> fork many children and the kernel takes care of scheduling the processes
>> to run simultaneously.
> 
> Yep. It handles the processes quite nicely.

Are you "new" to the concept of parallel processing and what CPU process
scheduling is?

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c578b.3090...@hardwarefreak.com



Re: Squeeze: Gnome icons missing

2011-01-11 Thread Camaleón
On Mon, 10 Jan 2011 17:35:12 -0800, Alan Ianson wrote:

> On Mon, 10 Jan 2011 22:00:13 + (UTC) Zeissmann wrote:

>> Recently I've installed Debian Squeeze with Gnome on a new laptop.
>> Unfortunately I'm missing some of the icons. Namely missing are those
>> under the System menu and all of those in the right-click menus. I've
>> got all the basic Gnome icons installed which I've checked with the
>> packages present on the old laptop. Has anyone got a clue what's wrong?
>> 
>> 
>> 
> I'm not sure why but those are off by default now and there is no easy
> way to set it now (that I know of).

IIRC, it was an upstream (GNOME) decision which Debian (as well as other 
distributions) also seems to have adopted. I have to admit I fully agree with 
it :-)

> I turn them on by running gconf2 or the configuration editor and
> navigating to Desktop -> Gnome - Interface and checking
> "buttons_have_icons" and "menus_have_icons".

Yep, that is the way to restore the icons.
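For the terminal-inclined, the same two keys can be flipped without opening
the configuration editor (key paths as used by GNOME 2.x gconf; an
assumption for other versions):

```shell
# Re-enable menu and button icons from the command line (GNOME 2.x keys)
gconftool-2 --type bool --set /desktop/gnome/interface/menus_have_icons true
gconftool-2 --type bool --set /desktop/gnome/interface/buttons_have_icons true
```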

Greetings,

-- 
Camaleón


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/pan.2011.01.11.12.35...@gmail.com



Re: Autorun installation CD

2011-01-11 Thread Juha Tuuna
On 11.1.2011 11:41, Kousik Maiti wrote:
> Hi List,
> I want to create a CD that can run automatically on Linux system and start
> installation  from a .jar file. Is it possible? I googled it but don't get any
> proper answer.
> 
> Thanks in advance.

Hi, afaik Gnome and KDE run CDs automatically (if configured to do so). A normal
autorun.inf should do the trick.
Other desktops or window managers may or may not run CDs, but in any case users
can disable that feature if they wish. Some people even prefer mounting CDs 
manually.

-- 
Juha Tuuna


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/4d2c31ac.3090...@iki.fi



Re: Autorun installation CD

2011-01-11 Thread Darac Marjal
On Tue, Jan 11, 2011 at 03:11:57PM +0530, Kousik Maiti wrote:
>Hi List,
>I want to create a CD that can run automatically on Linux system and start
>installation  from a .jar file. Is it possible? I googled it but don't
>get any proper answer.

Freedesktop-compliant machines support the use of an autorun script in
the same way that Windows does. See
http://specifications.freedesktop.org/autostart-spec/autostart-spec-latest.html#mounting
for the specification.
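For reference, a minimal autorun script along the lines of that spec might
look like this (the spec recognizes executables named .autorun, autorun, or
autorun.sh in the medium's root; "installer.jar" is a placeholder name, and
a Java runtime on the user's PATH is assumed):

```shell
#!/bin/sh
# Sketch of an autorun.sh placed in the root of the CD image.
# Change to the directory this script lives in, then launch the installer.
cd "$(dirname "$0")" || exit 1
if [ -f installer.jar ]; then
    exec java -jar installer.jar
else
    echo "installer.jar not found next to this script" >&2
fi
```

Note that most desktops prompt the user before running such a script, and
many users disable autorun entirely, so the CD should also document a
manual `java -jar installer.jar` invocation.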





Autorun installation CD

2011-01-11 Thread Kousik Maiti
Hi List,
I want to create a CD that can run automatically on a Linux system and start
an installation from a .jar file. Is it possible? I googled it but didn't get
any proper answer.

Thanks in advance.

-- 
Wishing you the very best of everything, always!!!
Kousik Maiti(কৌশিক মাইতি)
Registered Linux User #474025
Registered Ubuntu User # 28654


Re: [OT] Hard Drive Energy Not Worth Conserving drives?

2011-01-11 Thread Jochen Schulz
teddi...@tmo.blackberry.net:
> 
> I think what we mainly should take from all this is Western Digital
> sucks and we should never buy their crap...

Yeah, we should rush out and buy Samsung drives with their faulty
firmware which "forgets" write operations if one sends the wrong IDE
command at the right time. (I know, there's a fix, but since it has the
same version number as the buggy firmware, you can never be sure. :))

And those pesky 4k blocks will never take hold. 512 bytes were a good
idea in the 1950s, so what's wrong with it now!?

J.
-- 
If I was Mark Chapman I would have shot John Lennon with a water pistol.
[Agree]   [Disagree]
 

