Re: [Discuss] Please help convert a .pst file for use by Evolution

2012-10-04 Thread Jerry Feldman
On 10/03/2012 11:15 PM, Bill Horne wrote:
 Thanks for reading this. 

 I've just flattened a seriously owned Wintel box and installed Ubuntu 11.04 
 LTS on it. 

 The only remaining task is to convert my wife's emails into a format
 that Evolution can use. I've read a bit of the doc on libpst, but it
 seems to be overkill for a one-time transfer, so I'm looking for
 alternatives.

 I have a laptop with Windows 7 and Outhouse Express on it, but that's
 the only M$ email software available if any is needed. The files were 
 created by Outlook 2000. The .pst file is about 500 KB. 

 All suggestions welcome. TIA.

There are a number of ways to do this. Thunderbird
(http://kb.mozillazine.org/Import_.pst_files) can convert this more or
less automatically. Evolution has a similar function:
http://askubuntu.com/questions/84239/import-pst-in-evolution-3-2-1
Do you want to import those files on Windows or on Linux? In any case,
once you convert them to mbox format, they are more portable. If you
transfer the entire Outhouse mail directory to Linux, you can import it
easily, or you can use Thunderbird on Windows to do the import there.
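If you do end up wanting the command-line route after all, the libpst
tools you mentioned boil down to a single command. A sketch (the package
name is Ubuntu's, and the .pst filename here is a placeholder):

```shell
# Install the libpst command-line tools (Ubuntu/Debian package name).
sudo apt-get install pst-utils

# Convert the .pst into mbox files, one per Outlook folder,
# written under ~/converted-mail.
mkdir -p ~/converted-mail
readpst -o ~/converted-mail outlook.pst
```

Both Evolution and Thunderbird can then open the resulting mbox files
directly.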

-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id:3BC1EB90 
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] Looking for some good people at my company

2012-10-04 Thread Mark Woodward

We need some good people.

(1) We need a Linux configuration guru to build and test kernels, 
package software in RPMs (or debs), work with PAM modules, and code in 
Perl, Python, and C. You would really need to be able to roll your own 
system upgrade. The interview process is tough, and you need to be able 
to withstand some egos. OpenSSL, OpenSSH, and all forms of Linux 
configuration are important.


(2) We need a couple of fantastic C programmers. You would need to be 
very familiar with file system concepts, large data sets, threading, and 
memory management, and be able to explain the difference between BSD 
and Linux mutexes. You will also need to be solid with the standard 
algorithms (hashes, trees, lists) as well as the more advanced indexing 
techniques. Understanding how virtual memory paging affects algorithms 
otherwise expressed in big-O notation is a good start. This is not an 
entry-level position; it is a position where we need people who know 
their stuff.


Contact me, and I'll tell you the company. You can't blame me for 
wanting the referral!



[Discuss] hard drive burn-in

2012-10-04 Thread Tom Metro
Back on 2008-09-19, in the thread:
http://thread.gmane.org/gmane.org.user-groups.linux.boston.discuss/30555/focus=30556

Jarod Wilson wrote:
 I'm partial to running mkfs with full read/write badblocks checking.
 Basically, make a single partition covering the whole disk, then:
 
 # mkfs.ext3 -c -c /dev/sdx1

'mkfs.ext3 -c' calls out to the 'badblocks' command, passing it
arguments so 'badblocks' does the right thing in the context of an ext3
file system.

I've since had occasion to use the 'badblocks' command directly to fix
up a pending failed sector lingering in a swap partition. Pretty handy
command.
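For the curious, that swap-partition fix amounted to something like the
following (a sketch; the partition name is a placeholder, and the write
test destroys the swap contents, which doesn't matter for swap):

```shell
swapoff /dev/sda2          # take the partition out of use
badblocks -w -s /dev/sda2  # destructive write test; writing to the bad
                           # sector forces the drive to reallocate it
mkswap /dev/sda2           # recreate the swap signature
swapon /dev/sda2           # put it back in service
```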

I've also found it can be handy to use it directly for drive burn-in.
For example, this:

badblocks -w -b 4096 -s /dev/sdc

will cause badblocks to destructively write 4 different bit patterns to
the drive and verify them (it has non-destructive modes as well). You
can do this to the raw device, so that the MBR and partition table areas
also get exercised. (The switch specifying a 4096-byte block size may be
irrelevant in this context. I simply carried it over from the argument
mkfs.ext3 uses when it calls badblocks, not because block alignment is
needed, but in the hope it might speed up processing over the default
1024-byte block size.)

It isn't particularly fast. It looks like it is taking 48 hours to write
the 4 patterns to a 500 GB USB-connected drive. Maybe the slow USB link
is to blame. I did notice it writes one pattern to the entire drive,
then seeks back to the beginning and writes the next, rather than
cycling through all 4 patterns at each spot before moving on.

As recommended in the prior thread, it's also good to incorporate SMART
monitoring into any burn-in operation. I captured the SMART data from
the drive:

smartctl -d sat -a /dev/sdc

('-d sat' because it is a USB drive; newer smartctl probably doesn't
need it)

and ran a long test:

smartctl -d sat -t long /dev/sdc

prior to starting the burn-in. I'll repeat that after to see if any of
the failure counters have incremented.
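A simple way to make that before/after comparison is to save the
attribute table to a file on each side of the burn-in and diff the two
(a sketch; the filenames are arbitrary):

```shell
smartctl -d sat -A /dev/sdc > smart-before.txt   # before burn-in
# ... badblocks run and long self-test go here ...
smartctl -d sat -A /dev/sdc > smart-after.txt    # after burn-in

# Any change in Reallocated_Sector_Ct, Current_Pending_Sector, or
# Offline_Uncorrectable will show up in the diff.
diff smart-before.txt smart-after.txt
```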

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
Enterprise solutions through open source.
Professional Profile: http://tmetro.venturelogic.com/