Re: [SLUG] Ubuntu 8.04 issues - are there any?

2008-04-29 Thread jam

On Tue, 2008-04-29 at 12:00 +1000, [EMAIL PROTECTED] wrote:
> I got this comment from another list (Cinepaint):
> 
> After the CinePaint problems I found several other
> difficulties that
> would not resolve on my system and I have now
> abandoned this latest
> Ubuntu release. ... much more comfortable!
> 
> Has anyone else had any problems? I'm thinking of upgrading
> several machines.

NVIDIA issues are a nightmare. It took me two days of groveling to get
my two monitors detected, the EDID read and used, and twin-head working on a
ViewSonic 1680x1050 with a Philips 1280x1024. (Gutsy was working, and I
saved xorg.conf.)

Digital media automounts and then does nothing, vs Gutsy where the
download happened.

The camera downloaded photos on Gutsy; on Hardy it did not, despite the
media settings.

After 3 days of trying to get my wife's machine 'back so she could just
use it' I threw in the towel and reinstalled Gutsy. Whimpering
stopped ...

MP3/Ogg files on the desktop just played (mouse over). She liked that.

James

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Ubuntu 8.04 issues - are there any?

2008-04-29 Thread James
On Tue, Apr 29, 2008 at 9:26 PM, jam <[EMAIL PROTECTED]> wrote:

>
> On Tue, 2008-04-29 at 12:00 +1000, [EMAIL PROTECTED] wrote:
> > I got this comment from another list (Cinepaint):
> >
> > After the CinePaint problems I found several other
> > difficulties that
> > would not resolve on my system and I have now
> > abandoned this latest
> > Ubuntu release. ... much more comfortable!
> >
> > Has anyone else had any problems? I'm thinking of upgrading
> > several machines.
>
> NVIDIA issues are a nightmare. It took me two days of groveling to get
> my two monitors detected, idid read and used, and twinhead on a
> Viewsonic 1680x1050 with a phillips 1280x1024. (Gutsy was working, and I
> saved xorg.conf)
>
>
Assuming you're using the proprietary nVidia drivers, are you using
nvidia-settings to detect the two monitors? I got mine working pretty much
straight away using that package.

Kind regards,

James Foster


[SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread Howard Lowndes
I have a PHP script that inserts around 100K records into a table each
time it runs.

It starts off at a good pace but gets progressively slower until it falls
over complaining that it cannot allocate sufficient memory.

I have increased the execution-time and memory limits in the script with:
ini_set('max_execution_time', '3600');
ini_set('memory_limit', '128M');
but this only seems to delay the crash.

I have also tried closing and reopening the database every 10K inserts,
but that doesn't seem to speed things up either.

Any other suggestions?


-- 
Howard
LANNet Computing Associates 
When you want a computer system that works, just choose Linux;
When you want a computer system that works, just, choose Microsoft.



Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread Michael Chesterton


On 30/04/2008, at 2:32 AM, Howard Lowndes wrote:

I have a PHP script that inserts around 100K of records into a table
each time that it runs.

It starts off at a good pace but gets progressively slower until it
falls over complaining that it cannot allocate sufficient memory.


I have also tried closing and reopening the database every 10K inserts,
but that doesn't seem to speed things up either.

Any other suggestions?


How are you inserting the data? What library?
It might be that you need to free or flush the handle.


Michael Chesterton
http://chesterton.id.au/blog/
http://barrang.com.au/



Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread Rick Welykochy

Howard Lowndes wrote:


I have a PHP script that inserts around 100K of records into a table on
each time that it runs.

It starts off at a good pace but gets progressively slower until it falls
over complaining that it cannot allocate sufficient memory.


When I needed to do this in MySQL, I used a LOAD DATA INFILE SQL command
that loads the data from a CSV or TSV file. It completes very quickly,
with only one round trip to the server.

Is there a similar command in PostgreSQL?


cheers
rickw


--

Rick Welykochy || Praxis Services || Internet Driving Instructor

Tis the dream of each programmer before his life is done,
To write three lines of APL and make the damn thing run.


Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread justin randell
On Wed, Apr 30, 2008 at 2:32 AM, Howard Lowndes <[EMAIL PROTECTED]> wrote:
> I have a PHP script that inserts around 100K of records into a table on
>  each time that it runs.

maybe pastebin the relevant bits somewhere?

>  It starts off at a good pace but gets progressively slower until it falls
>  over complaining that it cannot allocate sufficient memory.
>
>  I have increased the memory allocation in the script with:
>  ini_set('max_execution_time', '3600');
>  ini_set('memory_limit', '128M');
>   but this only seems to delay the crash.
>
>  I have also tried closing and reoprning the database` every 10K inserts,
>  but that doesn't seem to speed things up either.
>
>  Any other suggestions?

php4 or php5?

php5 has some nasty memory leaks with objects that reference each other:

http://bugs.php.net/bug.php?id=33595

if this applies to you, there are workarounds in the issue.


Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread Robert Collins
On Wed, 2008-04-30 at 02:32 +1000, Howard Lowndes wrote:
> I have a PHP script that inserts around 100K of records into a table on
> each time that it runs.
> 
> It starts off at a good pace but gets progressively slower until it falls
> over complaining that it cannot allocate sufficient memory.
> 
> I have increased the memory allocation in the script with:
> ini_set('max_execution_time', '3600');
> ini_set('memory_limit', '128M');
>  but this only seems to delay the crash.
> 
> I have also tried closing and reoprning the database` every 10K inserts,
> but that doesn't seem to speed things up either.
> 
> Any other suggestions?

Are you doing this as one transaction or many?
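For what it's worth, the usual middle ground is committing every N rows. A quick sketch of the idea, using Python's built-in sqlite3 as a stand-in for PHP + PostgreSQL (the table and batch size are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, value REAL)")

BATCH = 10_000  # commit every 10K rows instead of one giant transaction

pending = 0
for row in ((i, i * 0.5) for i in range(100_000)):
    conn.execute("INSERT INTO readings VALUES (?, ?)", row)
    pending += 1
    if pending % BATCH == 0:
        conn.commit()          # bound each transaction's footprint

conn.commit()                  # flush the final partial batch

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # → 100000
```

The same shape works with pg_query() in PHP: issue BEGIN, insert a batch, COMMIT, repeat.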

-Rob
-- 
GPG key available at: .



Re: [SLUG] Ubuntu 8.04 issues - are there any?

2008-04-29 Thread jam

On Wed, 2008-04-30 at 10:05 +1000, [EMAIL PROTECTED] wrote:
> > > I got this comment from another list (Cinepaint):
> > >
> > > After the CinePaint problems I found
> several other
> > > difficulties that
> > > would not resolve on my system and I have
> now
> > > abandoned this latest
> > > Ubuntu release. ... much more comfortable!
> > >
> > > Has anyone else had any problems? I'm thinking of
> upgrading
> > > several machines.
> >
> > NVIDIA issues are a nightmare. It took me two days of
> groveling to get
> > my two monitors detected, idid read and used, and twinhead
> on a
> > Viewsonic 1680x1050 with a phillips 1280x1024. (Gutsy was
> working, and I
> > saved xorg.conf)
> >
> >
> Assuming you're using the proprietary nVidia drivers, are you
> using
> nvidia-settings to detect the two monitors? I got mine working
> pretty much
> straight away using that package.

Sheer luxury!

When we were young ... the bluddy resolution (640x480) was too low to
let you get to the [Apply] button on nvidia-settings, which does not
support any geometry settings. So nvidia-settings, counting Tab presses
and hitting Return, were tried until a fair slice of my life had burned
off. Eventually the solution started with

Option "UseEdidFreqs" "false"

from https://help.ubuntu.com/community/FixVideoResolutionHowto
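(For anyone following along, that option goes in the Device section of
xorg.conf; a minimal sketch, with an arbitrary Identifier:)

```
Section "Device"
    Identifier "Configured Video Device"
    Driver     "nvidia"
    Option     "UseEdidFreqs" "false"
EndSection
```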

James



[SLUG] NTFS HD, chkdsk and ntfsresize?

2008-04-29 Thread bill
I have a 120 GB SATA HD formatted NTFS (2 partitions - don't think the 2nd is
used/formatted, as only 1 shows up) in my Kubuntu Hardy PC.


Can't mount the HD; it requires CHKDSK to be run, as the error message says
the drive was not shut down properly.


Haven't had an XP install on any of my desktop PCs for 2 years now, so
can't fix the problem that way. Don't want to have to do an XP install (with
all the stuffing around to get and install mobo SATA drivers), so I tried
the following, gleaned from Googling:


[EMAIL PROTECTED]:~$ sudo ntfsresize -i /dev/sda1
ntfsresize v2.0.0 (libntfs 10:0:0)
Device name: /dev/sda1
NTFS volume version: 3.1
Cluster size   : 4096 bytes
Current volume size: 57675629056 bytes (57676 MB)
Current device size: 57675631104 bytes (57676 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use   : 43896 MB (76.1%)
Collecting resizing constraints ...
You might resize at 43895435264 bytes or 43896 MB (freeing 13780 MB).
Please make a test run using both the -n and -s options before real 
resizing!

[EMAIL PROTECTED]:~$


[EMAIL PROTECTED]:~$ sudo ntfsresize -n --force -s 43896M /dev/sda1
ntfsresize v2.0.0 (libntfs 10:0:0)
Device name: /dev/sda1
NTFS volume version: 3.1
Cluster size   : 4096 bytes
Current volume size: 57675629056 bytes (57676 MB)
Current device size: 57675631104 bytes (57676 MB)
New volume size: 43895992832 bytes (43896 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use   : 43896 MB (76.1%)
Collecting resizing constraints ...
Needed relocations : 3364169 (13780 MB)
Schedule chkdsk for NTFS consistency check at Windows boot time ...
Resetting $LogFile ... (this might take a while)
Relocating needed data ...
100.00 percent completed
Updating $BadClust file ...
Updating $Bitmap file ...
Updating Boot record ...
The read-only test run ended successfully.
[EMAIL PROTECTED]:~$


Note - Using 57676M (obtained from the result of sudo ntfsresize -i
/dev/sda1 above) didn't work.


Is it safe to use sudo ntfsresize --force -s 43896M /dev/sda1, or do I
risk losing my data?


Thanks

Bill


Re: [SLUG] multiple domain to one web site

2008-04-29 Thread Martin Barry
$quoted_author = "Voytek Eymont" ;
> 
> do I need anything else in apache virtual container?:

No, its only purpose is to redirect to a different container.
 

> ---
> 
> <VirtualHost *>
> ServerAdmin [EMAIL PROTECTED]
> ServerName www.new.net.au
> ServerAlias new.net.au
> 
> Redirect permanent / http://www.old.com
> </VirtualHost>
> 
> ---

As Jeff posted in his blog [1], libapache2-redirtoservname (in Debian
derivatives at least) provides the functionality to do this in one line and
without the extra container.

[1] 
http://bethesignal.org/blog/2008/04/24/smooth-upgrade-to-ubuntu-804-lts-on-my-linode/

cheers
marty

-- 
I simply tell them "If _I_ don't have a ticket number then _you_ don't have 
a problem. Call the helpdesk." Repeat as many times as necessary.
- Jay Mottern

alt.sysadmin.recovery - <[EMAIL PROTECTED]>


Re: [SLUG] Outputing progress counters with PHP/HTML

2008-04-29 Thread Howard Lowndes
I'm sorry, I should have been more specific on this.

It's PHP5 and the input is a text file from a data logger which means that
I have to find the data records then decompose each into the individual
data elements to insert into the database, so it's not just a case of a
simple .SQL file as input.

The process is:
Read a line from the file.
Decompose the line into the data elements.
For each data element, do a select on the database to see whether it
already exists; if not, do an insert into the database.
Rinse and repeat...

It certainly has the smell of a PHP memory leak, but I am just not sure
how to work around it.
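Tying the two threads together, here is a compressed sketch of that loop - in Python with sqlite3 standing in for PHP/PostgreSQL, and an invented record format - with small transactions and a flushed progress line every 100 records (in PHP the flush would be flush()/ob_flush()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elements (name TEXT PRIMARY KEY)")

def logger_lines():                      # invented stand-in for the log file
    for i in range(1000):
        yield f"REC,{i % 300},{i % 500}"

inserted = 0
for n, line in enumerate(logger_lines(), 1):
    for element in line.split(",")[1:]:          # decompose the record
        seen = conn.execute(
            "SELECT 1 FROM elements WHERE name = ?", (element,)).fetchone()
        if seen is None:
            conn.execute("INSERT INTO elements VALUES (?)", (element,))
            inserted += 1
    if n % 100 == 0:
        conn.commit()                            # keep each transaction small
        print(f"{n} lines processed", flush=True)  # progress reaches the user now

conn.commit()
print(inserted)
```

The explicit flush is what stops the counter being buffered until the script finishes.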


On Tue, April 29, 2008 15:45, Howard Lowndes wrote:
> I have a need to output a progress counter from a PHP script that takes a
> while to run whilst writing a large number of records out to an SQL
> database, mainly so that the user knows that things are still happening
> and not hung.
>
> It seems a simple thing to do, but when I try it, the progress counter
> (say, every 100 records) instead of being output at the correct time, gets
> delayed until the whole process has finished.
>
> What is the best way to get around this problem.
>
>
> --
> Howard
> LANNet Computing Associates 
> When you want a computer system that works, just choose Linux;
> When you want a computer system that works, just, choose Microsoft.
>
>


-- 
Howard
LANNet Computing Associates 
When you want a computer system that works, just choose Linux;
When you want a computer system that works, just, choose Microsoft.



Re: [SLUG] Outputing progress counters with PHP/HTML

2008-04-29 Thread Matthew Hannigan
On Wed, Apr 30, 2008 at 12:08:50PM +1000, Howard Lowndes wrote:
> I'm sorry, I should have been more specific on  this.
> 
> It's PHP5 and the input is a text file from a data logger which means that
> I have to find the data records then decompose each into the individual
> data elements to insert into the database, so it's not just a case of a
> simple .SQL file as input.
> 
> The process is:
> Read a line from the file.
> Decompose the line into the data elements.
> For each data element, do a select on the database to see whether it
 ^^^
> already exists, if not then do an insert into the database.
> Rinse and repeat...
> 
> It certainly has the smell of being a PHP memory leak, but how I can work
> around it I am just not sure.

You do a select before every insert?!  Is the table indexed?
If not, that might explain the slowdown; a select WILL take
longer the bigger the table.
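The difference is easy to see in any engine's query planner. A quick sketch with Python's sqlite3 (a stand-in for PostgreSQL; table and column names are invented), showing the planner switch from a full scan to an index lookup once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT, v REAL)")

# Without an index, every pre-insert SELECT reads the whole table,
# so each lookup gets slower as the table grows.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 'x'").fetchone()[-1]
print(plan_before)        # a full-table SCAN

conn.execute("CREATE INDEX idx_k ON t (k)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 'x'").fetchone()[-1]
print(plan_after)         # a SEARCH ... USING INDEX idx_k
```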


Matt



Re: [SLUG] Outputing progress counters with PHP/HTML

2008-04-29 Thread Robert Collins
On Wed, 2008-04-30 at 12:30 +1000, Matthew Hannigan wrote:
> 
> 
> You do a select before every insert?!  Is the table indexed?
> If not that might explain the slowdown; a select WILL take
> longer the bigger the table.

Also when adding 100K records the stats for the table could become
incorrect quite rapidly; this can lead to bad query plans even on an
indexed table.

But I'm guessing this is being done in a single transaction which is not
nice to the database :P

-Rob

-- 
GPG key available at: .



Re: [SLUG] Outputing progress counters with PHP/HTML

2008-04-29 Thread Sonia Hamilton
On Wed, 2008-04-30 at 12:30 +1000, Matthew Hannigan wrote:
> On Wed, Apr 30, 2008 at 12:08:50PM +1000, Howard Lowndes wrote:
> > 
> > The process is:
> > Read a line from the file.
> > Decompose the line into the data elements.
> > For each data element, do a select on the database to see whether it
>  ^^^
> > already exists, if not then do an insert into the database.
> > Rinse and repeat...
> > 
> > It certainly has the smell of being a PHP memory leak, but how I can work
> > around it I am just not sure.
> 
> You do a select before every insert?!  Is the table indexed?
> If not that might explain the slowdown; a select WILL take
> longer the bigger the table.

Perhaps better to put a unique index on the appropriate column(s), then
just do an insert and throw away the error if the data is already there.
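A sketch of that approach (again with Python's sqlite3 as a stand-in; the timestamp column is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts TEXT, value REAL)")
conn.execute("CREATE UNIQUE INDEX idx_ts ON readings (ts)")

data = [("2008-04-29 10:00", 1.0),
        ("2008-04-29 10:05", 2.0),
        ("2008-04-29 10:00", 1.0)]   # duplicate row

for row in data:
    try:
        conn.execute("INSERT INTO readings VALUES (?, ?)", row)
    except sqlite3.IntegrityError:
        pass  # already there -- throw the error away, as suggested

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # → 2
```

One caveat for Howard's case: in PostgreSQL a failed INSERT aborts the whole transaction unless each insert is wrapped in a SAVEPOINT, so this trick wants per-row commits or savepoints there.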

-- 
Thanks,
.
Sonia Hamilton
http://www.snowfrog.net
http://training.snowfrog.net
http://www.linkedin.com/in/soniahamilton
.
Your manuscript is both good and original; but the part that is good is
not original and the part that is original is not good - Samuel Johnson.



Re: [SLUG] NTFS HD, chkdsk and ntfsresize?

2008-04-29 Thread Amos Shapira
On Wed, Apr 30, 2008 at 11:42 AM, bill <[EMAIL PROTECTED]> wrote:
>  Note - Using 57676M ( obtained from result of sudo ntfsresize -i /dev/sda1
> above) didnt work.
>
>  Is it safe to use sudo ntfsresize  --force -s 43896M /dev/sda1 or do I risk
> losing my data?

You should first resize the partition in the partition table: using
fdisk, delete then re-create the partition and set its type to "7"
(NTFS). Have you done that?
After that is done, ntfsresize will by default automatically resize
the file system to occupy the entire partition (see the bottom of the
output from running "ntfsresize" without arguments).

--Amos


Re: [SLUG] PostgreSQL slowing down on INSERT

2008-04-29 Thread Rick Welykochy

Rick Welykochy wrote this and replies to himself:


Howard Lowndes wrote:


I have a PHP script that inserts around 100K of records into a table on
each time that it runs.

It starts off at a good pace but gets progressively slower until it falls
over complaining that it cannot allocate sufficient memory.


When I needed to do this in MySQL, I used a LOAD DATA INFILE SQL command
that loads the data from a CSV or TSV file. It completes very quickly,
with only one round trip to the server.

Is there a similar command in PostgreSQL?


Found it. It is called COPY in PostgreSQL.



They recommend turning AUTOCOMMIT off.

Then take Sonia's recommendation of enforcing a unique index
so the dupes are chucked out.

Fire off one single COPY command from PHP and see how that works.
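Roughly like this - sketched with Python/sqlite3 since there's no Postgres to hand here, with executemany standing in for the single bulk command (in real PostgreSQL the last step would be COPY readings FROM '/path/file.tsv', or psql's \copy):

```python
import csv
import sqlite3
import tempfile

# Stage the rows in a TSV file first, as with MySQL's LOAD DATA INFILE.
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False,
                                 newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows((i, i * 2) for i in range(1000))
    path = f.name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, value INTEGER)")

# One bulk operation instead of 1000 separate INSERT round trips.
with open(path, newline="") as f:
    conn.executemany("INSERT INTO readings VALUES (?, ?)",
                     csv.reader(f, delimiter="\t"))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # → 1000
```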


cheers
rickw


--

Rick Welykochy || Praxis Services || Internet Driving Instructor

Tis the dream of each programmer before his life is done,
To write three lines of APL and make the damn thing run.


[SLUG] OT Open Source CRM Package

2008-04-29 Thread leei
Hi all,

Sorry to bother all of you.

I hope that you might have a solution for me regarding the below.

We are looking for an Open Source CRM package that will take all e-mails
from Microsoft Outlook, export the mail folders into the CRM package, and
store all the e-mails on the server, so that all customer details are on
the server. The CRM package should support levels of security so that the
MD's data and the staff's data are kept separate.

Thanks,

Regards,
Lee






