Re: [sqlite] Save my harddrive!

2006-01-27 Thread John Stanton
Reading and writing data to a hard drive is not what wears it out.  The
critical element is the spindle bearing, and it is in use whenever the
disk is powered up and spinning.  The heads float on air and the
actuator bearing does little work.  In simplistic terms, a disk drive
has a certain number of rotations in its life, and when they are
exhausted the bearing will fail.


If you are concerned about hard disk life, keep it cool and in clean air 
and don't shock it.  Use it as much as you like.




RE: Re[2]: [sqlite] Save my harddrive!

2006-01-27 Thread nbiggs
Thanks for everyone's input; that's what I wanted to hear.

-Original Message-
From: Teg [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 27, 2006 2:06 PM
To: nbiggs
Subject: Re[2]: [sqlite] Save my harddrive!

Hello nbiggs,

My users typically download between 3 and 40 gigs of data a day to
commodity IDE hard drives. This means downloading files in pieces and,
when there are enough pieces to create the file, assembling the file on
the hard disk at maximum speed. The files range from 60K to 50 Megs
each. During download they sustain fairly constant writes to disk of
between 1.5 and 10 Mbps. Some run 24x7 (and some have been tossed out
by their ISPs).

I've asked them whether they've been seeing increased failure rates on
their hard drives (I use SCSI only, so mine are designed for this kind
of usage). The results were inconclusive. Some have lost hard drives,
but for the most part their hard disks just crunch away for years at a
time.

I think it unlikely that your usage is more than a blip of data to the
hard drive.

C

Friday, January 27, 2006, 12:26:15 PM, you wrote:

n> This is what I am inserting per record.
n> Insert into table values(1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
n> '2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

n> Other than that, I do some updates on the last field by setting the
n> value to 1 or 2.


n> -Original Message-
n> From: Robert Simpson [mailto:[EMAIL PROTECTED] 
n> Sent: Friday, January 27, 2006 12:06 PM
n> To: sqlite-users@sqlite.org
n> Subject: Re: [sqlite] Save my harddrive!

n> - Original Message - 
n> From: "nbiggs" <[EMAIL PROTECTED]>
>>
>> My application generates about 12 records a second.  I have no problems
>> storing the records into the database, but started thinking: if I
>> commit every 12 records, will my hard drive eventually die from extreme
>> usage?  During a 24 hour period up to 1 million records will be
>> generated and inserted.  At the end of the day, all the records will be
>> deleted and the inserts will start again for another 24 hours.
>>
>> Can I store the records into memory, or just not commit as often, maybe
>> once every 5 minutes while still protecting my data in case of a PC
>> crash or unexpected shutdown due to user ignorance?
>>
>> Does anyone have any ideas for this type of situation?

n> How large are these rows?  12 inserts a second is chump change if
n> they're 
n> small ... If you're inserting 100k blobs then you may want to rethink
n> things.

n> At 12 rows per second (given a relatively small row), 24hrs of usage will
n> still be less than the amount of harddrive churning involved in a single
n> reboot of your machine.  Consider that a fast app can insert about 1
n> million 
n> rows into a SQLite table in about 15 seconds.

n> Robert




-- 
Best regards,
 Teg  mailto:[EMAIL PROTECTED]



Re[2]: [sqlite] Save my harddrive!

2006-01-27 Thread Teg
Hello nbiggs,

My users typically download between 3 and 40 gigs of data a day to
commodity IDE hard drives. This means downloading files in pieces and,
when there are enough pieces to create the file, assembling the file on
the hard disk at maximum speed. The files range from 60K to 50 Megs
each. During download they sustain fairly constant writes to disk of
between 1.5 and 10 Mbps. Some run 24x7 (and some have been tossed out
by their ISPs).

I've asked them whether they've been seeing increased failure rates on
their hard drives (I use SCSI only, so mine are designed for this kind
of usage). The results were inconclusive. Some have lost hard drives,
but for the most part their hard disks just crunch away for years at a
time.

I think it unlikely that your usage is more than a blip of data to the
hard drive.

C

Friday, January 27, 2006, 12:26:15 PM, you wrote:

n> This is what I am inserting per record.
n> Insert into table values(1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
n> '2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

n> Other than that, I do some updates on the last field by setting the
n> value to 1 or 2.


n> -Original Message-
n> From: Robert Simpson [mailto:[EMAIL PROTECTED] 
n> Sent: Friday, January 27, 2006 12:06 PM
n> To: sqlite-users@sqlite.org
n> Subject: Re: [sqlite] Save my harddrive!

n> - Original Message - 
n> From: "nbiggs" <[EMAIL PROTECTED]>
>>
>> My application generates about 12 records a second.  I have no problems
>> storing the records into the database, but started thinking: if I
>> commit every 12 records, will my hard drive eventually die from extreme
>> usage?  During a 24 hour period up to 1 million records will be
>> generated and inserted.  At the end of the day, all the records will be
>> deleted and the inserts will start again for another 24 hours.
>>
>> Can I store the records into memory, or just not commit as often, maybe
>> once every 5 minutes while still protecting my data in case of a PC
>> crash or unexpected shutdown due to user ignorance?
>>
>> Does anyone have any ideas for this type of situation?

n> How large are these rows?  12 inserts a second is chump change if
n> they're 
n> small ... If you're inserting 100k blobs then you may want to rethink
n> things.

n> At 12 rows per second (given a relatively small row), 24hrs of usage will
n> still be less than the amount of harddrive churning involved in a single
n> reboot of your machine.  Consider that a fast app can insert about 1
n> million 
n> rows into a SQLite table in about 15 seconds.

n> Robert




-- 
Best regards,
 Teg  mailto:[EMAIL PROTECTED]



Re: [sqlite] Save my harddrive!

2006-01-27 Thread Robert Simpson
- Original Message - 
From: "nbiggs" <[EMAIL PROTECTED]>




This is what I am inserting per record.
Insert into table values(1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
'2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

Other than that, I do some updates on the last field by setting the
value to 1 or 2.




Looks like an NMEA GPS record :)

That will not remotely tax your HD.



RE: [sqlite] Save my harddrive!

2006-01-27 Thread nbiggs
This is what I am inserting per record.
Insert into table values(1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
'2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

Other than that, I do some updates on the last field by setting the
value to 1 or 2.


-Original Message-
From: Robert Simpson [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 27, 2006 12:06 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Save my harddrive!

- Original Message - 
From: "nbiggs" <[EMAIL PROTECTED]>
>
> My application generates about 12 records a second.  I have no problems
> storing the records into the database, but started thinking: if I
> commit every 12 records, will my hard drive eventually die from extreme
> usage?  During a 24 hour period up to 1 million records will be
> generated and inserted.  At the end of the day, all the records will be
> deleted and the inserts will start again for another 24 hours.
>
> Can I store the records into memory, or just not commit as often, maybe
> once every 5 minutes while still protecting my data in case of a PC
> crash or unexpected shutdown due to user ignorance?
>
> Does anyone have any ideas for this type of situation?

How large are these rows?  12 inserts a second is chump change if
they're 
small ... If you're inserting 100k blobs then you may want to rethink 
things.

At 12 rows per second (given a relatively small row), 24hrs of usage will
still be less than the amount of harddrive churning involved in a single
reboot of your machine.  Consider that a fast app can insert about 1
million 
rows into a SQLite table in about 15 seconds.

Robert



Re: [sqlite] Save my harddrive!

2006-01-27 Thread Robert Simpson
- Original Message - 
From: "nbiggs" <[EMAIL PROTECTED]>


My application generates about 12 records a second.  I have no problems
storing the records into the database, but started thinking: if I
commit every 12 records, will my hard drive eventually die from extreme
usage?  During a 24 hour period up to 1 million records will be
generated and inserted.  At the end of the day, all the records will be
deleted and the inserts will start again for another 24 hours.

Can I store the records into memory, or just not commit as often, maybe
once every 5 minutes while still protecting my data in case of a PC
crash or unexpected shutdown due to user ignorance?

Does anyone have any ideas for this type of situation?


How large are these rows?  12 inserts a second is chump change if they're 
small ... If you're inserting 100k blobs then you may want to rethink 
things.


At 12 rows per second (given a relatively small row), 24hrs of usage will 
still be less than the amount of harddrive churning involved in a single 
reboot of your machine.  Consider that a fast app can insert about 1 million 
rows into a SQLite table in about 15 seconds.


Robert
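
A throughput figure like that assumes the inserts are grouped into explicit
transactions: left in autocommit mode, SQLite commits, and with the default
settings syncs to disk, after every single INSERT.  A minimal sketch of the
batching pattern, assuming the hypothetical samples table from the sketch
above:

  BEGIN;
  INSERT INTO samples VALUES (1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
                              '2006012410052941', 12345, 0, 0, 0, 1, 1, 0);
  -- ... the rest of the batch, one INSERT per record ...
  COMMIT;   -- the whole batch reaches the disk in one commit

The same batching is what makes the "commit less often" idea in the original
question cheap.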




[sqlite] Save my harddrive!

2006-01-27 Thread nbiggs
My application generates about 12 records a second.  I have no problems
storing the records into the database, but started thinking: if I
commit every 12 records, will my hard drive eventually die from extreme
usage?  During a 24 hour period up to 1 million records will be
generated and inserted.  At the end of the day, all the records will be
deleted and the inserts will start again for another 24 hours.
 
Can I store the records into memory, or just not commit as often, maybe
once every 5 minutes while still protecting my data in case of a PC
crash or unexpected shutdown due to user ignorance?  
 
Does anyone have any ideas for this type of situation?
 
 
 
Nathan Biggs
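
One way to act on the "store the records into memory" idea (a sketch, not
something suggested in the thread) is to attach an in-memory database on the
same connection, collect new rows there, and copy them to the on-disk table
on a timer.  It assumes the hypothetical samples table from the sketches
above; mem and buf are likewise made-up names:

  ATTACH DATABASE ':memory:' AS mem;
  CREATE TABLE mem.buf AS SELECT * FROM samples WHERE 0;  -- same columns, empty

  -- each new record goes to the in-memory buffer (no per-row disk I/O)
  INSERT INTO mem.buf VALUES (1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
                              '2006012410052941', 12345, 0, 0, 0, 1, 1, 0);

  -- every 5 minutes, move the buffered rows to disk in one transaction
  BEGIN;
  INSERT INTO samples SELECT * FROM mem.buf;
  DELETE FROM mem.buf;
  COMMIT;

  -- end-of-day reset
  DELETE FROM samples;

Whatever is still sitting in the buffer when the PC crashes is lost, so the
flush interval is effectively the amount of data one accepts losing; simply
holding an ordinary on-disk transaction open for the same interval has the
same trade-off with less machinery.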