Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Paul Lovejoy via 4D_Tech
We have an SSD. A high-speed RAID is another story.


> On 23 Apr 2018, at 10:59, Wayne Stewart via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
>> I don’t have the luxury of installing an SSD RAID.
> 
> A 512GB SSD is less than $200
> https://www.amazon.com/Samsung-500GB-Internal-MZ-76E500B-AM/dp/B0781Z7Y3S/
> 
> or just over if you go for the pro version with the 5-year warranty
> https://www.amazon.com/Samsung-512GB-Inch-Internal-MZ-76P512BW/dp/B07836C6YV/
> 
> 
> Regards,
> 
> Wayne
> about.me/waynestewart

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread npdennis via 4D_Tech
> You know the old saying: “A watched pot never boils.” 
> Watching 4D reload the same table with 7 million records 8 times to generate 
> 8 indexes is kind of like that.

Later versions of 4D optimized index rebuilding by caching the records and 
building all of the indexes one table at a time. If there is enough cache (memory), 
the second through eighth indexes are much quicker because 4D doesn't need to 
reload the table's records.

Long story short: since you can't get an SSD RAID, maybe increase the server 
memory and the cache size for reindexing.
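
As a rough back-of-envelope (a Python sketch with assumed record and index-entry 
sizes, purely for illustration), this is the kind of cache a table-at-a-time pass 
over a 7-million-record table with 8 indexes would want:

# Estimate of the cache needed so the table and all of its indexes can stay in
# memory while reindexing. The per-record and per-entry sizes are assumptions.
records          = 7_000_000   # records in the big table (from the thread)
index_count      = 8           # indexes on that table (from the thread)
avg_record_bytes = 500         # assumed average record size
avg_entry_bytes  = 40          # assumed average index entry (key + record address)

record_gb = records * avg_record_bytes / 1024**3
index_gb  = records * avg_entry_bytes * index_count / 1024**3
print(f"records ~{record_gb:.1f} GB, indexes ~{index_gb:.1f} GB, "
      f"cache wanted ~{record_gb + index_gb:.1f} GB")
# With these guesses: about 3.3 GB of records plus 2.1 GB of index entries,
# i.e. more cache than a 32-bit server process can offer.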

Neil



Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Wayne Stewart via 4D_Tech
> I don’t have the luxury of installing an SSD RAID.

A 512GB SSD is less than $200
https://www.amazon.com/Samsung-500GB-Internal-MZ-76E500B-AM/dp/B0781Z7Y3S/

or just over if you go for the pro version with the 5-year warranty
https://www.amazon.com/Samsung-512GB-Inch-Internal-MZ-76P512BW/dp/B07836C6YV/

Regards,

Wayne
about.me/waynestewart




Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-23 Thread Arnaud de Montard via 4D_Tech

> On 23 Apr 2018, at 07:51, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> Chuck,
> 
> You know the old saying: “A watched pot never boils.” 
> Watching 4D reload the same table with 7 million records 8 times to generate 
> 8 indexes is kind of like that.
> 
> I don’t have the luxury of installing an SSD RAID. I’m sure better hardware 
> would help.

It's not only the RAID that makes it better; a small cache is a mess… On a 32-bit 
server, watching 4D go back and forth between tables sucks. 

-- 
Arnaud de Montard 






Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Chuck,

You know the old saying: “A watched pot never boils.” 
Watching 4D reload the same table with 7 million records 8 times to generate 8 
indexes is kind of like that.

I don’t have the luxury of installing an SSD RAID. I’m sure better hardware 
would help.

 

Paul


> On 23 Apr 2018, at 04:20, Chuck Miller via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> OK, I don't get it. We have a database of over 200 GB with many millions of 
> records. We run SSDs in an Areca RAID, 1-terabyte SSDs. I can tell you that 
> when we restored from a backup and rebuilt indices, it took no more than 20 
> minutes to do so. It does not matter whether it is by table or not. Yes, I 
> agree it would be faster by table, but with a database of this size you 
> really need to be running on an SSD RAID.
> 
> Regards
> 
> Chuck
> 


Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Chuck Miller via 4D_Tech
OK, I don't get it. We have a database of over 200 GB with many millions of 
records. We run SSDs in an Areca RAID, 1-terabyte SSDs. I can tell you that when 
we restored from a backup and rebuilt indices, it took no more than 20 minutes 
to do so. It does not matter whether it is by table or not. Yes, I agree it 
would be faster by table, but with a database of this size you really need to 
be running on an SSD RAID.

Regards

Chuck

Chuck Miller                Voice: (617) 739-0306
Informed Solutions, Inc.    Fax: (617) 232-1064
mailto:cjmiller@informed-solutions.com
Brookline, MA 02446 USA     Registered 4D Developer
Providers of 4D and Sybase connectivity
http://www.informed-solutions.com


> On Apr 22, 2018, at 6:40 AM, Arnaud de Montard via 4D_Tech 
> <4d_tech@lists.4d.com> wrote:
> 
>> 
>> I guess our database of 120gb, running on a 32 bit machine doesn’t benefit 
>> from this. We were working on the most recent R release and yesterday we 
>> went to 15.5.
>> When reindexing the entire database, 4D Server/4D are still going index by 
>> index instead of table by table.


Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Arnaud de Montard via 4D_Tech

> On 22 Apr 2018, at 11:08, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> Hi,
> 
> I guess our database of 120gb, running on a 32 bit machine doesn’t benefit 
> from this. We were working on the most recent R release and yesterday we went 
> to 15.5.
> When reindexing the entire database, 4D Server/4D are still going index by 
> index instead of table by table.

You can reproduce the described behaviour yourself:
1/ create and keep an index description file
2/ open the data file with an index-depleted structure
3/ after On open database, create the indexes table by table, synchronously
It works fine, but it sucks because you need to produce the structure in step 2. 
Some code for steps 1/ & 3/ here:

A related feature request here:
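
Purely as an illustration of the logic (a Python sketch, not 4D, and not the 
code referred to above; the JSON file format, the table names and the 
create_index() helper standing in for 4D's CREATE INDEX are all assumptions):

# Sketch of steps 1/ and 3/: save an index description, then recreate the
# indexes grouped by table so each table only has to be read once.
import json
from collections import defaultdict

def save_index_description(indexes, path="index_description.json"):
    # Step 1/: persist (table, field, index type) for every index in the structure.
    with open(path, "w") as f:
        json.dump(indexes, f)

def rebuild_indexes_by_table(path="index_description.json"):
    # Step 3/: group the saved descriptions by table and create each table's
    # indexes together, synchronously, before moving on to the next table.
    with open(path) as f:
        indexes = json.load(f)
    by_table = defaultdict(list)
    for table, field, index_type in indexes:
        by_table[table].append((field, index_type))
    for table, fields in sorted(by_table.items()):
        for field, index_type in fields:
            create_index(table, field, index_type)  # stand-in for 4D's CREATE INDEX

def create_index(table, field, index_type):
    # Placeholder so the sketch runs; in 4D this would be the real index creation.
    print(f"indexing {table}.{field} ({index_type})")

if __name__ == "__main__":
    # Made-up example structure.
    save_index_description([["Invoices", "CustomerID", "btree"],
                            ["Invoices", "InvoiceDate", "btree"],
                            ["Customers", "Name", "btree"]])
    rebuild_indexes_by_table()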


-- 
Arnaud de Montard 




Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Keisuke Miyako via 4D_Tech
15.5 may be a newer release than 15Rx, but it is an older branch.

15.0 ➡︎ 15.1 ➡︎ 15.2 ➡︎ 15.3 ➡︎ 15.4 ➡︎ 15.5
┗15R2
┗15R3
┗15R4
┗15R5
┗16.0 ➡︎ 16.1 ➡︎ 16.2 ➡︎ 16.3

left to right: bug fixes
top to bottom: new features

> On 2018/04/22, at 18:08, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> Or maybe I missed something.



Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Hi,

I guess our database of 120gb, running on a 32 bit machine doesn’t benefit from 
this. We were working on the most recent R release and yesterday we went to 
15.5.
When reindexing the entire database, 4D Server/4D are still going index by 
index instead of table by table.

Or maybe I missed something.


Cheers,


Paul


> On 21 Apr 2018, at 21:43, Keisuke Miyako via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> you can read about it in the 15R4 upgrade ref.
> 
> ftp://ftp-public.4d.fr/Documents/Products_Documentation/LastVersions/Line_15R4/VIntl/4D_Upgrade_v15_R4.pdf
> 
> p.62
> 


Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-22 Thread Paul Lovejoy via 4D_Tech
Hi Jeff,


We are on 15.5 now. I don’t see any R releases which are more recent. v15.5 
does not do this…


Paul


> On 21 Apr 2018, at 17:06, Jeffrey Kain via 4D_Tech <4d_tech@lists.4d.com> wrote:
> 
> 4D implemented this very improvement in the v15 R releases.
> 
> --
> Jeffrey Kain
> jeffrey.k...@gmail.com
> 
> 
>> On Apr 21, 2018, at 8:04 AM, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> 
>> wrote:
>> 
>> I guess you could call this a gripe but I don’t understand why, if a table 
>> has several indexes, 4D can’t load the data once and generate all the 
>> indexes for that table in one go. It would presumably allow for reindexing a 
>> database much quicker.
> 

Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-21 Thread Keisuke Miyako via 4D_Tech
you can read about it in the 15R4 upgrade ref.

ftp://ftp-public.4d.fr/Documents/Products_Documentation/LastVersions/Line_15R4/VIntl/4D_Upgrade_v15_R4.pdf

p.62

> In 4D v15 R4, we have greatly optimized the algorithm for global reindexing 
> of a database. The whole process has been dramatically accelerated, and can 
> be up to two times faster.
>
> Note: A global reindexing is required, for example, after a database repair 
> or when the .4dindx file has been deleted.
>
> Since each record of each indexed table needs to be loaded in memory during 
> indexing, the optimization aimed at minimizing disk swaps. This operation is 
> now performed on each table sequentially, which reduces record loading and 
> unloading operations.
>
> In a perfect scenario, the cache would be large enough to contain the whole 
> data file and index --- in this case there would be no speed improvement by 
> the new algorithm. However, the available server memory is usually not that 
> big. If the cache is large enough to hold at least the largest table and its 
> index(es), the new algorithm will be up to twice as fast as before.
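
A toy Python model of why that ordering matters (made-up figures, not 4D 
internals):

# Record loads during a full reindex, index-by-index versus table-by-table.
tables = {
    "TableA": {"records": 7_000_000, "indexes": 8},   # made-up figures
    "TableB": {"records": 500_000, "indexes": 3},
}

# Old order: every index triggers its own pass over its table's records.
loads_per_index = sum(t["records"] * t["indexes"] for t in tables.values())

# New order (described above): one pass per table builds all of its indexes,
# provided the cache can hold that table and its indexes.
loads_per_table = sum(t["records"] for t in tables.values())

print(loads_per_index, loads_per_table)   # 57,500,000 vs 7,500,000 record loads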




Re: A thought on re-indexing of a large database after repairs, compacting etc

2018-04-21 Thread Jeffrey Kain via 4D_Tech
4D implemented this very improvement in the v15 R releases.

--
Jeffrey Kain
jeffrey.k...@gmail.com


> On Apr 21, 2018, at 8:04 AM, Paul Lovejoy via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> I guess you could call this a gripe but I don’t understand why, if a table 
> has several indexes, 4D can’t load the data once and generate all the indexes 
> for that table in one go. It would presumably allow for reindexing a database 
> much quicker.


A thought on re-indexing of a large database after repairs, compacting etc

2018-04-21 Thread Paul Lovejoy via 4D_Tech
Hi,

I’m working with a pretty big database, with about 120gb of data and about 45 
million records over about 250 tables. This database has been in use for about 
20 years and is growing ever faster.

I guess you could call this a gripe but I don’t understand why, if a table has 
several indexes, 4D can’t load the data once and generate all the indexes for 
that table in one go. It would presumably allow for reindexing a database much 
quicker.

Just my 2 cents worth. Maybe someone from the 4D engineering team could explain 
why I’m wrong.



Paul