If you want the best performance and aren't required to use the plugin, I'd 
bet the fastest way to bulk load into Postgres is going to be files. If you 
are on the same machine as the server, you can COPY directly from a file, 
provided you have superuser access. If not, you can still launch psql and use 
the \copy command, which streams the file from the client side.
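A minimal sketch of the client-side route (table name "my_table", file "data.csv", and database "mydb" are placeholders, not names from this thread):

```shell
# \copy runs inside psql on the client: psql reads the local file and
# streams the rows over the connection, so no superuser access is needed.
printf '%s\n' "\\copy my_table from 'data.csv' with (format csv, header)" > copy.sql
cat copy.sql
# Run it with:
#   psql -d mydb -f copy.sql
#
# With superuser access on the server machine you can use server-side COPY
# instead; here the path is read by the *server* process, not by psql:
#   psql -d mydb -c "COPY my_table FROM '/srv/import/data.csv' WITH (FORMAT csv, HEADER)"
```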

Assuming you are on 4D v16 and the data can be segmented accordingly, you 
could even launch multiple preemptive workers in parallel to export, then use 
multiple connections to import. Each Postgres connection is served by its own 
backend process, so each can run on a separate CPU core, if available.
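One way to segment on the import side is to split the exported file into chunks and give each chunk its own connection. A sketch (all names are placeholders; the `split -n l/4` whole-line mode is GNU coreutils):

```shell
# Stand-in for the real exported file:
seq 1 100 > data.csv
# Split into 4 chunks of whole lines: chunk_00 .. chunk_03
split -n l/4 -d data.csv chunk_
ls chunk_*

# Each chunk then gets its own connection, hence its own server process
# and, when a core is free, its own core:
#   for f in chunk_*; do
#     psql -d mydb -c "\copy my_table from '$f' with (format csv)" &
#   done
#   wait
```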

John DeSoi, Ph.D.

> On Aug 1, 2017, at 2:42 PM, David Adams via 4D_Tech <4d_tech@lists.4d.com> 
> wrote:
> 
> I'm starting with standard rows with UUIDs, strings, longs, reals, and
> perhaps text. I need to go for maximum speed...most of the work on tuning
> is on the Postgres side. 

**********************************************************************
4D Internet Users Group (4D iNUG)
FAQ:  http://lists.4d.com/faqnug.html
Archive:  http://lists.4d.com/archives.html
Options: http://lists.4d.com/mailman/options/4d_tech
Unsub:  mailto:4d_tech-unsubscr...@lists.4d.com
**********************************************************************
