Re: [GENERAL] Basic Tutorials for 9.0

2010-11-14 Thread Alban Hertroys
On 15 Nov 2010, at 2:02, ray joseph wrote:

> Alban,
> 
> Thank you for your time and effort.  I can see that you are very familiar
> with this environment.
> 
> I have only used MS Access (for years).  My difficulties are very basic.
> When I said I can't view the data in the database, I meant basically - with
> any method.  The psql help shows many commands for displaying.  My basic
> difficulties are: Choosing the right one(s), determining whether I have used
> it correctly, knowing whether I have actually put data in the db.  
> 
> This is why I was looking for a basic tutorial, something that would guide
> me in detail through steps to achieve a goal.  I have read through the psql
> manual a couple times in the last two years but I have not been able to
> complete any task.  
> 
> A pointer to a detailed tutorial would be great.

Please include the mailing list in your replies; you'll get better help that 
way. Try to avoid top-posting as well - it makes it more difficult for people 
to get the context of your message and thus reduces the chance they'll answer; 
there's plenty of other mail on this list ;)

I'm not really sure what you're looking for; you didn't say what you tried in 
order to see your data, or what failed (and how) when you tried.

The built-in commands in psql and pgadmin that I referred to don't show you 
your data; they show your data structure. You can query your data using SQL 
commands.
I'm sure there are plenty of SQL tutorials around, if that's what you're looking 
for.

I'm guessing that if no error message was shown when you copied your data into 
the database, then you'll be able to see your data if you query your tables. 
None of us can tell you how to do that exactly, as we don't know your table 
structure, but you could get some general pointers (the SELECT statement is 
probably what you're looking for).
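
For example - a minimal sketch, assuming a hypothetical table named mytable:

SELECT count(*) FROM mytable;    -- how many rows actually made it in
SELECT * FROM mytable LIMIT 10;  -- a quick look at the first few rows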

Seeing your original message though, I wonder whether you did create any tables 
at all before you tried copying data into them?

> -Original Message-
> From: Alban Hertroys [mailto:dal...@solfertje.student.utwente.nl] 
> Sent: Saturday, November 13, 2010 6:12 AM
> To: ray
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Basic Tutorials for 9.0
> 
> On 13 Nov 2010, at 3:44, ray wrote:
> 
>> On Nov 10, 11:07 pm, robjsarg...@gmail.com (Rob Sargent) wrote:
>>> ray wrote:
 I also tried the shell.  create mydb.  I used all the defaults but 
 the console came back and rejected all the defaults and closed the 
 console.
>>> 
I would like to export an Excel file as CSV and 'copy' it into pg.  So 
I would like to learn how to create a new database and whatever 
goes along with that to achieve this goal.
>>> 
 I would appreciate all help.
>>> 
 Thanks,
 ray
>>> 
>>> if the defaults are in play the owning account should be able to do
>>> 
>>> createdb somedbname;
>>> 
>>> You left off the "db" in createdb in your posting.
> 
>> I really didn't understand.  I was trying to create a new db inside of 
>> the pg shell.  I found that I needed to do it at the OS command 
>> prompt.  This was totally unclear in the manual.
> 
> That's not true; it's just that the command you used wouldn't have worked in
> either place.
> 
> On the OS shell's command line (cmd in Windows) you use "createdb mydb" to
> create a database, in the psql shell you use "CREATE DATABASE mydb;".
> I expect the latter command would work in pgadmin too (although you may have
> to leave out the semi-colon), but it probably has a convenient menu option
> for creating databases somewhere.
> 
> I tend to install MSys in Windows so that I have a proper UNIX shell to run
> those commands from, but it's a bit hard to set up.
> 
>> Now I have found that the copy command is to be done in the shell.
> 
> If you're saying shell, do you mean your OS shell (cmd.exe) or are you
> talking about the psql shell?
> 
> To copy data you can use either the \copy command built into the psql shell,
> or you can use the COPY statement directly and copy from STDIN, followed by
> your CSV data and closed with a \. terminator.
> Check the documentation on COPY for details and examples.
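> 
> A minimal sketch of both forms, assuming a hypothetical table
> mytable(id integer, name text) and a file called data.csv:
> 
> -- client-side copy of a whole CSV file (the path is relative to psql, not the server):
> \copy mytable FROM 'data.csv' CSV HEADER
> 
> -- or the SQL-level COPY reading inline data from STDIN, ended by the \. terminator:
> COPY mytable FROM STDIN CSV;
> 1,apple
> 2,banana
> \.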
> 
>> Now if I could just find out if it is working.  I can't seem to look into
>> the db.
> 
> 
> I guess you're talking about pgadmin here, with which I'm not familiar. It's
> probably just not connected to your database, guessing from your earlier
> remark about the red X.
> Is the database server on the same machine as pgadmin? If not, you probably
> need to edit pg_hba.conf to allow access from the machine running pgadmin.
> 
> In the psql shell you can type \dt to see your tables (or \? for a list of
> built-in commands) and use SELECT statements to look at your data.
> 
> You could also try to hook up an ODBC connection to your database and look
> at it using Access or OpenOffice's variant of that. There are plenty more
> possibilities.
> 
> Alban Hertroys
> 
> --
> Screwing up is an excellent way to attach something t

Re: [GENERAL] Suggested swap size for new db?

2010-11-14 Thread Scott Marlowe
On Sun, Nov 14, 2010 at 8:34 PM, Evan D. Hoffman  wrote:
> I'm planning to migrate our pg db to a new machine in the next couple
> of weeks.  The current DB has 32 GB memory; the new one will have 96
> GB.  It's going to be Postgres 8.2.x (we're planning to upgrade to 8.4
> as part of another project) running on CentOS 5.4 or 5.5.  I know the
> old rule of thumb that your swap partition/disk should be equal to the
> physical memory, but when dealing with memory sizes greater than ~16
> GB that starts to seem strange to me; and now with 96 GB of physical
> memory I'm starting to wonder if I'd be better off forgoing swap
> altogether for the new database.

My servers were set up with something like 16G of swap on a machine
with 128G of RAM.  After running for about a month of heavy work, kswapd
decided to start thrashing the system for no apparent reason, and
would not stop.  The fix was sudo swapoff -a.  As soon as kswapd had
swapped everything back in, the server returned to normal.  Now,
swapoff -a is the last line in my /etc/rc.local on both machines.
Running Ubuntu 10.04 64 bit, btw.  Note that at the time this
happened I had 90+G of kernel cache, and nothing NEEDED to be swapped
out.  The Linux kernel's virtual memory has some rather odd behaviour
on machines with lots of memory, and I don't really need swap on these
machines.



[GENERAL] Suggested swap size for new db?

2010-11-14 Thread Evan D. Hoffman
I'm planning to migrate our pg db to a new machine in the next couple
of weeks.  The current DB has 32 GB memory; the new one will have 96
GB.  It's going to be Postgres 8.2.x (we're planning to upgrade to 8.4
as part of another project) running on CentOS 5.4 or 5.5.  I know the
old rule of thumb that your swap partition/disk should be equal to the
physical memory, but when dealing with memory sizes greater than ~16
GB that starts to seem strange to me; and now with 96 GB of physical
memory I'm starting to wonder if I'd be better off forgoing swap
altogether for the new database.

Are there any general suggestions for swap size for a Postgres DB?  I
assume that paging to/from disk is something that I'd want to avoid,
especially in a database (which is why I'm debating disabling it
altogether), but I'm not sure what the drawbacks would be (aside from
the obvious one of there being less total memory available to the
system).  Oracle actually has some guidelines for swap space in the
installation docs -
http://www.dba-oracle.com/t_server_swap_space_allocation.htm - but I
don't know of anything similar for Postgres, and I don't know if/how
having such a huge amount of memory affects those guidelines. 3/4 of 8
GB is 6 GB of swap, which seems reasonable; 3/4 of 96 GB is 72 GB,
which strikes me as excessive.

Any thoughts would be appreciated.

Thanks,
Evan



Re: [GENERAL] when postgres failed to recover after the crash...

2010-11-14 Thread Gabriele Bartolini

Hi,

> In general, it's a really bad idea to run PostgreSQL (or any other
> database) over file-level network storage like SMB/AFP/CIFS/NFS.
> Block-level network storage like iSCSI is generally OK, depending on the
> quality of the drivers in target and initiator.


What Craig says is true, and it might be worth reading the free 
chapter on "Database Hardware" from Greg's book on high performance, 
which you can download from 
http://blog.2ndquadrant.com/en/2010/10/postgresql-90-high-performance.html



> Before you do anything more, make a COMPLETE COPY of the entire data
> directory, including the pg_clog, pg_xlog, etc directories as well as
> the main database storage. Put it somewhere safe and do not touch it
> again, because it might be critical for recovery.

Yes - and if you have any tablespaces, do not forget about them. A 
cold backup in these cases is always a good thing.


Ciao,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it




Re: [GENERAL] when postgres failed to recover after the crash...

2010-11-14 Thread Craig Ringer
On 15/11/10 07:04, anypossibility wrote:
> I am running Postgres version 8.3 on OS X.
> The data directory is on a network volume.

What kind of network volume?

An AFP mount? SMB share? NFS? iSCSI?

In general, it's a really bad idea to run PostgreSQL (or any other
database) over file-level network storage like SMB/AFP/CIFS/NFS.
Block-level network storage like iSCSI is generally OK, depending on the
quality of the drivers in target and initiator.

> I understand that some updates were lost because they hadn't been
> written to the disk yet.
> However, it seems that records that were created a long time ago (but
> updated before the crash occurred) are completely missing (I am unable to
> find them even after a reindex is done).
> Does this make sense? Or is this impossible, and might the records still
> be somewhere on the disk?

Without details it is hard to know.

Before you do anything more, make a COMPLETE COPY of the entire data
directory, including the pg_clog, pg_xlog, etc directories as well as
the main database storage. Put it somewhere safe and do not touch it
again, because it might be critical for recovery.

In addition to the network file system type, provide the log files
generated by PostgreSQL when you post a follow-up. These might provide
some explanation of what is wrong.

There is a significant chance your database is severely corrupted if
you've been using a network file system that doesn't respect write
ordering and had it unexpectedly disconnect.

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/



[GENERAL] when postgres failed to recover after the crash...

2010-11-14 Thread anypossibility
I am running Postgres version 8.3 on OS X.
The data directory is on a network volume.
The network volume was disconnected and the server crashed.
The log reported that the last known up time was 9:30 pm (about 30 min prior to 
the server crash).
My checkpoint_segments setting is 3 (not sure if this is helpful info).
When Postgres tried to recover from the crash, the volume was still not mounted.
Once the storage volume was re-connected:
There was index corruption; I fixed all of that in single-user mode.
There were also some missing records... 
I understand that some updates were lost because they hadn't been written to 
the disk yet.
However, it seems that records that were created a long time ago (but updated 
before the crash occurred) are completely missing (I am unable to find them even 
after a reindex is done).
Does this make sense? Or is this impossible, and might the records still be 
somewhere on the disk? 
Thank you very much for your time in advance.


Re: [GENERAL] Why facebook used mysql ?

2010-11-14 Thread r t
On Sun, Nov 14, 2010 at 12:15 PM, Ron Mayer
wrote:

> Lincoln Yeoh wrote:
> > What's more important to such companies is the ability to scale over
> > multiple machines.
>
> That question - how much work it is to administer thousands of database
> servers - seems to have been largely missing from this conversation.
>
> Apparently back in 2008, Facebook had 1800 MySQL servers with 2 DBAs.[1]
>
> I wonder how that compares with large-scale Postgres deployments.
>
>
From a technology standpoint, it doesn't need to be substantially different,
provided you use Postgres in a way similar to how Facebook is using MySQL -
well, at least now; 8.4's re-implementation of the free space map was
critical for "zero-administration" type deployments. If you can script basic
failover deployments (remember that half of those 1800 are just slave
machines), you don't abstract storage from the app, and you keep the database
schema similar across nodes, you can really ramp up the number of deployed
servers per DBA.


Robert Treat
play: http://www.xzilla.net
work: http://www.omniti.com/is/hiring


Re: [GENERAL] ipv4 data type does not allow to use % as subnet mask delimiter

2010-11-14 Thread Tom Lane
Peter Eisentraut  writes:
> On sön, 2010-11-14 at 16:46 -0500, Tom Lane wrote:
>> I believe we looked into that some time ago and decided that the
>> behavior was too platform-dependent to be worth messing with.

> I suppose the problem is that the zone identifier could be almost any
> string, and storing that would upset the inet storage format.

That was one problem --- but since inet is already varlena, I think that
adding a string wouldn't be fatal in itself.  The real problem IMO is
that the specific strings aren't standardized, so an inet value that is
valid on one platform might not be valid on another.  Simple concepts
like comparing for equality also get hard if you don't know how the
platform actually interprets the strings.

> Then again, this is part of the IPv6 standard, so just giving up might
> not be sustainable in the long run.

Possibly someday the standard will actually standardize the things,
and then maybe we can work with them usefully ...

regards, tom lane



Re: [GENERAL] ipv4 data type does not allow to use % as subnet mask delimiter

2010-11-14 Thread Peter Eisentraut
On sön, 2010-11-14 at 16:46 -0500, Tom Lane wrote:
> Peter Eisentraut  writes:
> > On tor, 2010-11-11 at 20:33 +0200, Andrus wrote:
> >> Windows uses % as subnet mask delimiter.
> 
> > This is not a subnet mask but a zone index, but it should probably still
> > be supported.
> 
> I believe we looked into that some time ago and decided that the
> behavior was too platform-dependent to be worth messing with.

I suppose the problem is that the zone identifier could be almost any
string, and storing that would upset the inet storage format.

Then again, this is part of the IPv6 standard, so just giving up might
not be sustainable in the long run.




Re: [GENERAL] ipv4 data type does not allow to use % as subnet mask delimiter

2010-11-14 Thread Tom Lane
Peter Eisentraut  writes:
> On tor, 2010-11-11 at 20:33 +0200, Andrus wrote:
>> Windows uses % as subnet mask delimiter.

> This is not a subnet mask but a zone index, but it should probably still
> be supported.

I believe we looked into that some time ago and decided that the
behavior was too platform-dependent to be worth messing with.

regards, tom lane



[GENERAL] streaming replication feature request

2010-11-14 Thread Scott Ribe
How about supporting something like:

wal_keep_segments = '7d'
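
For reference, the existing setting counts 16 MB WAL segments rather than time,
so a time-based retention currently has to be approximated by hand - a rough
sketch with purely illustrative numbers:

# keep roughly 7 days of WAL, assuming ~2 GB of WAL generated per day:
# 7 days * 2048 MB/day / 16 MB per segment = 896 segments
wal_keep_segments = 896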

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] ipv4 data type does not allow to use % as subnet mask delimiter

2010-11-14 Thread Peter Eisentraut
On tor, 2010-11-11 at 20:33 +0200, Andrus wrote:
> Windows uses % as subnet mask delimiter.

This is not a subnet mask but a zone index, but it should probably still
be supported.
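
For reference, a sketch of the form under discussion (address and zone are
illustrative); the inet type currently rejects it:

-- link-local IPv6 address with a zone index (an interface name, or on Windows
-- a numeric interface index):
SELECT 'fe80::1%eth0'::inet;   -- fails with "invalid input syntax for type inet"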





[GENERAL] PostgreSQL 9.0 RPMs for RHEL 6 and Fedora 14 released

2010-11-14 Thread Devrim GÜNDÜZ

I just released PostgreSQL 9.0 RPMs for Red Hat Enterprise Linux 6 and
Fedora 14, on both x86 and x86_64.

Please note that the 9.0 packages have a different layout compared to
previous ones. You may want to read this blog post first:

http://people.planetpostgresql.org/devrim/index.php?/archives/48-What-is-new-in-PostgreSQL-9.0-RPMs.html

Installing PostgreSQL 9.0 on these platforms is quite easy. First,
install the repository RPM from here:

http://yum.pgrpms.org/reporpms/repoview/letter_p.group.html

Then, 

yum groupinstall "PostgreSQL Database Server PGDG" 

will install the minimal package set for you.

Here are all packages that have been released so far:

RHEL 6:

http://yum.pgrpms.org/9.0/redhat/rhel-6-i386/repoview/
http://yum.pgrpms.org/9.0/redhat/rhel-6-x86_64/repoview/

Fedora 14:

http://yum.pgrpms.org/9.0/fedora/fedora-14-i386/repoview/
http://yum.pgrpms.org/9.0/fedora/fedora-14-x86_64/repoview/

If you find any issues with the repository or packaging, please send an
e-mail to me.

Regards,
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread Adrian Klaver
On Sunday 14 November 2010 4:44:53 am franrtorres77 wrote:
> Hi there
>
> I need to periodically add some data from a remote mysql database into our
> postgresql database. So, does anyone know how to do it, bearing in mind that
> it must be run every minute or so to add new records to the
> postgresql?
>
> Best regards

Some questions.
1) Are you only pulling records from the MySQL db that are not in the Pg db?
What about previously pulled records that have changed in MySQL - are those
changes going to be propagated to Pg?
What about deleted records?
2) As mentioned in another post, what about data cleanup?
For instance, MySQL's '0000-00-00' dates, or empty strings in integer fields?
(See the sketch below for the kind of thing I mean.)
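
For what it's worth, a sketch of that kind of cleanup done on the Pg side,
assuming the raw rows were first loaded into a text-typed staging table
(table and column names are illustrative):

INSERT INTO orders (id, order_date, qty)
SELECT id::integer,
       NULLIF(order_date, '0000-00-00')::date,  -- MySQL zero dates become NULL
       NULLIF(qty, '')::integer                 -- empty strings become NULL
FROM raw_orders;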

I have done this using a Python script. Not on a minute-to-minute basis, but I 
could see doing it either using sleep() or by calling the script from a cron job.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Why facebook used mysql ?

2010-11-14 Thread Ron Mayer
Lincoln Yeoh wrote:
> What's more important to such companies is the ability to scale over
> multiple machines.

That question - how much work it is to administer thousands of database
servers - seems to have been largely missing from this conversation.

Apparently back in 2008, Facebook had 1800 MySQL servers with 2 DBAs.[1]

I wonder how that compares with large-scale Postgres deployments.

  Ron

[1] http://perspectives.mvdirona.com/2008/04/22/1800MySQLServersWithTwoDBAs.aspx




Re: [GENERAL] Comments on tables

2010-11-14 Thread Pasman
> do $$
> begin
>   execute 'COMMENT ON TABLE test_count is ''Updated ' || current_date || '''';
> end$$;
>

thanks, it works cool.


pasman



Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread Allan Kamau
On Sun, Nov 14, 2010 at 4:12 PM, Leif Biberg Kristensen
 wrote:
> On Sunday 14. November 2010 13.44.53 franrtorres77 wrote:
>>
>> Hi there
>>
>> I need to periodically add some data from a remote mysql database into our
>> postgresql database. So, does anyone know how to do it, bearing in mind that
>> it must be run every minute or so to add new records to the
>> postgresql?
>
> It should be trivial to write a Perl script that pulls the data from MySQL,
> inserts them into PostgreSQL, and then goes to sleep for 60 seconds.
>
> regards,
> Leif B. Kristensen
>



I would recommend first exporting to CSV (or another text format), e.g. with
MySQL's SELECT ... INTO OUTFILE (then use sed or another scripting
tool/language to transform/clean up the data), and then loading that
file (or STDIN) using the COPY command.
Why?
1) COPY is a fast way to bulk-load data.
2) The CSV file may come in handy when debugging/testing/auditing,
providing "right from the horse's mouth" data when all is not
well.

You may want to split the file at some number-of-lines threshold
(the split command may help) to avoid large transactions.
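
A minimal sketch of the loading side in psql, assuming a hypothetical staging
table and file name, and an id column to match on (all names illustrative):

-- client-side bulk load of the exported file:
\copy mysql_staging FROM 'mysql_export.csv' CSV

-- then merge only the rows that are not already in the real table:
INSERT INTO mytable
SELECT s.*
FROM mysql_staging s
WHERE NOT EXISTS (SELECT 1 FROM mytable m WHERE m.id = s.id);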


Allan.



Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread Leif Biberg Kristensen
On Sunday 14. November 2010 14.33.39 franrtorres77 wrote:
> 
> well, I know how to query mysql, but what I don't know is how to then write
> that data on the fly to postgresql

I'd also like to say that it's an interesting question, and a lot of 
people (including me) might want to take a stab at the solution.

If you can tell what the data looks like coming from MySQL, and the 
corresponding table structure in PostgreSQL, you may well get a much 
more detailed reply.

regards,
Leif B. Kristensen



Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread Leif Biberg Kristensen
On Sunday 14. November 2010 14.33.39 franrtorres77 wrote:
> 
> well, I know how to query mysql, but what I don't know is how to then write
> that data on the fly to postgresql

The DBD::Pg package has excellent documentation: 


regards,
Leif B. Kristensen



Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread franrtorres77

Well, I know how to query mysql, but what I don't know is how to then write
that data on the fly to postgresql.


Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread franrtorres77

So, do you know where I can find an example of that?

Fran


On 14 November 2010 14:13, Leif Biberg Kristensen [via PostgreSQL] <
ml-node+3264406-1436673590-144...@n5.nabble.com
> wrote:

> On Sunday 14. November 2010 13.44.53 franrtorres77 wrote:
> >
> > Hi there
> >
> > I need to periodically add some data from a remote mysql database into our
> > postgresql database. So, does anyone know how to do it, bearing in mind
> > that it must be run every minute or so to add new records to the
> > postgresql?
>
> It should be trivial to write a Perl script that pulls the data from MySQL,
> inserts them into PostgreSQL, and then goes to sleep for 60 seconds.
>
> regards,
> Leif B. Kristensen
>



Re: [GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread Leif Biberg Kristensen
On Sunday 14. November 2010 13.44.53 franrtorres77 wrote:
> 
> Hi there
> 
> I need to periodically add some data from a remote mysql database into our
> postgresql database. So, does anyone know how to do it, bearing in mind that
> it must be run every minute or so to add new records to the
> postgresql?

It should be trivial to write a Perl script that pulls the data from MySQL,
inserts them into PostgreSQL, and then goes to sleep for 60 seconds.

regards,
Leif B. Kristensen



[GENERAL] Adding data from mysql to postgresql periodically

2010-11-14 Thread franrtorres77

Hi there

I need to periodically add some data from a remote mysql database into our
postgresql database. So, does anyone know how to do it, bearing in mind that
it must be run every minute or so to add new records to the
postgresql?

Best regards