On Dec 11, 2012 2:25 PM, "Adrian Klaver" wrote:
>
> On 12/11/2012 01:58 PM, Mihai Popa wrote:
>>
>> On Tue, 2012-12-11 at 10:00 -0800, Jeff Janes wrote:
>>>
>>> On Mon, Dec 10, 2012 at 12:26 PM, Mihai Popa wrote:
Hi,
I've recently inherited a project that involves importing a
Another question is whether there's a particular reason that you're
converting to CSV prior to importing the data?
All major ETL tools that I know of, including the major open source
ones (Pentaho / Talend) can move data directly from Access databases
to Postgresql.
Yes, I wish somebody ask
On 12/11/2012 01:58 PM, Mihai Popa wrote:
On Tue, 2012-12-11 at 10:00 -0800, Jeff Janes wrote:
On Mon, Dec 10, 2012 at 12:26 PM, Mihai Popa wrote:
Hi,
I've recently inherited a project that involves importing a large set of
Access mdb files into a Postgres or MySQL database.
The process is to
On 12/11/2012 1:58 PM, Mihai Popa wrote:
1TB of storage sounds desperately small for loading 300GB of csv files.
really? that's good to know; I wouldn't have guessed
on many of our databases, the indexes are as large as the tables.
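For illustration only (not part of the thread): a quick way to see how table and index footprints compare, assuming PostgreSQL 9.0 or later:

    -- Per-table heap vs. index size, largest relations first
    SELECT relname,
           pg_size_pretty(pg_relation_size(oid))  AS table_size,
           pg_size_pretty(pg_indexes_size(oid))   AS index_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 20;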
On Tue, 2012-12-11 at 14:28 -0700, David Boreham wrote:
> Try SoftLayer instead for physical machines delivered on-demand :
> http://www.softlayer.com/dedicated-servers/
>
> If you're looking for low cost virtual hosting alternative to Amazon,
> try Rackspace.
Thank you, I will
regards,
--
On Tue, 2012-12-11 at 10:00 -0800, Jeff Janes wrote:
> On Mon, Dec 10, 2012 at 12:26 PM, Mihai Popa wrote:
> > Hi,
> >
> > I've recently inherited a project that involves importing a large set of
> > Access mdb files into a Postgres or MySQL database.
> > The process is to export the mdb's to comm
On 12/11/2012 2:03 PM, Mihai Popa wrote:
I actually looked at Linode, but Amazon looked more competitive...
Checking Linode's web site just now it looks like they have removed
physical machines as an option.
Try SoftLayer instead for physical machines delivered on-demand :
http://www.softlayer.
On Tue, 2012-12-11 at 09:47 -0700, David Boreham wrote:
> Finally, note that there is a middle-ground available between cloud
> hosting and outright machine purchase -- providers such as Linode and
> SoftLayer will sell physical machines in a way that gives much of the
> convenience of cloud ho
On Dec 11, 2012, at 2:25 AM, Chris Angelico wrote:
> On Tue, Dec 11, 2012 at 7:26 AM, Mihai Popa wrote:
>> Second, where should I deploy it? The cloud or a dedicated box?
>
> Forget cloud. For similar money, you can get dedicated hosting with
> much more reliable performance. We've been looking
On Mon, Dec 10, 2012 at 12:26 PM, Mihai Popa wrote:
> Hi,
>
> I've recently inherited a project that involves importing a large set of
> Access mdb files into a Postgres or MySQL database.
> The process is to export the mdb's to comma separated files then import
> those into the final database.
>
On 12/11/2012 8:28 AM, Mihai Popa wrote:
I guess Chris was right, I have to better understand the usage pattern
and do some testing of my own.
I was just hoping my hunch about Amazon being the better alternative
would be confirmed, but this does not
seem to be the case; most of you recommend pu
On 12/11/2012 07:27 AM, Bill Moran wrote:
On Mon, 10 Dec 2012 15:26:02 -0500 (EST) "Mihai Popa" wrote:
Hi,
I've recently inherited a project that involves importing a large set of
Access mdb files into a Postgres or MySQL database.
The process is to export the mdb's to comma separated files t
On 12/10/2012 1:26 PM, Mihai Popa wrote:
Second, where should I deploy it? The cloud or a dedicated box?
Amazon seems like the sensible choice; you can scale it up and down as
needed and backup is handled automatically.
I was thinking of an x-large RDS instance with 1 IOPS and 1 TB of
stora
Hi,
If you have a big table you could also think about Hadoop/HBase or Cassandra, but
do not put a large data set in MySQL. I agree with Bill that "Despite the fact
that lots of people have been able to make it (MySQL) work" (me too, another
example), there are issues with it. I have been using My
On Mon, 10 Dec 2012 15:26:02 -0500 (EST) "Mihai Popa" wrote:
> Hi,
>
> I've recently inherited a project that involves importing a large set of
> Access mdb files into a Postgres or MySQL database.
> The process is to export the mdb's to comma separated files then import
> those into the final d
On Tue, Dec 11, 2012 at 9:33 PM, Gavin Flower wrote:
>
> On Tue, Dec 11, 2012 at 7:26 AM, Mihai Popa wrote:
> > Second, where should I deploy it? The cloud or a dedicated box?
>
> Would you say the issue is cloudy?
> (I'm not being entirely facetious!)
*Groan* :)
It's certainly not clear-cut in
Hi all,
On 12/11/2012 11:02 AM, Jan Kesten wrote:
>> I would very much appreciate a copy or a link to these slides!
> here they are:
>
> http://www.scribd.com/mobile/doc/61186429
>
thank you very much!
Johannes
On 11/12/12 23:25, Chris Angelico wrote:
On Tue, Dec 11, 2012 at 7:26 AM, Mihai Popa wrote:
Second, where should I deploy it? The cloud or a dedicated box?
Forget cloud. For similar money, you can get dedicated hosting with
much more reliable performance. We've been looking at places to deploy
On Tue, Dec 11, 2012 at 7:26 AM, Mihai Popa wrote:
> Second, where should I deploy it? The cloud or a dedicated box?
Forget cloud. For similar money, you can get dedicated hosting with
much more reliable performance. We've been looking at places to deploy
a new service, and to that end, we booked
Hi all,
> I would very much appreciate a copy or a link to these slides!
here they are:
http://www.scribd.com/mobile/doc/61186429
Have fun!
Hello Jan, hello List
On 12/11/2012 09:10 AM, Jan Kesten wrote:
> There are some slides from Sun/Oracle about ZFS, ZIL, SSD and
> PostgreSQL performance (I can look if I find them if needed).
I would very much appreciate a copy or a link to these slides!
Johannes
Hi Mihai.
> We are now at the point where the csv files are all created and amount
> to some 300 GB of data.
> I would like to get some advice on the best deployment option.
First - and maybe best - advice: Do some testing on your own and plan
some time for this.
> First, the project has been s
Hi,
On 11 December 2012 07:26, Mihai Popa wrote:
> First, the project has been started using MySQL. Is it worth switching
> to Postgres and if so, which version should I use?
You should consider several things:
- do you have in-depth MySQL knowledge in your team?
- do you need any sql_mode "fe
Hi,
I've recently inherited a project that involves importing a large set of
Access mdb files into a Postgres or MySQL database.
The process is to export the mdb's to comma separated files then import
those into the final database.
We are now at the point where the csv files are all created and am
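For readers of the archive, a minimal sketch of that import step, with an invented table and file name (psql's \copy reads the file on the client side; plain COPY needs a path readable by the server; option syntax shown is 9.0+):

    -- Load one exported CSV into a pre-created table (names are made up here)
    CREATE TABLE orders_import (order_id integer, customer text, total numeric);
    \copy orders_import FROM 'orders.csv' WITH (FORMAT csv, HEADER)
    -- server-side equivalent:
    -- COPY orders_import FROM '/data/csv/orders.csv' WITH (FORMAT csv, HEADER);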
"Thomas F. O'Connell" writes:
I'm dealing with a database where there are ~150,000 rows in
information_schema.tables. I just tried to do a \d, and it came back
with this:
ERROR: cache lookup failed for relation [oid]
Is this indicative of corruption, or is it possibly a resource issue?
I'm working with these guys to resolve the immediate issue, but I
suspect there's a race condition somewhere in the code.
What's happened is that OIDs have been changed in the system. There's
not a lot of table DDL that happens, but there is a substantial
amount of view DDL that can take pl
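For illustration (not part of the original exchange), a hedged check of whether the OID from the error still exists in the catalogs; 123456 stands in for the reported oid:

    SELECT oid, relname, relkind
    FROM pg_class
    WHERE oid = 123456;  -- the oid from "cache lookup failed for relation [oid]"
    -- An empty result suggests the relation was dropped and recreated with a new OID
    -- between psql's catalog scan and the failing lookup.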
I originally sent this message from my gmail account yesterday as we
were having issues with our work mail servers, but seeing
that it hasn't made it to the lists yet, I'm resending from my
registered address. You have my apologies if you receive this twice.
"Thomas F. O'Connell"
"Thomas F. O'Connell" <[EMAIL PROTECTED]> writes:
> I'm dealing with a database where there are ~150,000 rows in
> information_schema.tables. I just tried to do a \d, and it came back
> with this:
> ERROR: cache lookup failed for relation [oid]
> Is this indicative of corruption, or is it po
I'm dealing with a database where there are ~150,000 rows in
information_schema.tables. I just tried to do a \d, and it came back
with this:
ERROR: cache lookup failed for relation [oid]
Is this indicative of corruption, or is it possibly a resource issue?
I don't see a lot of evidence of
Lee Keel wrote:
So then the best way to do this kind of backup/restore is to use pg_dump?
Is there any plan in the future to be able to do some sort of file-level
backup like SqlServer?
Oh you *can* do a file-level backup, but only of the entire cluster. If
you have information shared between
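For context, a hedged sketch of the cluster-wide online base backup of that era (8.x function names; PostgreSQL 15+ renames these to pg_backup_start/pg_backup_stop), assuming WAL archiving is already configured:

    -- Run as superuser; copy the whole data directory at the OS level between the calls.
    SELECT pg_start_backup('nightly base backup');
    -- ... tar/rsync $PGDATA here ...
    SELECT pg_stop_backup();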
Richard Huxton escribió:
> Alvaro Herrera wrote:
> >Richard Huxton escribió:
> >>Alvaro Herrera wrote:
> >>>Lee Keel escribió:
> So then the best way to do this kind of backup/restore is to use
> pg_dump?
> Is there any plan in the future to be able to do some sort of file-level
>
Alvaro Herrera wrote:
Richard Huxton escribió:
Alvaro Herrera wrote:
Lee Keel escribió:
So then the best way to do this kind of backup/restore is to use pg_dump?
Is there any plan in the future to be able to do some sort of file-level
backup like SqlServer?
Actually you can do single database
Richard Huxton escribió:
> Alvaro Herrera wrote:
> >Lee Keel escribió:
> >>So then the best way to do this kind of backup/restore is to use pg_dump?
> >>Is there any plan in the future to be able to do some sort of file-level
> >>backup like SqlServer?
> >
> >Actually you can do single databases, b
Alvaro Herrera wrote:
Lee Keel escribió:
So then the best way to do this kind of backup/restore is to use pg_dump?
Is there any plan in the future to be able to do some sort of file-level
backup like SqlServer?
Actually you can do single databases, but you must also include some
other director
Lee Keel escribió:
> So then the best way to do this kind of backup/restore is to use pg_dump?
> Is there any plan in the future to be able to do some sort of file-level
> backup like SqlServer?
Actually you can do single databases, but you must also include some
other directories besides the data
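As a hedged illustration of why a single database's files are not self-contained: each database is a subdirectory of base/ named after its OID, while cluster-wide state (WAL, commit status, shared catalogs) lives outside it:

    -- Map database names to their directories under $PGDATA/base/
    SELECT oid, datname FROM pg_database;
    -- e.g. a database with oid 16384 lives in $PGDATA/base/16384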
Keel
Cc: Michael Nolan; Ron Johnson; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Large Database Restore
Lee Keel wrote:
> Thanks to everyone for their input on this. After reading all the emails
> and some of the documentation (section 23.3), I think this is all a little
> more th
Lee Keel wrote:
Thanks to everyone for their input on this. After reading all the emails
and some of the documentation (section 23.3), I think this is all a little
more than what I need. My database is basically read-only and all I was
looking to do is to be able to take snap-shots of it and be
s new directory
to their database list, or will it just automatically show up when they
refresh the service?
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael Nolan
Sent: Thursday, May 17, 2007 7:03 PM
To: Ron Johnson; pgsql-general@postgresql.org
Subject: Re: [GENERAL]
On 5/17/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
On 05/17/07 16:49, Michael Nolan wrote:
> I don't know if my database is typical (as there probably is no such
> thing), but to restore a full dump (pg_dumpall) takes over 4 hours on my
> backup ser
On 05/17/07 16:49, Michael Nolan wrote:
> I don't know if my database is typical (as there probably is no such
> thing), but to restore a full dump (pg_dumpall) takes over 4 hours on my
> backup server, but to restore a low level backup (about 35GB)
I don't know if my database is typical (as there probably is no such thing),
but to restore a full dump (pg_dumpall) takes over 4 hours on my backup
server, but to restore a low level backup (about 35GB) and then process 145
WAL files (since Tuesday morning when the last low level backup was run)
Yes, but that's not always a valid assumption.
And still PITR must update the index at each "insert", which is much
slower than the "bulk load then create index" of pg_dump.
On 05/17/07 16:01, Ben wrote:
> Yes, but the implication is that large datab
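For illustration (invented names), the ordering pg_dump's output relies on: load the bare table first and build the index once at the end, instead of maintaining it row by row:

    CREATE TABLE book_word (book_id integer, word text);
    COPY book_word FROM '/tmp/book_word.dat';             -- bulk load, no indexes yet
    CREATE INDEX book_word_word_idx ON book_word (word);  -- one bulk index build afterwards
    ANALYZE book_word;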
Yes, but the implication is that large databases probably don't update
every row between backup periods.
On Thu, 17 May 2007, Ron Johnson wrote:
On 05/17/07 11:04, Jim C. Nasby wrote:
[snip]
Ultimately though, once your database gets past a certa
On 05/17/07 11:04, Jim C. Nasby wrote:
[snip]
>
> Ultimately though, once your database gets past a certain size, you
> really want to be using PITR and not pg_dump as your main recovery
> strategy.
But doesn't that just replay each transaction? It
On Thu, May 17, 2007 at 08:19:08AM -0500, Lee Keel wrote:
> I am restoring a 51GB backup file that has been running for almost 26 hours.
> There have been no errors and things are still working. I have turned fsync
> off, but that still did not speed things up. Can anyone provide me with the
> op
I am restoring a 51GB backup file that has been running for almost 26 hours.
There have been no errors and things are still working. I have turned fsync
off, but that still did not speed things up. Can anyone provide me with the
optimal settings for restoring a large database?
Thanks in advan
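Not advice from the thread, just a hedged sketch of the settings usually mentioned for a one-off large restore; names and values are illustrative and version-dependent:

    -- In postgresql.conf for the duration of the restore:
    --   maintenance_work_mem = 1GB    -- faster CREATE INDEX and foreign-key checks
    --   checkpoint_segments  = 64     -- pre-9.5; max_wal_size on newer releases
    --   autovacuum           = off    -- re-enable once the load is done
    -- Afterwards, refresh planner statistics:
    ANALYZE;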
On Thu, Aug 24, 2006 at 07:19:29PM +0200, Harald Armin Massa wrote:
> so with serial there are only 2.147.483.648 possible recordnumbers.
Actually 2147483647 using the default sequence start value of 1 and
going up to 2^31 - 1, the largest positive value a 32-bit integer
can hold. You could get t
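A short illustration of the two types under discussion (table names invented):

    CREATE TABLE t_small (id serial);     -- backed by integer: max 2147483647 (2^31 - 1)
    CREATE TABLE t_big   (id bigserial);  -- backed by bigint:  max 9223372036854775807 (2^63 - 1)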
On Thu, Aug 24, 2006 at 06:21:01PM +0200, Harald Armin Massa wrote:
> with a normal "serial", without "big", you can have
> 9.223.372.036.854.775.807 records individually numbered.
Not true; see the documentation:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
"The
Joe,
with a normal "serial", without "big", you can have 9.223.372.036.854.775.807 records individually numbered.
- Few tables but number of objects is tens-hundreds of thousands.
- less than 100 queries per second.
so you are talking about 10*100*1000 = 1.000.000, in words one million records? That is not
Hello,
I am designing a database for a web product with a large number of data records.
- Few tables but number of objects is tens-hundreds of thousands.
- less than 100 queries per second.
The application has basically tens of thousands of (user) accounts,
every account has associated hundreds of it
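For illustration only, a minimal sketch of the shape being described; every name here is invented:

    CREATE TABLE account (
        account_id serial PRIMARY KEY,
        name       text NOT NULL
    );
    CREATE TABLE item (
        item_id    serial PRIMARY KEY,                   -- ~10^6 rows total fits easily in serial
        account_id integer NOT NULL REFERENCES account,
        payload    text
    );
    CREATE INDEX item_account_idx ON item (account_id);  -- the per-account access path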
On Wed, Nov 10, 2004 at 04:10:43PM +0200, Alexander Antonakakis wrote:
> I will have to make sql queries in the form "select value from
> product_actions where who='someone' and where='somewhere' and maybe make
> also some calculations on these results. I already have made some
> indexes on th
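A hedged sketch of an index matching the quoted query shape; the names come from the post, and a column literally called "where" has to be double-quoted:

    CREATE INDEX product_actions_who_where_idx
        ON product_actions (who, "where");
    EXPLAIN
    SELECT value FROM product_actions
    WHERE who = 'someone' AND "where" = 'somewhere';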
I would like to ask the more experienced users of Postgres a
couple of questions I have about a db I manage with a lot of data. A lot of
data means something like 15.000.000 rows in a table. I will try to
describe the tables and what I will have to do on them :)
There is a table that has p
On Tue, 8 Feb 2000, Brian Piatkus wrote:
> I am constructing a large (by some standards) database where the largest table
> threatens to be about 6-10 Gb on a Linux system. I understand that postgresql
> splits the tables into manageable chunks & I have no problem with that as a
> workaround for
I am constructing a large (by some standards) database where the largest table
threatens to be about 6-10 Gb on a Linux system. I understand that postgresql
splits the tables into manageable chunks & I have no problem with that as a
workaround for the 2 GB fs limit.
My question concerns the
All,
Problem solved. We upgraded glibc as well, but didn't recompile
readline.
Well, at least the problem's fixed.
Myles
At 09:18 PM 11/10/99 +, Myles Chippendale wrote:
>All,
>
>We are having a few problems with a large database in postgres 6.5.2
>under Linux 2.2.13 (upgraded from RedHat
All,
We are having a few problems with a large database in postgres 6.5.2
under Linux 2.2.13 (upgraded from RedHat 6.0).
We have a table of around a million books and a number of tables to
store words in the book's titles for free text indexing. The books
file in base, for instance, is 559M.
1)