On 31 Aug 2010, at 6:44, yasin malli wrote:
Hi everyone.
I tried this command, ' pg_dump --compress=5 DBNAME ***.sql ', and then ' psql -f
***.sql -d DBNAME ',
but I got some errors because of the compression. How can I restore a compressed dump
file without getting any errors?
By using pg_restore
On 31 Aug 2010, at 8:17, yasin malli wrote:
Don't reply to just me, include the list.
If I had taken my dump file with the 'pg_dump -Ft' command, I would use 'pg_restore',
but I take my dump in the plain format so I can compress the data (the tar format
dump has no compression feature).
when I tried your
I tried it and it ran without any errors, but my table wasn't created, so the
problem persists.
The compression level isn't important: when I checked, it gave me the same
results with both 5 and 9.
Unfortunately, only a plain-old dump works correctly while restoring;
if the command contains any compress option, it
-----Original Message-----
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
How did you get OTHER to work? Did you define your own TypeHandler?
That's the only change we had to make; the Java code works as is. We only
changed the jdbcType from ORACLECURSOR to OTHER.
On Tue, 31 Aug 2010, Stavroula Gasparotto wrote:
Currently, only the B-tree, GiST and GIN index types support
multicolumn indexes.
What does this mean exactly if I'm trying to create a multicolumn GIN
index? Does this mean the index can contain one or more tsvector type
fields only, or can I
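For what it's worth, a multicolumn GIN index is not limited to tsvector columns; each column just needs a GIN-capable operator class (tsvector and arrays both qualify). A minimal sketch, with hypothetical table and column names:

```sql
-- Sketch only; table and column names are made up for illustration.
CREATE TABLE docs (
    title_tsv tsvector,
    body_tsv  tsvector,
    tags      text[]
);

-- Two tsvector columns in one GIN index:
CREATE INDEX docs_tsv_gin ON docs USING gin (title_tsv, body_tsv);

-- Mixing a tsvector column with an array column also works:
CREATE INDEX docs_mixed_gin ON docs USING gin (body_tsv, tags);
```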
yasin malli yasinma...@gmail.com writes:
Unfortunately, only plain-old dump works correctly while restoring.
if command contains any compress option, it won't work
--compress is normally used as an adjunct to -Fc.
I'm not real sure what you get if you specify it without that;
maybe a compressed
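To make the distinction concrete, here is a sketch of both usages (database names are made up; the restored database must already exist):

```shell
# With -Fc, --compress sets the compression level inside the
# custom-format archive, which pg_restore understands:
pg_dump -Fc --compress=5 mydb > mydb.dump
pg_restore -d mydb_restored mydb.dump

# Without -Fc, --compress gzips the plain SQL script. pg_restore
# cannot read that; decompress it and feed it to psql instead:
pg_dump --compress=5 mydb > mydb.sql.gz
gunzip -c mydb.sql.gz | psql -d mydb_restored
```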
Hi,
I have the problem that my database is not using the 'right' query plan in all
cases. Is there a way I can force that, and/or how should I tune the
table statistics?
I'm running an rsyslog database in PostgreSQL with millions of records
(firewall logging). The db schema is the so-called 'MonitorWare'
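The usual first steps are to raise the statistics target on the skewed columns and re-analyze, and, for diagnosis only, to disable a plan type per session. A sketch with hypothetical table and column names:

```sql
-- Raise the planner's statistics sample for a skewed column,
-- then refresh the statistics:
ALTER TABLE firewall_log ALTER COLUMN src_ip SET STATISTICS 1000;
ANALYZE firewall_log;

-- For testing only (not production): discourage sequential scans
-- in the current session and compare the resulting plan:
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM firewall_log WHERE src_ip = '10.0.0.1';
```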
I am looking for a way to obtain the words that are common amongst two
tsvector records.
The long workaround I know is to:
1) convert the contents of the tsvector fields to text, then find and
replace each single quote followed by a space and a single quote with a comma
character, then strip away the
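As an alternative to the text-mangling workaround, newer PostgreSQL releases provide tsvector_to_array(), which lets the lexemes be intersected directly. A sketch, with hypothetical table and column names:

```sql
-- Sketch only; 'docs' and 'doc_tsv' are made-up names, and
-- tsvector_to_array() requires a newer PostgreSQL release.
SELECT ARRAY(
    SELECT unnest(tsvector_to_array(a.doc_tsv))
    INTERSECT
    SELECT unnest(tsvector_to_array(b.doc_tsv))
) AS common_lexemes
FROM docs a, docs b
WHERE a.id = 1 AND b.id = 2;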
On Tue, 2010-08-31 at 00:50 -0700, yasin malli wrote:
I tried it and it ran without any errors, but my table wasn't created, so the
problem persists.
The compression level isn't important: when I checked, it gave me the
same results with both 5 and 9.
Unfortunately, only a plain-old dump works correctly
sunpeng wrote:
Are there any documents that describe the index mechanics? For example, how is
the B-tree stored on disk?
thanks!
peng
There is a README in the source tree:
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/access/nbtree/README?rev=1.22
and many
Excerpts from yasin malli's message of Tue Aug 31 00:44:36 -0400 2010:
Hi everyone.
I tried this command, ' pg_dump --compress=5 DBNAME ***.sql ', and then ' psql -f
***.sql -d DBNAME ',
but I got some errors because of the compression. How can I restore a compressed
dump file without getting any errors?
Hi all -
Is there a way I can see the table-sequence dependency information,
i.e. which sequences are being used by which table?
thanks for the help
On 31 August 2010 18:02, akp geek akpg...@gmail.com wrote:
Hi all -
Is there a way I can see the table-sequence dependency information,
i.e. which sequences are being used by which table?
thanks for the help
Take a look at the post "Finding orphaned sequences" on this blog:
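For sequences owned by serial columns, the dependency is recorded in pg_depend, so a catalog query can list which sequence belongs to which table and column:

```sql
-- Lists each serial-owned sequence with its table and column.
SELECT d.refobjid::regclass AS table_name,
       a.attname            AS column_name,
       d.objid::regclass    AS sequence_name
FROM pg_depend d
JOIN pg_class c     ON c.oid = d.objid AND c.relkind = 'S'
JOIN pg_attribute a ON a.attrelid = d.refobjid
                   AND a.attnum = d.refobjsubid
WHERE d.deptype = 'a';
```

Sequences used only via an explicit nextval() in a DEFAULT (without OWNED BY) will not show up here, which is what the "orphaned sequences" post is about.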
I tried to restore one of our db backups to 3 different machines today.
After restore, all machines reported larger on-disk size, and also
psql's \l+ confirmed that.
Here is the live machine:
On-disk size: 84 GB
Size reported by psql: 79 GB
Backup machine 1:
On-disk size: 162 GB
Size reported
I just got my hands on mysql (5.0.something) and it does not cache the
scalar subquery result.
So... now I'm completely puzzled whether this is a bug, a desired result or
just a loosely standardized thing.
Help anyone?
On Fri, Aug 27, 2010 at 5:41 PM, Vyacheslav Kalinin v...@mgcp.com wrote:
Hi,
Vyacheslav Kalinin v...@mgcp.com writes:
I just got my hands on mysql (5.0.something) and it does not cache the
scalar subquery result.
So... now I'm completely puzzled whether this is a bug, a desired result or
just a loosely standardized thing.
It's loosely standardized.
AFAICS, the spec
Let me stress that this is not a bug in PostgreSQL; if anything at
all, it's only a lack of a stupid feature.
I'm working on a project for a client where I have a table for arbitrary
categories to be applied to their data, and they need to be able to set
the order in which the categories
2010/8/31 Devrim GÜNDÜZ dev...@gunduz.org:
I tried to restore one of our db backups to 3 different machines today.
After restore, all machines reported larger on-disk size, and also
psql's \l+ confirmed that.
Here is the live machine:
On-disk size: 84 GB
Size reported by psql: 79 GB
On Tue, Aug 31, 2010 at 07:56:23PM -0400, Raymond C. Rodgers wrote:
Let me stress that this is not a bug in PostgreSQL; if anything at
all, it's only a lack of a stupid feature.
PostgreSQL's version involves UPDATE ... FROM. Use an ORDER BY in the
FROM clause like this:
UPDATE mydemo SET
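One way to renumber rows in a chosen order is UPDATE ... FROM over a row_number() subquery (this is a sketch, not necessarily the exact trick meant above; table and column names are hypothetical, and window functions require 8.4 or later):

```sql
-- Renumber sort_order to follow alphabetical category names.
UPDATE categories c
SET sort_order = s.rn
FROM (SELECT id, row_number() OVER (ORDER BY name) AS rn
      FROM categories) s
WHERE c.id = s.id;
```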
On Tue, Aug 31, 2010 at 7:56 PM, Raymond C. Rodgers sinful...@gmail.com wrote:
Let me stress that this is not a bug in PostgreSQL; if anything at all,
it's only a lack of a stupid feature.
I'm working on a project for a client where I have a table for arbitrary
categories to be applied to
On Tue, 2010-08-31 at 20:17 -0400, Merlin Moncure wrote:
This is where the interesting thing happens: On MySQL the query actually
works as intended, but it doesn't on PostgreSQL. As I said, I'm sure this is
not a bug in PostgreSQL, but the lack of a stupid user trick. While my
project is
On 8/31/2010 8:17 PM, Merlin Moncure wrote:
On Tue, Aug 31, 2010 at 7:56 PM, Raymond C. Rodgers sinful...@gmail.com wrote:
Let me stress that this is not a bug in PostgreSQL; if anything at all,
it's only a lack of a stupid feature.
I'm working on a project for a client where I have a table
I have a notebook that I am using as a server for testing purposes, and
it has the official ODBC driver installed. I can access this and use it
to connect to PostgreSQL.
On a second machine on the same network I also have the same ODBC
driver installed.
The behaviour of this one is quite