On 04/25/2011 02:13 PM, Seb wrote:
A query such as:
SELECT 'a' || 'b' || NULL;
returns a NULL. How can I ensure I get 'ab' in the result? I'm trying
to concatenate columns and one of them might have NULL values, which I
simply want to ignore.
SELECT 'a' || 'b' || coalesce(NULL, '');
--
S
Using a CASE construct is good when you need non-string output, but COALESCE
is functionally equivalent and much less verbose when supplying a default
string for NULL values.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Brent Wood
Se
Hi,
Thanks for all the helpful suggestions everyone!
Cheers,
Seb
On Tue, 26 Apr 2011 10:32:59 +1200,
"Brent Wood" wrote:
> Hi Seb, Use CASE to change nulls to empty strings (or a placeholder)
> as below.
> See:
> http://www.postgresql.org/docs/9.0/static/functions-conditional.html
> if you
Hi Seb,
Use CASE to change nulls to empty strings (or a placeholder) as below.
See: http://www.postgresql.org/docs/9.0/static/functions-conditional.html
if you want a placeholder in the result to indicate the presence of a null, try
the second SQL:
Seb wrote:
> A query such as:
>
> SELECT 'a' || 'b' || NULL;
>
> returns a NULL. How can I ensure I get 'ab' in the result? I'm trying
> to concatenate columns and one of them might have NULL values, which I
> simply want to ignore.
COALESCE is your friend:
select 'a' || 'b' || COALESCE(col
Hi,
A query such as:
SELECT 'a' || 'b' || NULL;
returns a NULL. How can I ensure I get 'ab' in the result? I'm trying
to concatenate columns and one of them might have NULL values, which I
simply want to ignore.
Cheers,
--
Seb
--
Sent via pgsql-general mailing list (pgsql-general@postgre
On Tuesday, April 26, 2011, Tomas Vondra wrote:
> Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
>> Sorry, spoke too soon.
>>
>> I can COPY individual chunks to files. Did that by year, and at least
>> the dumping worked.
>>
>> Now I need to pull the data in at the destination server.
>>
>> If I CO
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Greg Smith
> Sent: Monday, April 25, 2011 4:23 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] 10 missing features
>
> On 04/25/2011 10:48 AM, Andrew S
On 04/25/2011 10:48 AM, Andrew Sullivan wrote:
You can see this in certain items in the top 10. Three, four, five,
seven, maybe eight, and ten all seemed to me to be things I've
actually done before, but not using something directly inside
Postgres.
The idea that something must ship in th
Dne 25.4.2011 18:16, Phoenix Kiula napsal(a):
> Sorry, spoke too soon.
>
> I can COPY individual chunks to files. Did that by year, and at least
> the dumping worked.
>
> Now I need to pull the data in at the destination server.
>
> If I COPY each individual file back into the table, it works. S
Phoenix Kiula writes:
> I did a COPY FROM and populated the entire table. In my hard disk, the
> space consumption went up by 64GB.
> Yet, when I do a "SELECT * FROM mytable LIMIT 1" the entire DB
> crashes. There is no visible record.
There should certainly be a "visible record" somewhere, ie,
Dne 25.4.2011 20:40, Phoenix Kiula napsal(a):
>
> I did a COPY FROM and populated the entire table. In my hard disk, the
> space consumption went up by 64GB.
So you have dumped the table piece by piece, it worked, and now you have
a complete copy of the table? All the rows?
> Yet, when I do a "S
On Tue, Apr 26, 2011 at 1:56 AM, Tomas Vondra wrote:
> Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
>> On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
>>
>>> If I COPY each individual file back into the table, it works. Slowly,
>>> but seems to work. I tried to combine all the files into one go,
On 04/25/11 11:14 AM, SUBHAM ROY wrote:
While loading data from shape files, the default encoding type is
UTF8. It reports an error "couldn't convert to UTF8 ... change the
encoding to latin1 ... etc."
What reported this?
So my question is, if one table say T1 is loaded using UTF8 and the
While loading data from shape files, the default encoding type is UTF8. It
reports an error "couldn't convert to UTF8 ... change the encoding to latin1
... etc."
So my question is, if one table say T1 is loaded using UTF8 and the other
say T2 is loaded using say LATIN1, then does it impose any pro
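As background for the question above: in PostgreSQL the encoding is a property of the database, not of individual tables, so T1 and T2 cannot be stored in different encodings. What varies per session is the client encoding used to convert incoming data. A minimal sketch (session-level, no table names assumed):

```sql
-- The server encoding is fixed per database at creation time:
SHOW server_encoding;

-- Declare that the data the client is about to send is LATIN1, so the
-- server converts it to the database encoding during COPY/INSERT:
SET client_encoding = 'LATIN1';
```

shp2pgsql also accepts a `-W <encoding>` switch to declare the shapefile's attribute encoding, which may be the more direct fix for the loader error quoted above.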
Dne 25.4.2011 19:31, Alban Hertroys napsal(a):
> On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
>
>> If I COPY each individual file back into the table, it works. Slowly,
>> but seems to work. I tried to combine all the files into one go, then
>> truncate the table, and pull it all in in one go (1
On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
> If I COPY each individual file back into the table, it works. Slowly,
> but seems to work. I tried to combine all the files into one go, then
> truncate the table, and pull it all in in one go (130 million rows or
> so) but this time it gave the sam
Jean Pereira writes:
> [ different sorting results in 8.4.3 and 9.0.3 ]
This probably means you have a different LC_COLLATE setting in the 9.0
installation than the 8.4 one.
regards, tom lane
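A quick way to compare the two installations (in these versions the setting is cluster-wide and fixed at initdb time):

```sql
-- Run this on both the 8.4 and the 9.0 cluster; a difference
-- (e.g. 'C' vs 'en_US.UTF-8') changes how accented names sort:
SHOW lc_collate;
```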
I'm having a problem with the sorting of names with accents.
In PostgreSQL 8.4.3, I run the following command:
SELECT name FROM person ORDER BY name
And I get the following result:
name
---------------------
Abel Clara Valentin
Ab
On Mon, Apr 25, 2011 at 9:19 PM, Phoenix Kiula wrote:
> On Fri, Apr 22, 2011 at 8:35 PM, wrote:
>>> On Fri, Apr 22, 2011 at 8:20 PM, wrote:
> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
> In the pg_dumpall backup process, I get this error. Does this help?
>
Well, not really
Joshua Tolley wrote:
> On Thu, Apr 14, 2011 at 07:33:17PM -0700, Bosco Rama wrote:
>> 1) a function that removes/creates the views, etc in the 'xyz' schema
>>that gets called as part of the replacement process for schema 'abc'
>>
>> 2) replacing the views, etc. with functions that r
On Mon, Apr 25, 2011 at 10:53 AM, Raghavendra <
raghavendra@enterprisedb.com> wrote:
> 1. Send all the data to .csv file with COPY TO command of the base
> table(which will undergo for partition).
> 2. Create the partition setup with TRIGGER's
> 3. Use COPY FROM command for inserting data into
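The three steps above, in outline (the table name, file path, and trigger details are illustrative, not from the thread):

```sql
-- 1. Dump the existing base table's rows to a CSV file:
COPY measurements TO '/tmp/measurements.csv' WITH CSV;

-- 2. Create the child tables and the BEFORE INSERT trigger on the
--    parent that routes each row to the right child (trigger body
--    elided; see the partitioning chapter of the manual).

-- 3. Reload through the parent; COPY fires row-level triggers, so
--    each row lands in its partition:
COPY measurements FROM '/tmp/measurements.csv' WITH CSV;
```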
-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of mark
Sent: Monday, April 25, 2011 10:30 AM
To: 'Linos'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] 10 missing features
> -----Original Message-----
> From: pgsq
On Mon, Apr 25, 2011 at 7:40 PM, Vick Khera wrote:
> On Mon, Apr 25, 2011 at 6:46 AM, Phoenix Kiula wrote:
>
>> I could create a new parent table with child tables, and then INSERT
>> all these millions of rows to put them into the right partition. But
>> is that recommended?
>>
>
> I did this tw
On Mon, Apr 25, 2011 at 08:29:43AM -0600, mark wrote:
> One or two of his points are on my list as well, but as far as a TOP 10
> missing features that PG "needs" his probably aren't anywhere close to what
> the majority of people are in need of.
When I had to hire Postgres DBAs, I hated hiring p
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Linos
> Sent: Monday, April 25, 2011 2:42 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] 10 missing features
>
> Hi all,
> only want to link this
On Sunday, April 24, 2011 8:01:23 pm Hyung Joo Lee wrote:
> Hi all.
>
> I'm getting an error with a psql pipeline.
>
> I run the following command in a shell script:
>
> "shp2pgsql -d -w -I -s 987987(custom srid) test.shp test | psql -h testhost
> -d testdb"
>
> The problem is, when I check the table l
On Mon, Apr 25, 2011 at 6:46 AM, Phoenix Kiula wrote:
> I could create a new parent table with child tables, and then INSERT
> all these millions of rows to put them into the right partition. But
> is that recommended?
>
I did this twice (several years ago). Basically, you create your partitions
On Fri, Apr 22, 2011 at 8:35 PM, wrote:
>> On Fri, Apr 22, 2011 at 8:20 PM, wrote:
On Fri, Apr 22, 2011 at 7:07 PM, wrote:
In the pg_dumpall backup process, I get this error. Does this help?
>>>
>>> Well, not really - it's just another incarnation of the problem we've
>>> alrea
On Sun, Apr 24, 2011 at 4:16 PM, Tom Lane wrote:
> Phil Couling writes:
> > I'm looking for a way to extract the data from a PostgreSQL 8.3.14
> > database (cluster) that was built using an an ARM/Linux server. The
> > ...
> > Are there any tools for recovering data from a database built with a
INSERT INTO child SELECT * FROM parent WHERE <partition condition>;
Once you have fully allocated all the records on the parent table you can
then:
TRUNCATE parent;
Then install the triggers.
The only real downside is that the queries are going to take forever to run.
If you can naturally sub-divide the child tab
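A sketch of that migration for a date-partitioned table (table names and conditions are invented for illustration):

```sql
-- Copy existing rows from the parent into each child by its
-- partition condition; ONLY keeps the scan off the children:
INSERT INTO logs_2010 SELECT * FROM ONLY logs WHERE ts <  '2011-01-01';
INSERT INTO logs_2011 SELECT * FROM ONLY logs WHERE ts >= '2011-01-01';

-- Once every row lives in a child, empty just the parent
-- (a plain TRUNCATE would empty the children too):
TRUNCATE ONLY logs;

-- Finally, install the INSERT-routing trigger on logs.
```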
2011/4/25 Phoenix Kiula :
> Hi.
>
> The partitioning documentation in PG is very clear on how to partition
> a new table. Create child tables, and have triggers that manage
> INSERT, UPDATE and DELETE commands.
>
> How about doing this with existing massive tables? (Over 120 million rows)
>
> I cou
Hi.
The partitioning documentation in PG is very clear on how to partition
a new table. Create child tables, and have triggers that manage
INSERT, UPDATE and DELETE commands.
How about doing this with existing massive tables? (Over 120 million rows)
I could create a new parent table with child t
Hi all,
I just want to link this blog post:
http://blog.kimiensoftware.com/2011/04/top-10-missing-postgresql-features
I think he may have some good points.
Miguel Angel.