Re: [GENERAL] How do I bump a row to the front of sort efficiently

2015-02-03 Thread Sam Saffron
Note: I still consider this a bug/missing feature of sorts, since the planner could do better here and there is no clean way to structure the query for good performance, which is why I erroneously cross-posted this to hackers initially: # create table testing(id serial primary key, dat
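For readers skimming the digest, the pattern under discussion can be sketched as follows, assuming a test table shaped like the one quoted (the column names are illustrative, not from the original post):

  create table testing (id serial primary key, data int);
  -- a boolean sort key bumps the chosen row to the front: true sorts
  -- before false under DESC, and the remaining rows keep the normal order
  select * from testing order by (id = 1000) desc, data limit 30;

The complaint in the thread is that the planner has no good plan for this hybrid ordering, so the query tends to fall back to a full sort.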

Re: [GENERAL] How do I bump a row to the front of sort efficiently

2015-02-03 Thread BladeOfLight16
On Tue, Feb 3, 2015 at 9:33 PM, BladeOfLight16 wrote: > This is why ORMs are bad. They make hard problems *much* harder, and the only benefit is that they maybe make easy problems a little quicker. The cost/savings is *heavily* skewed toward the cost, since there's no upper bound on the cos

Re: [GENERAL] How do I bump a row to the front of sort efficiently

2015-02-03 Thread BladeOfLight16
On Mon, Feb 2, 2015 at 1:16 AM, Sam Saffron wrote: > However, the contortions on the above query make it very un-ORM friendly as I would need to define a view for it but would have no clean way to pass limits and offsets in. This is why ORMs are bad. They make hard problems *much* harder,
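A conventional workaround for the view limitation mentioned above is to wrap the query in a set-returning function, which most ORMs can call like a table. A sketch under assumed names (the topics table, bumped_at column, and bumped id are hypothetical, not from the thread):

  create table topics (id serial primary key, bumped_at timestamptz);
  create function topics_bumped_first(bump_id int, lim int, off int)
  returns setof topics
  language sql stable as $$
    select * from topics
    order by (id = bump_id) desc, bumped_at desc
    limit lim offset off;
  $$;
  -- usage: limit and offset pass through as ordinary arguments
  select * from topics_bumped_first(1000, 30, 0);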

Re: [GENERAL] oracle to postgres

2015-02-03 Thread BladeOfLight16
> BEGIN
>   EXECUTE IMMEDIATE 'DROP TABLE CONTAINER';
> EXCEPTION
>   WHEN OTHERS THEN
>     IF SQLCODE != -942 THEN
>       RAISE;
>     END IF;
> END;

Jim nailed it. In PostgreSQL, this is just DROP TABLE IF EXISTS CONTAINER; One line. No dynamic SQL, exception block, or even

Re: [GENERAL] array in a store procedure in C

2015-02-03 Thread Jim Nasby
On 2/3/15 7:03 AM, holger.friedrich-fa-triva...@it.nrw.de wrote: On Tuesday, February 03, 2015 3:58 AM, Jim Nasby wrote: Note that the recursive grep starts at the current directory, so make sure you're actually in the pgsql source code when you use it.
  cat ~/bin/pg_grep
  #!/bin/sh
  grep -r "

Re: [GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Bill Moran
On Tue, 3 Feb 2015 14:48:17 -0500 Adam Hooper wrote: > On Tue, Feb 3, 2015 at 2:29 PM, Bill Moran wrote: > > On Tue, 3 Feb 2015 14:17:03 -0500 > > Adam Hooper wrote: > > > > My recommendation here would be to use Slony to replicate the data to a new server, then switch to the new server onc

Re: [GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Adam Hooper
On Tue, Feb 3, 2015 at 2:29 PM, Bill Moran wrote: > On Tue, 3 Feb 2015 14:17:03 -0500 > Adam Hooper wrote: > > My recommendation here would be to use Slony to replicate the data to a new server, then switch to the new server once the data has synchronized. Looks exciting. But then I notice: "S

Re: [GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Bill Moran
On Tue, 3 Feb 2015 14:17:03 -0500 Adam Hooper wrote: > On Tue, Feb 3, 2015 at 12:58 PM, Bill Moran wrote: > > On Tue, 3 Feb 2015 10:53:11 -0500 > > Adam Hooper wrote: > >> This plan won't work: Step 2 will be too slow because pg_largeobject still takes 266GB. We tested `VACUUM FULL pg_

Re: [GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Adam Hooper
On Tue, Feb 3, 2015 at 12:58 PM, Bill Moran wrote: > On Tue, 3 Feb 2015 10:53:11 -0500 > Adam Hooper wrote: >> This plan won't work: Step 2 will be too slow because pg_largeobject still takes 266GB. We tested `VACUUM FULL pg_largeobject` on our staging database: it took two hours, during
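For background on the cleanup being discussed: space for large objects deleted with lo_unlink() becomes reusable inside pg_largeobject, but only a full rewrite such as VACUUM FULL returns it to the operating system. A minimal sketch (the OID is hypothetical):

  -- unlink one large object once its contents live elsewhere (e.g. S3);
  -- 12345 stands in for a real large-object OID
  select lo_unlink(12345);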

Re: [GENERAL] postgres cust types

2015-02-03 Thread Adrian Klaver
On 02/03/2015 07:50 AM, Ramesh T wrote: Am CCing the list. CREATE TYPE order_list AS (order_id bigint); I created the above type. Not sure that the above does anything. And I am using order_list, trying to create a table type (datatype): *create or replace type order_list_table as table of order
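For context on the Oracle-to-Postgres mismatch in this thread: PostgreSQL has no CREATE TYPE ... AS TABLE OF. The closest native equivalent is an array of the composite type. A sketch (table and column names are illustrative):

  create type order_list as (order_id bigint);
  -- instead of a nested table type, store an array of the composite type
  create table customer_orders (cust_id int, orders order_list[]);
  insert into customer_orders
    values (1, array[row(100)::order_list, row(101)::order_list]);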

Re: [GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Bill Moran
On Tue, 3 Feb 2015 10:53:11 -0500 Adam Hooper wrote: > Hi list, > We run a website. We once stored all sorts of files in pg_largeobject, which grew to 266GB. This is on an m1.large on Amazon EC2 on a single, magnetic, non-provisioned-IO volume. In that context, 266GB is a lot. > We've s

[GENERAL] VACUUM FULL pg_largeobject without (much) downtime?

2015-02-03 Thread Adam Hooper
Hi list, We run a website. We once stored all sorts of files in pg_largeobject, which grew to 266GB. This is on an m1.large on Amazon EC2 on a single, magnetic, non-provisioned-IO volume. In that context, 266GB is a lot. We've since moved all but 60GB of that data to S3. We plan to reduce that to
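For readers following the thread, the size check and the blocking step under debate look like this (standard commands only; the two-hour figure quoted elsewhere in the thread comes from the posters' own staging test):

  -- current on-disk footprint of pg_largeobject, indexes included
  select pg_size_pretty(pg_total_relation_size('pg_largeobject'));
  -- reclaims the space, but holds an ACCESS EXCLUSIVE lock for the whole
  -- rewrite, which is the downtime problem this thread is about
  vacuum full pg_largeobject;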

Re: [GENERAL] postgres cust types

2015-02-03 Thread Adrian Klaver
On 02/03/2015 04:49 AM, Ramesh T wrote: Hi, I created a type on Postgres: CREATE TYPE order_list AS (order_id bigint); it works fine. Then I try to create another table type using the above created type, like: --create or replace type suborder_list_table as table

Re: [GENERAL] dbmsscheduler

2015-02-03 Thread Adrian Klaver
On 02/03/2015 05:16 AM, Ramesh T wrote: Hi, how do I run dbms_scheduler.create_job in Postgres, and is it available in Postgres? Postgres != Oracle. There is no dbms_scheduler in Postgres. You can use cron or pgAgent: http://www.pgadmin.org/docs/1.20/pgagent.html As has been pointed about
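Since the advice above is "use cron or pgAgent," a minimal cron sketch of the same idea (the database name and function are hypothetical placeholders):

  # hypothetical crontab entry: run a maintenance function nightly at 02:00
  0 2 * * * psql -d mydb -c "SELECT my_nightly_job()"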

Re: [GENERAL] Ransomware article

2015-02-03 Thread Magnus Hagander
On Tue, Feb 3, 2015 at 3:33 PM, Gordon Haverland <ghave...@materialisations.com> wrote: > The Register is running an article about someone breaking into a database, taking control of the encryption key, and 6 or so months later demanding ransom from the owner of the database. > http://www.thereg

[GENERAL] Ransomware article

2015-02-03 Thread Gordon Haverland
The Register is running an article about someone breaking into a database, taking control of the encryption key, and 6 or so months later demanding ransom from the owner of the database. http://www.theregister.co.uk/2015/02/03/web_ransomware_scum_now_lay_waste_to_your_backups/ Anyone want to comment on

Re: [GENERAL] "Ungroup" data for import into PostgreSQL

2015-02-03 Thread George Weaver
Hi Adrian, From: "Adrian Klaver" Subject: Re: [GENERAL] "Ungroup" data for import into PostgreSQL On 01/15/2015 04:56 PM, Jim Nasby wrote: On 1/15/15 9:43 AM, George Weaver wrote: Hi List, I need to import data from a large Excel spreadsheet into a PostgreSQL table. I have a program that

Re: [GENERAL] "Ungroup" data for import into PostgreSQL

2015-02-03 Thread George Weaver
Sorry for the late reply... life interfered... From: "Jim Nasby" On 1/15/15 9:43 AM, George Weaver wrote: Hi List, I need to import data from a large Excel spreadsheet into a PostgreSQL table. I have a program that uses ODBC to connect to Excel and extract data using SQL queries. The p
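The approach in this thread queries Excel over ODBC. A common alternative, sketched here on the assumption that the sheet can first be exported to CSV (the path and table are hypothetical), is a staging-table load with COPY, followed by SQL cleanup:

  -- load raw rows into a staging table, then transform with SQL
  create table staging_import (col1 text, col2 text, col3 text);
  copy staging_import from '/path/to/export.csv' with (format csv, header true);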

Re: [GENERAL] array in a store procedure in C

2015-02-03 Thread Holger.Friedrich-Fa-Trivadis
On Tuesday, February 03, 2015 3:58 AM, Jim Nasby wrote:
> Note that the recursive grep starts at the current directory, so make sure you're actually in the pgsql source code when you use it.
> cat ~/bin/pg_grep
> #!/bin/sh
>
> grep -r "$*" * | grep -iv TAGS: | grep -v 'Binary file' | grep -v '.